Test Report: KVM_Linux_containerd 20591

36ed4f4062413474f7b114ebc11d0835e79e9d46:2025-04-03:38987

Failed tests (1/328)

Order  Failed test  Duration (s)
286 TestNoKubernetes/serial/StartNoArgs 39.71
TestNoKubernetes/serial/StartNoArgs (39.71s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-901906 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-901906 --driver=kvm2  --container-runtime=containerd: signal: killed (39.455926668s)

-- stdout --
	* [NoKubernetes-901906] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-80797/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-80797/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-901906
	* Restarting existing kvm2 VM for "NoKubernetes-901906" ...

-- /stdout --
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-linux-amd64 start -p NoKubernetes-901906 --driver=kvm2  --container-runtime=containerd" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-901906 -n NoKubernetes-901906
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-901906 -n NoKubernetes-901906: exit status 6 (252.719027ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0403 19:17:33.486127  126784 status.go:458] kubeconfig endpoint: get endpoint: "NoKubernetes-901906" does not appear in /home/jenkins/minikube-integration/20591-80797/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-901906" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (39.71s)


Passed tests (288/328)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 33.59
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.2/json-events 20.23
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.06
18 TestDownloadOnly/v1.32.2/DeleteAll 0.13
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.6
22 TestOffline 109
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 264.02
29 TestAddons/serial/Volcano 43.95
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 11.51
35 TestAddons/parallel/Registry 18.7
36 TestAddons/parallel/Ingress 21.27
37 TestAddons/parallel/InspektorGadget 10.98
38 TestAddons/parallel/MetricsServer 5.77
40 TestAddons/parallel/CSI 62.79
41 TestAddons/parallel/Headlamp 19.8
42 TestAddons/parallel/CloudSpanner 5.6
43 TestAddons/parallel/LocalPath 56.08
44 TestAddons/parallel/NvidiaDevicePlugin 6.6
45 TestAddons/parallel/Yakd 11.98
47 TestAddons/StoppedEnableDisable 91.25
48 TestCertOptions 52.64
49 TestCertExpiration 261.76
51 TestForceSystemdFlag 51.14
52 TestForceSystemdEnv 52.58
54 TestKVMDriverInstallOrUpdate 8.12
58 TestErrorSpam/setup 44.56
59 TestErrorSpam/start 0.35
60 TestErrorSpam/status 0.76
61 TestErrorSpam/pause 1.59
62 TestErrorSpam/unpause 1.82
63 TestErrorSpam/stop 5.18
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 87.09
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 43.53
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.72
75 TestFunctional/serial/CacheCmd/cache/add_local 2.72
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.52
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 47.67
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.32
86 TestFunctional/serial/LogsFileCmd 1.34
87 TestFunctional/serial/InvalidService 4.03
89 TestFunctional/parallel/ConfigCmd 0.37
90 TestFunctional/parallel/DashboardCmd 19.13
91 TestFunctional/parallel/DryRun 0.3
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 0.81
97 TestFunctional/parallel/ServiceCmdConnect 10.53
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 28.94
101 TestFunctional/parallel/SSHCmd 0.48
102 TestFunctional/parallel/CpCmd 1.34
103 TestFunctional/parallel/MySQL 28.87
104 TestFunctional/parallel/FileSync 0.23
105 TestFunctional/parallel/CertSync 1.43
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
113 TestFunctional/parallel/License 1.48
114 TestFunctional/parallel/ServiceCmd/DeployApp 21.25
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
119 TestFunctional/parallel/ImageCommands/ImageBuild 6.12
120 TestFunctional/parallel/ImageCommands/Setup 2.69
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.21
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.81
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.68
124 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.46
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
126 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
127 TestFunctional/parallel/ProfileCmd/profile_list 0.34
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.57
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.45
131 TestFunctional/parallel/MountCmd/any-port 10.69
132 TestFunctional/parallel/Version/short 0.05
133 TestFunctional/parallel/Version/components 0.57
134 TestFunctional/parallel/ServiceCmd/List 1.34
135 TestFunctional/parallel/MountCmd/specific-port 1.66
136 TestFunctional/parallel/ServiceCmd/JSONOutput 1.25
137 TestFunctional/parallel/MountCmd/VerifyCleanup 1.4
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
139 TestFunctional/parallel/ServiceCmd/Format 0.28
140 TestFunctional/parallel/ServiceCmd/URL 0.33
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 195.67
161 TestMultiControlPlane/serial/DeployApp 8.55
162 TestMultiControlPlane/serial/PingHostFromPods 1.18
163 TestMultiControlPlane/serial/AddWorkerNode 57.34
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
166 TestMultiControlPlane/serial/CopyFile 13.05
167 TestMultiControlPlane/serial/StopSecondaryNode 91.48
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
169 TestMultiControlPlane/serial/RestartSecondaryNode 42.79
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 483.34
172 TestMultiControlPlane/serial/DeleteSecondaryNode 6.88
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
174 TestMultiControlPlane/serial/StopCluster 183.13
175 TestMultiControlPlane/serial/RestartCluster 121.9
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
177 TestMultiControlPlane/serial/AddSecondaryNode 73.4
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
182 TestJSONOutput/start/Command 56.82
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.7
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.62
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 6.56
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.2
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 94.66
214 TestMountStart/serial/StartWithMountFirst 31.15
215 TestMountStart/serial/VerifyMountFirst 0.39
216 TestMountStart/serial/StartWithMountSecond 29.73
217 TestMountStart/serial/VerifyMountSecond 0.38
218 TestMountStart/serial/DeleteFirst 0.67
219 TestMountStart/serial/VerifyMountPostDelete 0.37
220 TestMountStart/serial/Stop 1.29
221 TestMountStart/serial/RestartStopped 26.2
222 TestMountStart/serial/VerifyMountPostStop 0.38
225 TestMultiNode/serial/FreshStart2Nodes 113.24
226 TestMultiNode/serial/DeployApp2Nodes 7.34
227 TestMultiNode/serial/PingHostFrom2Pods 0.79
228 TestMultiNode/serial/AddNode 51.84
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.57
231 TestMultiNode/serial/CopyFile 7.21
232 TestMultiNode/serial/StopNode 2.29
233 TestMultiNode/serial/StartAfterStop 39.08
234 TestMultiNode/serial/RestartKeepsNodes 314.78
235 TestMultiNode/serial/DeleteNode 2.19
236 TestMultiNode/serial/StopMultiNode 181.87
237 TestMultiNode/serial/RestartMultiNode 93.21
238 TestMultiNode/serial/ValidateNameConflict 45.96
243 TestPreload 250.9
245 TestScheduledStopUnix 118.98
249 TestRunningBinaryUpgrade 203.67
251 TestKubernetesUpgrade 134.03
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
263 TestStartStop/group/old-k8s-version/serial/FirstStart 153.41
264 TestNoKubernetes/serial/StartWithK8s 96
265 TestNoKubernetes/serial/StartWithStopK8s 30.97
273 TestNetworkPlugins/group/false 3.11
277 TestNoKubernetes/serial/Start 60.78
278 TestStartStop/group/old-k8s-version/serial/DeployApp 11.53
279 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.55
280 TestStartStop/group/old-k8s-version/serial/Stop 91.54
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
282 TestNoKubernetes/serial/ProfileList 71.09
283 TestNoKubernetes/serial/Stop 1.42
284 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
285 TestStartStop/group/old-k8s-version/serial/SecondStart 396.1
288 TestPause/serial/Start 75.16
290 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.61
291 TestPause/serial/SecondStartNoReconfiguration 42.65
292 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.26
293 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
294 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.09
295 TestPause/serial/Pause 0.66
296 TestPause/serial/VerifyStatus 0.25
297 TestPause/serial/Unpause 0.66
298 TestPause/serial/PauseAgain 0.77
299 TestPause/serial/DeletePaused 0.77
300 TestPause/serial/VerifyDeletedResources 33.25
302 TestStartStop/group/embed-certs/serial/FirstStart 82.17
304 TestStartStop/group/no-preload/serial/FirstStart 101.85
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 319.38
307 TestStartStop/group/embed-certs/serial/DeployApp 12.31
308 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
309 TestStartStop/group/embed-certs/serial/Stop 91
310 TestStartStop/group/no-preload/serial/DeployApp 12.28
311 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
312 TestStartStop/group/no-preload/serial/Stop 91.02
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
314 TestStartStop/group/embed-certs/serial/SecondStart 310.93
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
316 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
317 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
318 TestStartStop/group/old-k8s-version/serial/Pause 2.42
319 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
320 TestStartStop/group/no-preload/serial/SecondStart 339.49
321 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12
322 TestStoppedBinaryUpgrade/Setup 3.8
323 TestStoppedBinaryUpgrade/Upgrade 125.88
324 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
325 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
326 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.69
328 TestStartStop/group/newest-cni/serial/FirstStart 61.03
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1
331 TestStartStop/group/newest-cni/serial/Stop 2.32
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
333 TestStartStop/group/newest-cni/serial/SecondStart 34.39
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.46
337 TestStartStop/group/newest-cni/serial/Pause 2.95
338 TestNetworkPlugins/group/auto/Start 55.91
339 TestStoppedBinaryUpgrade/MinikubeLogs 1.17
340 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
341 TestNetworkPlugins/group/flannel/Start 96.42
342 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
343 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
344 TestStartStop/group/embed-certs/serial/Pause 2.53
345 TestNetworkPlugins/group/enable-default-cni/Start 126.95
346 TestNetworkPlugins/group/auto/KubeletFlags 0.3
347 TestNetworkPlugins/group/auto/NetCatPod 12.33
348 TestNetworkPlugins/group/auto/DNS 0.18
349 TestNetworkPlugins/group/auto/Localhost 0.15
350 TestNetworkPlugins/group/auto/HairPin 0.14
351 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7.2
352 TestNetworkPlugins/group/bridge/Start 89.86
353 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
354 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
355 TestStartStop/group/no-preload/serial/Pause 3.09
356 TestNetworkPlugins/group/calico/Start 95.02
357 TestNetworkPlugins/group/flannel/ControllerPod 6
358 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
359 TestNetworkPlugins/group/flannel/NetCatPod 10.24
360 TestNetworkPlugins/group/flannel/DNS 0.2
361 TestNetworkPlugins/group/flannel/Localhost 0.13
362 TestNetworkPlugins/group/flannel/HairPin 0.13
363 TestNetworkPlugins/group/kindnet/Start 71.64
364 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
365 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
366 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
367 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
368 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
369 TestNetworkPlugins/group/bridge/KubeletFlags 0.47
370 TestNetworkPlugins/group/bridge/NetCatPod 11.28
371 TestNetworkPlugins/group/custom-flannel/Start 74.57
372 TestNetworkPlugins/group/bridge/DNS 0.17
373 TestNetworkPlugins/group/bridge/Localhost 0.14
374 TestNetworkPlugins/group/bridge/HairPin 0.15
375 TestNetworkPlugins/group/calico/ControllerPod 6.01
376 TestNetworkPlugins/group/calico/KubeletFlags 0.57
377 TestNetworkPlugins/group/calico/NetCatPod 9.22
378 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
379 TestNetworkPlugins/group/calico/DNS 0.17
380 TestNetworkPlugins/group/calico/Localhost 0.13
381 TestNetworkPlugins/group/calico/HairPin 0.13
382 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
383 TestNetworkPlugins/group/kindnet/NetCatPod 11.31
384 TestNetworkPlugins/group/kindnet/DNS 0.17
385 TestNetworkPlugins/group/kindnet/Localhost 0.15
386 TestNetworkPlugins/group/kindnet/HairPin 0.12
387 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
388 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.22
389 TestNetworkPlugins/group/custom-flannel/DNS 0.14
390 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
391 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (33.59s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-988376 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-988376 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (33.59229782s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (33.59s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0403 18:12:37.504783   88051 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0403 18:12:37.504956   88051 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20591-80797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-988376
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-988376: exit status 85 (58.584399ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-988376 | jenkins | v1.35.0 | 03 Apr 25 18:12 UTC |          |
	|         | -p download-only-988376        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/03 18:12:03
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0403 18:12:03.952007   88063 out.go:345] Setting OutFile to fd 1 ...
	I0403 18:12:03.952259   88063 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:12:03.952268   88063 out.go:358] Setting ErrFile to fd 2...
	I0403 18:12:03.952272   88063 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:12:03.952437   88063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-80797/.minikube/bin
	W0403 18:12:03.952563   88063 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20591-80797/.minikube/config/config.json: open /home/jenkins/minikube-integration/20591-80797/.minikube/config/config.json: no such file or directory
	I0403 18:12:03.953111   88063 out.go:352] Setting JSON to true
	I0403 18:12:03.953965   88063 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6856,"bootTime":1743697068,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 18:12:03.954020   88063 start.go:139] virtualization: kvm guest
	I0403 18:12:03.956133   88063 out.go:97] [download-only-988376] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0403 18:12:03.956288   88063 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20591-80797/.minikube/cache/preloaded-tarball: no such file or directory
	I0403 18:12:03.956354   88063 notify.go:220] Checking for updates...
	I0403 18:12:03.957483   88063 out.go:169] MINIKUBE_LOCATION=20591
	I0403 18:12:03.958941   88063 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 18:12:03.960226   88063 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20591-80797/kubeconfig
	I0403 18:12:03.961419   88063 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-80797/.minikube
	I0403 18:12:03.962576   88063 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0403 18:12:03.964650   88063 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0403 18:12:03.964842   88063 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 18:12:03.999970   88063 out.go:97] Using the kvm2 driver based on user configuration
	I0403 18:12:04.000021   88063 start.go:297] selected driver: kvm2
	I0403 18:12:04.000031   88063 start.go:901] validating driver "kvm2" against <nil>
	I0403 18:12:04.000446   88063 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 18:12:04.000556   88063 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20591-80797/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0403 18:12:04.015747   88063 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0403 18:12:04.015788   88063 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0403 18:12:04.016342   88063 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0403 18:12:04.016476   88063 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0403 18:12:04.016506   88063 cni.go:84] Creating CNI manager for ""
	I0403 18:12:04.016558   88063 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0403 18:12:04.016568   88063 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0403 18:12:04.016615   88063 start.go:340] cluster config:
	{Name:download-only-988376 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-988376 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 18:12:04.016776   88063 iso.go:125] acquiring lock: {Name:mk04fc24d5717eba35bf5189ddac9d51cf3986a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 18:12:04.018518   88063 out.go:97] Downloading VM boot image ...
	I0403 18:12:04.018554   88063 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20591-80797/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0403 18:12:18.067898   88063 out.go:97] Starting "download-only-988376" primary control-plane node in "download-only-988376" cluster
	I0403 18:12:18.067943   88063 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0403 18:12:18.223297   88063 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0403 18:12:18.223339   88063 cache.go:56] Caching tarball of preloaded images
	I0403 18:12:18.223572   88063 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0403 18:12:18.225387   88063 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0403 18:12:18.225407   88063 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0403 18:12:19.030605   88063 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20591-80797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-988376 host does not exist
	  To start a cluster, run: "minikube start -p download-only-988376"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-988376
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.32.2/json-events (20.23s)

=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-530061 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-530061 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (20.225366458s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (20.23s)

TestDownloadOnly/v1.32.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0403 18:12:58.048262   88051 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0403 18:12:58.048302   88051 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20591-80797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-530061
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-530061: exit status 85 (61.383516ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-988376 | jenkins | v1.35.0 | 03 Apr 25 18:12 UTC |                     |
	|         | -p download-only-988376        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 03 Apr 25 18:12 UTC | 03 Apr 25 18:12 UTC |
	| delete  | -p download-only-988376        | download-only-988376 | jenkins | v1.35.0 | 03 Apr 25 18:12 UTC | 03 Apr 25 18:12 UTC |
	| start   | -o=json --download-only        | download-only-530061 | jenkins | v1.35.0 | 03 Apr 25 18:12 UTC |                     |
	|         | -p download-only-530061        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/03 18:12:37
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0403 18:12:37.861340   88328 out.go:345] Setting OutFile to fd 1 ...
	I0403 18:12:37.861580   88328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:12:37.861589   88328 out.go:358] Setting ErrFile to fd 2...
	I0403 18:12:37.861593   88328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:12:37.861752   88328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-80797/.minikube/bin
	I0403 18:12:37.862810   88328 out.go:352] Setting JSON to true
	I0403 18:12:37.863925   88328 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6890,"bootTime":1743697068,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 18:12:37.864030   88328 start.go:139] virtualization: kvm guest
	I0403 18:12:37.865987   88328 out.go:97] [download-only-530061] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 18:12:37.866104   88328 notify.go:220] Checking for updates...
	I0403 18:12:37.867261   88328 out.go:169] MINIKUBE_LOCATION=20591
	I0403 18:12:37.868614   88328 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 18:12:37.869823   88328 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20591-80797/kubeconfig
	I0403 18:12:37.871035   88328 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-80797/.minikube
	I0403 18:12:37.872104   88328 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0403 18:12:37.874153   88328 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0403 18:12:37.874409   88328 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 18:12:37.905787   88328 out.go:97] Using the kvm2 driver based on user configuration
	I0403 18:12:37.905815   88328 start.go:297] selected driver: kvm2
	I0403 18:12:37.905821   88328 start.go:901] validating driver "kvm2" against <nil>
	I0403 18:12:37.906171   88328 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 18:12:37.906263   88328 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20591-80797/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0403 18:12:37.921932   88328 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0403 18:12:37.921978   88328 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0403 18:12:37.922476   88328 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0403 18:12:37.922625   88328 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0403 18:12:37.922656   88328 cni.go:84] Creating CNI manager for ""
	I0403 18:12:37.922703   88328 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0403 18:12:37.922712   88328 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0403 18:12:37.922759   88328 start.go:340] cluster config:
	{Name:download-only-530061 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-530061 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 18:12:37.922850   88328 iso.go:125] acquiring lock: {Name:mk04fc24d5717eba35bf5189ddac9d51cf3986a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0403 18:12:37.924350   88328 out.go:97] Starting "download-only-530061" primary control-plane node in "download-only-530061" cluster
	I0403 18:12:37.924369   88328 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0403 18:12:38.151269   88328 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0403 18:12:38.151303   88328 cache.go:56] Caching tarball of preloaded images
	I0403 18:12:38.151502   88328 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0403 18:12:38.153062   88328 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0403 18:12:38.153076   88328 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 ...
	I0403 18:12:38.308703   88328 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:17ec4d97c92604221650726c3857ee2a -> /home/jenkins/minikube-integration/20591-80797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0403 18:12:55.004409   88328 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 ...
	I0403 18:12:55.004499   88328 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20591-80797/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 ...
	I0403 18:12:55.753309   88328 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0403 18:12:55.753676   88328 profile.go:143] Saving config to /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/download-only-530061/config.json ...
	I0403 18:12:55.753718   88328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/download-only-530061/config.json: {Name:mk7e22f07cfde8f553916ccc374939a7dde83259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0403 18:12:55.753962   88328 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0403 18:12:55.754175   88328 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20591-80797/.minikube/cache/linux/amd64/v1.32.2/kubectl
	
	
	* The control-plane node download-only-530061 host does not exist
	  To start a cluster, run: "minikube start -p download-only-530061"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

TestDownloadOnly/v1.32.2/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.13s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-530061
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I0403 18:12:58.615054   88051 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-477959 --alsologtostderr --binary-mirror http://127.0.0.1:35463 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-477959" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-477959
--- PASS: TestBinaryMirror (0.60s)

TestOffline (109s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-881763 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-881763 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m48.119535078s)
helpers_test.go:175: Cleaning up "offline-containerd-881763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-881763
--- PASS: TestOffline (109.00s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-245089
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-245089: exit status 85 (53.363815ms)

-- stdout --
	* Profile "addons-245089" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-245089"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-245089
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-245089: exit status 85 (51.733694ms)

-- stdout --
	* Profile "addons-245089" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-245089"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (264.02s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-245089 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-245089 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (4m24.0152274s)
--- PASS: TestAddons/Setup (264.02s)

TestAddons/serial/Volcano (43.95s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 31.448649ms
addons_test.go:815: volcano-admission stabilized in 31.488833ms
addons_test.go:807: volcano-scheduler stabilized in 31.544154ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-lwbms" [545060c9-db66-4e78-8b28-2fce75a1aa99] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003864111s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-fcldk" [2b3b66e4-16b7-409c-a605-6234d5ac98ae] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005217119s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-78r46" [07ecf887-2d1f-41ab-84f6-4220de7711ca] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003958439s
addons_test.go:842: (dbg) Run:  kubectl --context addons-245089 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-245089 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-245089 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [bdb7dc4d-879e-438b-8397-9a5deefa78ed] Pending
helpers_test.go:344: "test-job-nginx-0" [bdb7dc4d-879e-438b-8397-9a5deefa78ed] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [bdb7dc4d-879e-438b-8397-9a5deefa78ed] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.004284133s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-245089 addons disable volcano --alsologtostderr -v=1: (11.577141554s)
--- PASS: TestAddons/serial/Volcano (43.95s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-245089 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-245089 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (11.51s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-245089 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-245089 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [85ad39b1-cd43-4372-8647-cc8fbb4153a2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [85ad39b1-cd43-4372-8647-cc8fbb4153a2] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.003533963s
addons_test.go:633: (dbg) Run:  kubectl --context addons-245089 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-245089 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-245089 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.51s)

TestAddons/parallel/Registry (18.7s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.367808ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I0403 18:18:27.796175   88051 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0403 18:18:27.796200   88051 kapi.go:107] duration metric: took 4.541294ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "registry-6c88467877-qc8bd" [fbfc6796-6ab4-4b07-87da-fe31980c4ece] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002734213s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-s52gl" [02b0bba4-544d-4650-8e58-9f8281e0bf71] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003987064s
addons_test.go:331: (dbg) Run:  kubectl --context addons-245089 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-245089 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-245089 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.829193959s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 ip
2025/04/03 18:18:45 [DEBUG] GET http://192.168.39.124:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.70s)

TestAddons/parallel/Ingress (21.27s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-245089 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-245089 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-245089 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a024c1c4-4dbe-4ff3-900e-b142b37a6876] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a024c1c4-4dbe-4ff3-900e-b142b37a6876] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004158205s
I0403 18:18:58.010547   88051 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-245089 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.124
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-245089 addons disable ingress-dns --alsologtostderr -v=1: (1.143821736s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-245089 addons disable ingress --alsologtostderr -v=1: (7.903208215s)
--- PASS: TestAddons/parallel/Ingress (21.27s)

TestAddons/parallel/InspektorGadget (10.98s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9v6dp" [06cc58de-c94c-46e2-b363-6b4ed7b88dcb] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.028364209s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-245089 addons disable inspektor-gadget --alsologtostderr -v=1: (5.950131192s)
--- PASS: TestAddons/parallel/InspektorGadget (10.98s)

TestAddons/parallel/MetricsServer (5.77s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 7.944814ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-npbpj" [5e767bf2-3a59-4f5b-a001-b9848dcb0fd9] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004654915s
addons_test.go:402: (dbg) Run:  kubectl --context addons-245089 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.77s)

TestAddons/parallel/CSI (62.79s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.551734ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-245089 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-245089 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [892e8a18-9eb6-4661-9d46-2c5a47c877a1] Pending
helpers_test.go:344: "task-pv-pod" [892e8a18-9eb6-4661-9d46-2c5a47c877a1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [892e8a18-9eb6-4661-9d46-2c5a47c877a1] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.022012365s
addons_test.go:511: (dbg) Run:  kubectl --context addons-245089 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-245089 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-245089 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-245089 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-245089 delete pod task-pv-pod: (1.211497003s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-245089 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-245089 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-245089 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a9577278-10d3-493a-a18f-4ade67cf7580] Pending
helpers_test.go:344: "task-pv-pod-restore" [a9577278-10d3-493a-a18f-4ade67cf7580] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a9577278-10d3-493a-a18f-4ade67cf7580] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003094437s
addons_test.go:553: (dbg) Run:  kubectl --context addons-245089 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-245089 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-245089 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-245089 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.700296535s)
--- PASS: TestAddons/parallel/CSI (62.79s)

TestAddons/parallel/Headlamp (19.8s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-245089 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-hrhdx" [043a1e2a-d59c-40dc-adc9-490ec0f699fd] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-hrhdx" [043a1e2a-d59c-40dc-adc9-490ec0f699fd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-hrhdx" [043a1e2a-d59c-40dc-adc9-490ec0f699fd] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004282546s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-245089 addons disable headlamp --alsologtostderr -v=1: (5.891021462s)
--- PASS: TestAddons/parallel/Headlamp (19.80s)

TestAddons/parallel/CloudSpanner (5.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-cc9755fc7-4ddkx" [a8f7c374-3142-468b-80fd-f8ddeb05a272] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.011960849s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

TestAddons/parallel/LocalPath (56.08s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-245089 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-245089 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245089 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d0740afe-fb48-49f5-a3d1-73191a9b941f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d0740afe-fb48-49f5-a3d1-73191a9b941f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d0740afe-fb48-49f5-a3d1-73191a9b941f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003946721s
addons_test.go:906: (dbg) Run:  kubectl --context addons-245089 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 ssh "cat /opt/local-path-provisioner/pvc-c8649f65-56d8-4913-b948-0c25c0933ae0_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-245089 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-245089 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-245089 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.228703559s)
--- PASS: TestAddons/parallel/LocalPath (56.08s)

TestAddons/parallel/NvidiaDevicePlugin (6.6s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
I0403 18:18:27.791671   88051 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-cp8zg" [210f8af7-3ed7-4ec0-a578-11d3b5e31311] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003066971s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.60s)

TestAddons/parallel/Yakd (11.98s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-lt54k" [5b616fa9-4113-4441-bc0c-45f264794077] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.044821925s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-245089 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-245089 addons disable yakd --alsologtostderr -v=1: (5.93254077s)
--- PASS: TestAddons/parallel/Yakd (11.98s)

TestAddons/StoppedEnableDisable (91.25s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-245089
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-245089: (1m30.967169058s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-245089
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-245089
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-245089
--- PASS: TestAddons/StoppedEnableDisable (91.25s)

TestCertOptions (52.64s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-597128 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-597128 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (50.464130363s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-597128 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-597128 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-597128 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-597128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-597128
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-597128: (1.713676519s)
--- PASS: TestCertOptions (52.64s)

TestCertExpiration (261.76s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-351205 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-351205 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (52.332415799s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-351205 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-351205 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (28.384842797s)
helpers_test.go:175: Cleaning up "cert-expiration-351205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-351205
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-351205: (1.037376875s)
--- PASS: TestCertExpiration (261.76s)

TestForceSystemdFlag (51.14s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-347710 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-347710 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (49.938348287s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-347710 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-347710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-347710
--- PASS: TestForceSystemdFlag (51.14s)

TestForceSystemdEnv (52.58s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-725741 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-725741 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (51.580797461s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-725741 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-725741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-725741
--- PASS: TestForceSystemdEnv (52.58s)

TestKVMDriverInstallOrUpdate (8.12s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0403 19:14:33.919344   88051 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0403 19:14:33.919515   88051 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0403 19:14:33.952231   88051 install.go:62] docker-machine-driver-kvm2: exit status 1
W0403 19:14:33.952426   88051 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0403 19:14:33.952526   88051 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2851889168/001/docker-machine-driver-kvm2
I0403 19:14:34.554600   88051 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2851889168/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0004cfa18 gz:0xc0004cfaa0 tar:0xc0004cfa50 tar.bz2:0xc0004cfa60 tar.gz:0xc0004cfa70 tar.xz:0xc0004cfa80 tar.zst:0xc0004cfa90 tbz2:0xc0004cfa60 tgz:0xc0004cfa70 txz:0xc0004cfa80 tzst:0xc0004cfa90 xz:0xc0004cfaa8 zip:0xc0004cfab0 zst:0xc0004cfac0] Getters:map[file:0xc000a971b0 http:0xc0007153b0 https:0xc0007154a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0403 19:14:34.554673   88051 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2851889168/001/docker-machine-driver-kvm2
I0403 19:14:38.626583   88051 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0403 19:14:38.626674   88051 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0403 19:14:38.656904   88051 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0403 19:14:38.656938   88051 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0403 19:14:38.656997   88051 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0403 19:14:38.657030   88051 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2851889168/002/docker-machine-driver-kvm2
I0403 19:14:39.042150   88051 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2851889168/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0004cfa18 gz:0xc0004cfaa0 tar:0xc0004cfa50 tar.bz2:0xc0004cfa60 tar.gz:0xc0004cfa70 tar.xz:0xc0004cfa80 tar.zst:0xc0004cfa90 tbz2:0xc0004cfa60 tgz:0xc0004cfa70 txz:0xc0004cfa80 tzst:0xc0004cfa90 xz:0xc0004cfaa8 zip:0xc0004cfab0 zst:0xc0004cfac0] Getters:map[file:0xc001655050 http:0xc0006c4050 https:0xc0006c40f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0403 19:14:39.042194   88051 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2851889168/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (8.12s)

TestErrorSpam/setup (44.56s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-598986 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-598986 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-598986 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-598986 --driver=kvm2  --container-runtime=containerd: (44.55648339s)
--- PASS: TestErrorSpam/setup (44.56s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.76s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 status
--- PASS: TestErrorSpam/status (0.76s)

TestErrorSpam/pause (1.59s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 pause
--- PASS: TestErrorSpam/pause (1.59s)

TestErrorSpam/unpause (1.82s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 unpause
--- PASS: TestErrorSpam/unpause (1.82s)

                                                
                                    
TestErrorSpam/stop (5.18s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 stop: (1.618377341s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 stop: (2.054805349s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-598986 --log_dir /tmp/nospam-598986 stop: (1.507723805s)
--- PASS: TestErrorSpam/stop (5.18s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20591-80797/.minikube/files/etc/test/nested/copy/88051/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (87.09s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-138112 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0403 18:22:23.291848   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:22:23.298254   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:22:23.309600   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:22:23.331082   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:22:23.372501   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:22:23.454029   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:22:23.615583   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:22:23.937399   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:22:24.579510   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:22:25.861127   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:22:28.424070   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:22:33.545616   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:22:43.787192   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:23:04.269089   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-138112 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m27.09391446s)
--- PASS: TestFunctional/serial/StartWithProxy (87.09s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (43.53s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0403 18:23:41.089819   88051 config.go:182] Loaded profile config "functional-138112": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-138112 --alsologtostderr -v=8
E0403 18:23:45.231132   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-138112 --alsologtostderr -v=8: (43.532763388s)
functional_test.go:680: soft start took 43.533411967s for "functional-138112" cluster.
I0403 18:24:24.622882   88051 config.go:182] Loaded profile config "functional-138112": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (43.53s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-138112 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-138112 /tmp/TestFunctionalserialCacheCmdcacheadd_local2418335570/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 cache add minikube-local-cache-test:functional-138112
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-138112 cache add minikube-local-cache-test:functional-138112: (2.425458172s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 cache delete minikube-local-cache-test:functional-138112
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-138112
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.72s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-138112 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (215.2919ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 kubectl -- --context functional-138112 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-138112 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (47.67s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-138112 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0403 18:25:07.152618   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-138112 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.669260334s)
functional_test.go:778: restart took 47.669411093s for "functional-138112" cluster.
I0403 18:25:19.992220   88051 config.go:182] Loaded profile config "functional-138112": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (47.67s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-138112 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-138112 logs: (1.323372801s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 logs --file /tmp/TestFunctionalserialLogsFileCmd3883090254/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-138112 logs --file /tmp/TestFunctionalserialLogsFileCmd3883090254/001/logs.txt: (1.336239835s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                    
TestFunctional/serial/InvalidService (4.03s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-138112 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-138112
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-138112: exit status 115 (269.446608ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.249:32252 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-138112 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.03s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-138112 config get cpus: exit status 14 (61.090617ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-138112 config get cpus: exit status 14 (60.848832ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (19.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-138112 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-138112 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 96808: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.13s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-138112 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-138112 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (142.524522ms)

                                                
                                                
-- stdout --
	* [functional-138112] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-80797/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-80797/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0403 18:25:52.983745   97447 out.go:345] Setting OutFile to fd 1 ...
	I0403 18:25:52.983838   97447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:25:52.983846   97447 out.go:358] Setting ErrFile to fd 2...
	I0403 18:25:52.983849   97447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:25:52.984039   97447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-80797/.minikube/bin
	I0403 18:25:52.984521   97447 out.go:352] Setting JSON to false
	I0403 18:25:52.985449   97447 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7685,"bootTime":1743697068,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 18:25:52.985543   97447 start.go:139] virtualization: kvm guest
	I0403 18:25:52.987460   97447 out.go:177] * [functional-138112] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 18:25:52.988835   97447 notify.go:220] Checking for updates...
	I0403 18:25:52.988854   97447 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 18:25:52.989937   97447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 18:25:52.991065   97447 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-80797/kubeconfig
	I0403 18:25:52.992239   97447 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-80797/.minikube
	I0403 18:25:52.993310   97447 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 18:25:52.994363   97447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 18:25:52.995796   97447 config.go:182] Loaded profile config "functional-138112": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0403 18:25:52.996158   97447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:25:52.996239   97447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:25:53.011723   97447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37957
	I0403 18:25:53.012242   97447 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:25:53.012782   97447 main.go:141] libmachine: Using API Version  1
	I0403 18:25:53.012807   97447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:25:53.013130   97447 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:25:53.013306   97447 main.go:141] libmachine: (functional-138112) Calling .DriverName
	I0403 18:25:53.013578   97447 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 18:25:53.014017   97447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:25:53.014079   97447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:25:53.031159   97447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44759
	I0403 18:25:53.031688   97447 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:25:53.032236   97447 main.go:141] libmachine: Using API Version  1
	I0403 18:25:53.032258   97447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:25:53.032647   97447 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:25:53.032851   97447 main.go:141] libmachine: (functional-138112) Calling .DriverName
	I0403 18:25:53.071269   97447 out.go:177] * Using the kvm2 driver based on existing profile
	I0403 18:25:53.072465   97447 start.go:297] selected driver: kvm2
	I0403 18:25:53.072482   97447 start.go:901] validating driver "kvm2" against &{Name:functional-138112 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-138112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 18:25:53.072587   97447 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 18:25:53.074509   97447 out.go:201] 
	W0403 18:25:53.075687   97447 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0403 18:25:53.076849   97447 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-138112 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.30s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-138112 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-138112 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (136.512635ms)

-- stdout --
	* [functional-138112] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-80797/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-80797/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0403 18:25:41.347152   96744 out.go:345] Setting OutFile to fd 1 ...
	I0403 18:25:41.347262   96744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:25:41.347273   96744 out.go:358] Setting ErrFile to fd 2...
	I0403 18:25:41.347279   96744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:25:41.347549   96744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-80797/.minikube/bin
	I0403 18:25:41.348069   96744 out.go:352] Setting JSON to false
	I0403 18:25:41.349042   96744 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7673,"bootTime":1743697068,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 18:25:41.349143   96744 start.go:139] virtualization: kvm guest
	I0403 18:25:41.350705   96744 out.go:177] * [functional-138112] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0403 18:25:41.352340   96744 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 18:25:41.352372   96744 notify.go:220] Checking for updates...
	I0403 18:25:41.354574   96744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 18:25:41.355810   96744 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-80797/kubeconfig
	I0403 18:25:41.357101   96744 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-80797/.minikube
	I0403 18:25:41.358306   96744 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 18:25:41.359402   96744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 18:25:41.361058   96744 config.go:182] Loaded profile config "functional-138112": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0403 18:25:41.361659   96744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:25:41.361760   96744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:25:41.377065   96744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34355
	I0403 18:25:41.377621   96744 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:25:41.378233   96744 main.go:141] libmachine: Using API Version  1
	I0403 18:25:41.378259   96744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:25:41.378637   96744 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:25:41.378815   96744 main.go:141] libmachine: (functional-138112) Calling .DriverName
	I0403 18:25:41.379069   96744 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 18:25:41.379354   96744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:25:41.379393   96744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:25:41.394600   96744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44881
	I0403 18:25:41.395173   96744 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:25:41.395693   96744 main.go:141] libmachine: Using API Version  1
	I0403 18:25:41.395717   96744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:25:41.396026   96744 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:25:41.396222   96744 main.go:141] libmachine: (functional-138112) Calling .DriverName
	I0403 18:25:41.429368   96744 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0403 18:25:41.430539   96744 start.go:297] selected driver: kvm2
	I0403 18:25:41.430553   96744 start.go:901] validating driver "kvm2" against &{Name:functional-138112 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:functional-138112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStr
ing:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0403 18:25:41.430675   96744 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 18:25:41.432508   96744 out.go:201] 
	W0403 18:25:41.433550   96744 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0403 18:25:41.434652   96744 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.81s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)

TestFunctional/parallel/ServiceCmdConnect (10.53s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-138112 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-138112 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-t2qtr" [bc0c364e-c276-4d6c-aa42-1d1bd10db95d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-t2qtr" [bc0c364e-c276-4d6c-aa42-1d1bd10db95d] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003227524s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.249:30400
functional_test.go:1692: http://192.168.39.249:30400: success! body:

Hostname: hello-node-connect-58f9cf68d8-t2qtr

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.249:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.249:30400
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.53s)

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (28.94s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e3c189c4-eeb5-4ad8-9790-dbdb2210fdc4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004657968s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-138112 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-138112 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-138112 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-138112 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7d97f1ae-86c2-4bc1-bd0a-9bbc7fc6cb9c] Pending
helpers_test.go:344: "sp-pod" [7d97f1ae-86c2-4bc1-bd0a-9bbc7fc6cb9c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7d97f1ae-86c2-4bc1-bd0a-9bbc7fc6cb9c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003112555s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-138112 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-138112 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-138112 delete -f testdata/storage-provisioner/pod.yaml: (1.243133739s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-138112 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [aeae62e4-436b-46d5-9621-a094a7d8eefd] Pending
helpers_test.go:344: "sp-pod" [aeae62e4-436b-46d5-9621-a094a7d8eefd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [aeae62e4-436b-46d5-9621-a094a7d8eefd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003475919s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-138112 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.94s)

TestFunctional/parallel/SSHCmd (0.48s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

TestFunctional/parallel/CpCmd (1.34s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh -n functional-138112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 cp functional-138112:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd378910685/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh -n functional-138112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh -n functional-138112 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.34s)

TestFunctional/parallel/MySQL (28.87s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-138112 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-trqfj" [56e52cf8-77fa-4439-9d20-ad4aa9254eac] Pending
helpers_test.go:344: "mysql-58ccfd96bb-trqfj" [56e52cf8-77fa-4439-9d20-ad4aa9254eac] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-trqfj" [56e52cf8-77fa-4439-9d20-ad4aa9254eac] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.003980509s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-138112 exec mysql-58ccfd96bb-trqfj -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-138112 exec mysql-58ccfd96bb-trqfj -- mysql -ppassword -e "show databases;": exit status 1 (174.778283ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0403 18:25:49.623766   88051 retry.go:31] will retry after 1.077238029s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-138112 exec mysql-58ccfd96bb-trqfj -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-138112 exec mysql-58ccfd96bb-trqfj -- mysql -ppassword -e "show databases;": exit status 1 (138.403821ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0403 18:25:50.840350   88051 retry.go:31] will retry after 1.748573588s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-138112 exec mysql-58ccfd96bb-trqfj -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-138112 exec mysql-58ccfd96bb-trqfj -- mysql -ppassword -e "show databases;": exit status 1 (155.397325ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0403 18:25:52.745168   88051 retry.go:31] will retry after 3.16182532s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-138112 exec mysql-58ccfd96bb-trqfj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.87s)

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/88051/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "sudo cat /etc/test/nested/copy/88051/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.43s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/88051.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "sudo cat /etc/ssl/certs/88051.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/88051.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "sudo cat /usr/share/ca-certificates/88051.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/880512.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "sudo cat /etc/ssl/certs/880512.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/880512.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "sudo cat /usr/share/ca-certificates/880512.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.43s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-138112 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-138112 ssh "sudo systemctl is-active docker": exit status 1 (211.474834ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-138112 ssh "sudo systemctl is-active crio": exit status 1 (210.275809ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

TestFunctional/parallel/License (1.48s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2305: (dbg) Done: out/minikube-linux-amd64 license: (1.483507358s)
--- PASS: TestFunctional/parallel/License (1.48s)

TestFunctional/parallel/ServiceCmd/DeployApp (21.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-138112 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-138112 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-kjbf8" [5c77e7ba-0869-4597-ae22-e1c531be4c0a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-kjbf8" [5c77e7ba-0869-4597-ae22-e1c531be4c0a] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.005759889s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (21.25s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-138112 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-138112
docker.io/kindest/kindnetd:v20241212-9f82dd49
docker.io/kicbase/echo-server:functional-138112
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-138112 image ls --format short --alsologtostderr:
I0403 18:25:53.577209   97598 out.go:345] Setting OutFile to fd 1 ...
I0403 18:25:53.577324   97598 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:25:53.577334   97598 out.go:358] Setting ErrFile to fd 2...
I0403 18:25:53.577339   97598 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:25:53.577539   97598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-80797/.minikube/bin
I0403 18:25:53.578122   97598 config.go:182] Loaded profile config "functional-138112": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0403 18:25:53.578220   97598 config.go:182] Loaded profile config "functional-138112": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0403 18:25:53.578573   97598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0403 18:25:53.578624   97598 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:25:53.594113   97598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
I0403 18:25:53.594568   97598 main.go:141] libmachine: () Calling .GetVersion
I0403 18:25:53.595169   97598 main.go:141] libmachine: Using API Version  1
I0403 18:25:53.595211   97598 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:25:53.595690   97598 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:25:53.595915   97598 main.go:141] libmachine: (functional-138112) Calling .GetState
I0403 18:25:53.598020   97598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0403 18:25:53.598063   97598 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:25:53.615407   97598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38017
I0403 18:25:53.615875   97598 main.go:141] libmachine: () Calling .GetVersion
I0403 18:25:53.616373   97598 main.go:141] libmachine: Using API Version  1
I0403 18:25:53.616412   97598 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:25:53.616741   97598 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:25:53.616933   97598 main.go:141] libmachine: (functional-138112) Calling .DriverName
I0403 18:25:53.617156   97598 ssh_runner.go:195] Run: systemctl --version
I0403 18:25:53.617261   97598 main.go:141] libmachine: (functional-138112) Calling .GetSSHHostname
I0403 18:25:53.620493   97598 main.go:141] libmachine: (functional-138112) DBG | domain functional-138112 has defined MAC address 52:54:00:72:51:8c in network mk-functional-138112
I0403 18:25:53.621020   97598 main.go:141] libmachine: (functional-138112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:8c", ip: ""} in network mk-functional-138112: {Iface:virbr1 ExpiryTime:2025-04-03 19:22:29 +0000 UTC Type:0 Mac:52:54:00:72:51:8c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:functional-138112 Clientid:01:52:54:00:72:51:8c}
I0403 18:25:53.621062   97598 main.go:141] libmachine: (functional-138112) DBG | domain functional-138112 has defined IP address 192.168.39.249 and MAC address 52:54:00:72:51:8c in network mk-functional-138112
I0403 18:25:53.621116   97598 main.go:141] libmachine: (functional-138112) Calling .GetSSHPort
I0403 18:25:53.621282   97598 main.go:141] libmachine: (functional-138112) Calling .GetSSHKeyPath
I0403 18:25:53.621444   97598 main.go:141] libmachine: (functional-138112) Calling .GetSSHUsername
I0403 18:25:53.621609   97598 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-80797/.minikube/machines/functional-138112/id_rsa Username:docker}
I0403 18:25:53.706145   97598 ssh_runner.go:195] Run: sudo crictl images --output json
I0403 18:25:53.763089   97598 main.go:141] libmachine: Making call to close driver server
I0403 18:25:53.763121   97598 main.go:141] libmachine: (functional-138112) Calling .Close
I0403 18:25:53.763437   97598 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:25:53.763473   97598 main.go:141] libmachine: Making call to close connection to plugin binary
I0403 18:25:53.763482   97598 main.go:141] libmachine: Making call to close driver server
I0403 18:25:53.763482   97598 main.go:141] libmachine: (functional-138112) DBG | Closing plugin on server side
I0403 18:25:53.763489   97598 main.go:141] libmachine: (functional-138112) Calling .Close
I0403 18:25:53.763720   97598 main.go:141] libmachine: (functional-138112) DBG | Closing plugin on server side
I0403 18:25:53.763827   97598 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:25:53.763901   97598 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
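The stderr trace above shows that `image ls` is ultimately backed by `sudo crictl images --output json` on the node. A minimal sketch of deriving the short listing (one repo:tag per line, reverse-lexicographic order as in the stdout above) from that JSON; the `{"images": [...]}` wrapper and field names are an assumption inferred from the entries shown in this report, not taken from crictl's documented schema:

```python
import json

# Hypothetical sample mimicking `crictl images --output json` output
# (assumed shape; ids and tags copied from the table elsewhere in this report).
SAMPLE = json.dumps({
    "images": [
        {"id": "sha256:da86e6", "repoTags": ["registry.k8s.io/pause:3.1"]},
        {"id": "sha256:873ed7", "repoTags": ["registry.k8s.io/pause:3.10"]},
        {"id": "sha256:9056ab", "repoTags": ["docker.io/kicbase/echo-server:functional-138112"]},
    ]
})

def short_listing(raw: str) -> list[str]:
    """Flatten repoTags into one repo:tag per line, sorted in reverse
    lexicographic order -- the ordering visible in the stdout above."""
    tags = [t for img in json.loads(raw)["images"] for t in img.get("repoTags", [])]
    return sorted(tags, reverse=True)

for line in short_listing(SAMPLE):
    print(line)
```

Note that reverse lexicographic sorting is what puts `pause:3.3` ahead of `pause:3.10` ahead of `pause:3.1` in the listing, since the comparison is on strings, not version numbers.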

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-138112 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:a9e7e6 | 57.7MB |
| registry.k8s.io/kube-proxy                  | v1.32.2            | sha256:f13328 | 30.9MB |
| docker.io/kindest/kindnetd                  | v20241212-9f82dd49 | sha256:d30084 | 39MB   |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-controller-manager     | v1.32.2            | sha256:b6a454 | 26.3MB |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| docker.io/kicbase/echo-server               | functional-138112  | sha256:9056ab | 2.37MB |
| docker.io/library/minikube-local-cache-test | functional-138112  | sha256:a6215c | 990B   |
| registry.k8s.io/kube-apiserver              | v1.32.2            | sha256:85b7a1 | 28.7MB |
| registry.k8s.io/kube-scheduler              | v1.32.2            | sha256:d8e673 | 20.7MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-138112 image ls --format table --alsologtostderr:
I0403 18:25:56.392751   97778 out.go:345] Setting OutFile to fd 1 ...
I0403 18:25:56.392866   97778 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:25:56.392878   97778 out.go:358] Setting ErrFile to fd 2...
I0403 18:25:56.392884   97778 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:25:56.393066   97778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-80797/.minikube/bin
I0403 18:25:56.393732   97778 config.go:182] Loaded profile config "functional-138112": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0403 18:25:56.393863   97778 config.go:182] Loaded profile config "functional-138112": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0403 18:25:56.394241   97778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0403 18:25:56.394311   97778 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:25:56.410401   97778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
I0403 18:25:56.411030   97778 main.go:141] libmachine: () Calling .GetVersion
I0403 18:25:56.411653   97778 main.go:141] libmachine: Using API Version  1
I0403 18:25:56.411685   97778 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:25:56.412096   97778 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:25:56.412286   97778 main.go:141] libmachine: (functional-138112) Calling .GetState
I0403 18:25:56.414242   97778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0403 18:25:56.414293   97778 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:25:56.430132   97778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
I0403 18:25:56.430698   97778 main.go:141] libmachine: () Calling .GetVersion
I0403 18:25:56.431212   97778 main.go:141] libmachine: Using API Version  1
I0403 18:25:56.431248   97778 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:25:56.431582   97778 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:25:56.431759   97778 main.go:141] libmachine: (functional-138112) Calling .DriverName
I0403 18:25:56.431977   97778 ssh_runner.go:195] Run: systemctl --version
I0403 18:25:56.432007   97778 main.go:141] libmachine: (functional-138112) Calling .GetSSHHostname
I0403 18:25:56.434977   97778 main.go:141] libmachine: (functional-138112) DBG | domain functional-138112 has defined MAC address 52:54:00:72:51:8c in network mk-functional-138112
I0403 18:25:56.435411   97778 main.go:141] libmachine: (functional-138112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:8c", ip: ""} in network mk-functional-138112: {Iface:virbr1 ExpiryTime:2025-04-03 19:22:29 +0000 UTC Type:0 Mac:52:54:00:72:51:8c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:functional-138112 Clientid:01:52:54:00:72:51:8c}
I0403 18:25:56.435455   97778 main.go:141] libmachine: (functional-138112) DBG | domain functional-138112 has defined IP address 192.168.39.249 and MAC address 52:54:00:72:51:8c in network mk-functional-138112
I0403 18:25:56.435568   97778 main.go:141] libmachine: (functional-138112) Calling .GetSSHPort
I0403 18:25:56.435708   97778 main.go:141] libmachine: (functional-138112) Calling .GetSSHKeyPath
I0403 18:25:56.435833   97778 main.go:141] libmachine: (functional-138112) Calling .GetSSHUsername
I0403 18:25:56.435985   97778 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-80797/.minikube/machines/functional-138112/id_rsa Username:docker}
I0403 18:25:56.514512   97778 ssh_runner.go:195] Run: sudo crictl images --output json
I0403 18:25:56.570935   97778 main.go:141] libmachine: Making call to close driver server
I0403 18:25:56.570963   97778 main.go:141] libmachine: (functional-138112) Calling .Close
I0403 18:25:56.571233   97778 main.go:141] libmachine: (functional-138112) DBG | Closing plugin on server side
I0403 18:25:56.571302   97778 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:25:56.571316   97778 main.go:141] libmachine: Making call to close connection to plugin binary
I0403 18:25:56.571329   97778 main.go:141] libmachine: Making call to close driver server
I0403 18:25:56.571338   97778 main.go:141] libmachine: (functional-138112) Calling .Close
I0403 18:25:56.571562   97778 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:25:56.571585   97778 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
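The Size column in the table above is a human-readable rendering of the byte counts that appear in the JSON variant of the same listing (e.g. `57680541` → `57.7MB`). A small sketch of that conversion, assuming decimal (SI) units and three significant figures, which is what the kB/MB values above suggest; this is an illustration, not minikube's actual formatting code:

```python
def human_size(n: float) -> str:
    # Decimal (SI) units with 3 significant figures, matching the
    # kB/MB values in the table above (assumed convention).
    for unit in ("B", "kB", "MB", "GB"):
        if n < 1000:
            return f"{n:.3g}{unit}"
        n /= 1000
    return f"{n:.3g}TB"

print(human_size(57680541))  # etcd:    57.7MB
print(human_size(320368))    # pause:3.10: 320kB
print(human_size(990))       # local-cache-test: 990B
```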

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-138112 image ls --format json --alsologtostderr:
[{"id":"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"26259392"},
{"id":"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"20657902"},
{"id":"sha256:d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"39008320"},
{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},
{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},
{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},
{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},
{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},
{"id":"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"30907858"},
{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-138112"],"size":"2372971"},
{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},
{"id":"sha256:a6215c3a9c609f98c22daf351262a4d9974260aba199c744fda013307ba8e84f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-138112"],"size":"990"},
{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},
{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},
{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},
{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},
{"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"57680541"},
{"id":"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"28670731"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-138112 image ls --format json --alsologtostderr:
I0403 18:25:56.153899   97754 out.go:345] Setting OutFile to fd 1 ...
I0403 18:25:56.154185   97754 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:25:56.154202   97754 out.go:358] Setting ErrFile to fd 2...
I0403 18:25:56.154207   97754 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:25:56.154446   97754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-80797/.minikube/bin
I0403 18:25:56.155035   97754 config.go:182] Loaded profile config "functional-138112": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0403 18:25:56.155175   97754 config.go:182] Loaded profile config "functional-138112": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0403 18:25:56.155688   97754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0403 18:25:56.155754   97754 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:25:56.171812   97754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42317
I0403 18:25:56.172339   97754 main.go:141] libmachine: () Calling .GetVersion
I0403 18:25:56.172930   97754 main.go:141] libmachine: Using API Version  1
I0403 18:25:56.172960   97754 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:25:56.173370   97754 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:25:56.173556   97754 main.go:141] libmachine: (functional-138112) Calling .GetState
I0403 18:25:56.175860   97754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0403 18:25:56.175928   97754 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:25:56.191297   97754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46303
I0403 18:25:56.191818   97754 main.go:141] libmachine: () Calling .GetVersion
I0403 18:25:56.192395   97754 main.go:141] libmachine: Using API Version  1
I0403 18:25:56.192422   97754 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:25:56.192765   97754 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:25:56.192975   97754 main.go:141] libmachine: (functional-138112) Calling .DriverName
I0403 18:25:56.193189   97754 ssh_runner.go:195] Run: systemctl --version
I0403 18:25:56.193222   97754 main.go:141] libmachine: (functional-138112) Calling .GetSSHHostname
I0403 18:25:56.196075   97754 main.go:141] libmachine: (functional-138112) DBG | domain functional-138112 has defined MAC address 52:54:00:72:51:8c in network mk-functional-138112
I0403 18:25:56.196566   97754 main.go:141] libmachine: (functional-138112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:8c", ip: ""} in network mk-functional-138112: {Iface:virbr1 ExpiryTime:2025-04-03 19:22:29 +0000 UTC Type:0 Mac:52:54:00:72:51:8c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:functional-138112 Clientid:01:52:54:00:72:51:8c}
I0403 18:25:56.196612   97754 main.go:141] libmachine: (functional-138112) DBG | domain functional-138112 has defined IP address 192.168.39.249 and MAC address 52:54:00:72:51:8c in network mk-functional-138112
I0403 18:25:56.196676   97754 main.go:141] libmachine: (functional-138112) Calling .GetSSHPort
I0403 18:25:56.196866   97754 main.go:141] libmachine: (functional-138112) Calling .GetSSHKeyPath
I0403 18:25:56.197031   97754 main.go:141] libmachine: (functional-138112) Calling .GetSSHUsername
I0403 18:25:56.197168   97754 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-80797/.minikube/machines/functional-138112/id_rsa Username:docker}
I0403 18:25:56.282456   97754 ssh_runner.go:195] Run: sudo crictl images --output json
I0403 18:25:56.342303   97754 main.go:141] libmachine: Making call to close driver server
I0403 18:25:56.342320   97754 main.go:141] libmachine: (functional-138112) Calling .Close
I0403 18:25:56.342594   97754 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:25:56.342613   97754 main.go:141] libmachine: Making call to close connection to plugin binary
I0403 18:25:56.342623   97754 main.go:141] libmachine: Making call to close driver server
I0403 18:25:56.342630   97754 main.go:141] libmachine: (functional-138112) Calling .Close
I0403 18:25:56.342645   97754 main.go:141] libmachine: (functional-138112) DBG | Closing plugin on server side
I0403 18:25:56.342950   97754 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:25:56.342969   97754 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
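The JSON listing above lends itself to mechanical cross-checks against the table and short formats. A minimal sketch (the helper names are hypothetical; the `id`/`repoTags`/`size` fields and sample entries are copied from the stdout above, where `size` is a decimal string of bytes):

```python
import json

# Two entries copied verbatim from the `image ls --format json` stdout above.
IMAGES = json.loads("""[
 {"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc",
  "repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"57680541"},
 {"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7",
  "repoTags":[],"size":"19746404"}
]""")

def total_bytes(images) -> int:
    # `size` is serialized as a string, so convert before summing.
    return sum(int(img["size"]) for img in images)

def untagged(images) -> list[str]:
    # Images with an empty repoTags list, like the dashboard
    # metrics-scraper entry in the listing above.
    return [img["id"] for img in images if not img["repoTags"]]

print(total_bytes(IMAGES))
print(untagged(IMAGES))
```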

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-138112 image ls --format yaml --alsologtostderr:
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-138112
size: "2372971"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:a6215c3a9c609f98c22daf351262a4d9974260aba199c744fda013307ba8e84f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-138112
size: "990"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "20657902"
- id: sha256:d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "39008320"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "26259392"
- id: sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "30907858"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "57680541"
- id: sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "28670731"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-138112 image ls --format yaml --alsologtostderr:
I0403 18:25:53.822463   97621 out.go:345] Setting OutFile to fd 1 ...
I0403 18:25:53.822553   97621 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:25:53.822561   97621 out.go:358] Setting ErrFile to fd 2...
I0403 18:25:53.822565   97621 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:25:53.822726   97621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-80797/.minikube/bin
I0403 18:25:53.823271   97621 config.go:182] Loaded profile config "functional-138112": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0403 18:25:53.823363   97621 config.go:182] Loaded profile config "functional-138112": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0403 18:25:53.823767   97621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0403 18:25:53.823827   97621 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:25:53.842187   97621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34611
I0403 18:25:53.842724   97621 main.go:141] libmachine: () Calling .GetVersion
I0403 18:25:53.843264   97621 main.go:141] libmachine: Using API Version  1
I0403 18:25:53.843292   97621 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:25:53.843720   97621 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:25:53.843900   97621 main.go:141] libmachine: (functional-138112) Calling .GetState
I0403 18:25:53.845833   97621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0403 18:25:53.845880   97621 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:25:53.862116   97621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44039
I0403 18:25:53.862590   97621 main.go:141] libmachine: () Calling .GetVersion
I0403 18:25:53.863086   97621 main.go:141] libmachine: Using API Version  1
I0403 18:25:53.863117   97621 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:25:53.863512   97621 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:25:53.863706   97621 main.go:141] libmachine: (functional-138112) Calling .DriverName
I0403 18:25:53.863919   97621 ssh_runner.go:195] Run: systemctl --version
I0403 18:25:53.863953   97621 main.go:141] libmachine: (functional-138112) Calling .GetSSHHostname
I0403 18:25:53.866694   97621 main.go:141] libmachine: (functional-138112) DBG | domain functional-138112 has defined MAC address 52:54:00:72:51:8c in network mk-functional-138112
I0403 18:25:53.867151   97621 main.go:141] libmachine: (functional-138112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:8c", ip: ""} in network mk-functional-138112: {Iface:virbr1 ExpiryTime:2025-04-03 19:22:29 +0000 UTC Type:0 Mac:52:54:00:72:51:8c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:functional-138112 Clientid:01:52:54:00:72:51:8c}
I0403 18:25:53.867200   97621 main.go:141] libmachine: (functional-138112) DBG | domain functional-138112 has defined IP address 192.168.39.249 and MAC address 52:54:00:72:51:8c in network mk-functional-138112
I0403 18:25:53.867283   97621 main.go:141] libmachine: (functional-138112) Calling .GetSSHPort
I0403 18:25:53.867486   97621 main.go:141] libmachine: (functional-138112) Calling .GetSSHKeyPath
I0403 18:25:53.867623   97621 main.go:141] libmachine: (functional-138112) Calling .GetSSHUsername
I0403 18:25:53.867762   97621 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-80797/.minikube/machines/functional-138112/id_rsa Username:docker}
I0403 18:25:53.949864   97621 ssh_runner.go:195] Run: sudo crictl images --output json
I0403 18:25:53.986469   97621 main.go:141] libmachine: Making call to close driver server
I0403 18:25:53.986482   97621 main.go:141] libmachine: (functional-138112) Calling .Close
I0403 18:25:53.986766   97621 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:25:53.986791   97621 main.go:141] libmachine: Making call to close connection to plugin binary
I0403 18:25:53.986799   97621 main.go:141] libmachine: Making call to close driver server
I0403 18:25:53.986807   97621 main.go:141] libmachine: (functional-138112) Calling .Close
I0403 18:25:53.987032   97621 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:25:53.987056   97621 main.go:141] libmachine: Making call to close connection to plugin binary
I0403 18:25:53.987079   97621 main.go:141] libmachine: (functional-138112) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (6.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-138112 ssh pgrep buildkitd: exit status 1 (196.517803ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image build -t localhost/my-image:functional-138112 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-138112 image build -t localhost/my-image:functional-138112 testdata/build --alsologtostderr: (5.711542047s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-138112 image build -t localhost/my-image:functional-138112 testdata/build --alsologtostderr:
I0403 18:25:54.264515   97675 out.go:345] Setting OutFile to fd 1 ...
I0403 18:25:54.264827   97675 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:25:54.264839   97675 out.go:358] Setting ErrFile to fd 2...
I0403 18:25:54.264845   97675 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0403 18:25:54.265145   97675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-80797/.minikube/bin
I0403 18:25:54.265955   97675 config.go:182] Loaded profile config "functional-138112": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0403 18:25:54.266527   97675 config.go:182] Loaded profile config "functional-138112": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0403 18:25:54.266857   97675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0403 18:25:54.266898   97675 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:25:54.282267   97675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35175
I0403 18:25:54.282743   97675 main.go:141] libmachine: () Calling .GetVersion
I0403 18:25:54.283294   97675 main.go:141] libmachine: Using API Version  1
I0403 18:25:54.283322   97675 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:25:54.283736   97675 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:25:54.283936   97675 main.go:141] libmachine: (functional-138112) Calling .GetState
I0403 18:25:54.285811   97675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0403 18:25:54.285885   97675 main.go:141] libmachine: Launching plugin server for driver kvm2
I0403 18:25:54.301046   97675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33687
I0403 18:25:54.301487   97675 main.go:141] libmachine: () Calling .GetVersion
I0403 18:25:54.301925   97675 main.go:141] libmachine: Using API Version  1
I0403 18:25:54.301949   97675 main.go:141] libmachine: () Calling .SetConfigRaw
I0403 18:25:54.302353   97675 main.go:141] libmachine: () Calling .GetMachineName
I0403 18:25:54.302563   97675 main.go:141] libmachine: (functional-138112) Calling .DriverName
I0403 18:25:54.302745   97675 ssh_runner.go:195] Run: systemctl --version
I0403 18:25:54.302780   97675 main.go:141] libmachine: (functional-138112) Calling .GetSSHHostname
I0403 18:25:54.305500   97675 main.go:141] libmachine: (functional-138112) DBG | domain functional-138112 has defined MAC address 52:54:00:72:51:8c in network mk-functional-138112
I0403 18:25:54.305959   97675 main.go:141] libmachine: (functional-138112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:51:8c", ip: ""} in network mk-functional-138112: {Iface:virbr1 ExpiryTime:2025-04-03 19:22:29 +0000 UTC Type:0 Mac:52:54:00:72:51:8c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:functional-138112 Clientid:01:52:54:00:72:51:8c}
I0403 18:25:54.305993   97675 main.go:141] libmachine: (functional-138112) DBG | domain functional-138112 has defined IP address 192.168.39.249 and MAC address 52:54:00:72:51:8c in network mk-functional-138112
I0403 18:25:54.306129   97675 main.go:141] libmachine: (functional-138112) Calling .GetSSHPort
I0403 18:25:54.306263   97675 main.go:141] libmachine: (functional-138112) Calling .GetSSHKeyPath
I0403 18:25:54.306398   97675 main.go:141] libmachine: (functional-138112) Calling .GetSSHUsername
I0403 18:25:54.306528   97675 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-80797/.minikube/machines/functional-138112/id_rsa Username:docker}
I0403 18:25:54.387305   97675 build_images.go:161] Building image from path: /tmp/build.251504952.tar
I0403 18:25:54.387381   97675 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0403 18:25:54.397793   97675 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.251504952.tar
I0403 18:25:54.402088   97675 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.251504952.tar: stat -c "%s %y" /var/lib/minikube/build/build.251504952.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.251504952.tar': No such file or directory
I0403 18:25:54.402125   97675 ssh_runner.go:362] scp /tmp/build.251504952.tar --> /var/lib/minikube/build/build.251504952.tar (3072 bytes)
I0403 18:25:54.432016   97675 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.251504952
I0403 18:25:54.442171   97675 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.251504952 -xf /var/lib/minikube/build/build.251504952.tar
I0403 18:25:54.452383   97675 containerd.go:394] Building image: /var/lib/minikube/build/build.251504952
I0403 18:25:54.452462   97675 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.251504952 --local dockerfile=/var/lib/minikube/build/build.251504952 --output type=image,name=localhost/my-image:functional-138112
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 3.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.3s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:da0016e14baffdc52df4ab25aa745313a04134edfafd8ecf8b41602e55171a27
#8 exporting manifest sha256:da0016e14baffdc52df4ab25aa745313a04134edfafd8ecf8b41602e55171a27 0.0s done
#8 exporting config sha256:8be63ded9cb765f27618a17828d0a443185f891d56b2c8faadc68e66e09de750 0.0s done
#8 naming to localhost/my-image:functional-138112 done
#8 DONE 0.2s
I0403 18:25:59.892023   97675 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.251504952 --local dockerfile=/var/lib/minikube/build/build.251504952 --output type=image,name=localhost/my-image:functional-138112: (5.439525806s)
I0403 18:25:59.892112   97675 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.251504952
I0403 18:25:59.904028   97675 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.251504952.tar
I0403 18:25:59.917057   97675 build_images.go:217] Built localhost/my-image:functional-138112 from /tmp/build.251504952.tar
I0403 18:25:59.917100   97675 build_images.go:133] succeeded building to: functional-138112
I0403 18:25:59.917107   97675 build_images.go:134] failed building to: 
I0403 18:25:59.917139   97675 main.go:141] libmachine: Making call to close driver server
I0403 18:25:59.917150   97675 main.go:141] libmachine: (functional-138112) Calling .Close
I0403 18:25:59.917477   97675 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:25:59.917515   97675 main.go:141] libmachine: (functional-138112) DBG | Closing plugin on server side
I0403 18:25:59.917539   97675 main.go:141] libmachine: Making call to close connection to plugin binary
I0403 18:25:59.917548   97675 main.go:141] libmachine: Making call to close driver server
I0403 18:25:59.917553   97675 main.go:141] libmachine: (functional-138112) Calling .Close
I0403 18:25:59.917794   97675 main.go:141] libmachine: (functional-138112) DBG | Closing plugin on server side
I0403 18:25:59.917829   97675 main.go:141] libmachine: Successfully made call to close driver server
I0403 18:25:59.917851   97675 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image ls
2025/04/03 18:26:00 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.12s)

TestFunctional/parallel/ImageCommands/Setup (2.69s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.668241643s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-138112
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.69s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image load --daemon kicbase/echo-server:functional-138112 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image load --daemon kicbase/echo-server:functional-138112 --alsologtostderr
functional_test.go:382: (dbg) Done: out/minikube-linux-amd64 -p functional-138112 image load --daemon kicbase/echo-server:functional-138112 --alsologtostderr: (1.575249996s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.81s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:252: (dbg) Done: docker pull kicbase/echo-server:latest: (1.174638459s)
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-138112
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image load --daemon kicbase/echo-server:functional-138112 --alsologtostderr
functional_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p functional-138112 image load --daemon kicbase/echo-server:functional-138112 --alsologtostderr: (1.250329262s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.68s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image save kicbase/echo-server:functional-138112 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.46s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image rm kicbase/echo-server:functional-138112 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "275.980681ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "68.075184ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "374.647633ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "199.599166ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.00s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.00s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-138112
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 image save --daemon kicbase/echo-server:functional-138112 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-138112
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

TestFunctional/parallel/MountCmd/any-port (10.69s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-138112 /tmp/TestFunctionalparallelMountCmdany-port2459750674/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1743704739535518564" to /tmp/TestFunctionalparallelMountCmdany-port2459750674/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1743704739535518564" to /tmp/TestFunctionalparallelMountCmdany-port2459750674/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1743704739535518564" to /tmp/TestFunctionalparallelMountCmdany-port2459750674/001/test-1743704739535518564
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-138112 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (199.929154ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0403 18:25:39.735792   88051 retry.go:31] will retry after 629.069162ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  3 18:25 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  3 18:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  3 18:25 test-1743704739535518564
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh cat /mount-9p/test-1743704739535518564
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-138112 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [06e17c51-3533-4e72-8108-6f7ae97526c1] Pending
helpers_test.go:344: "busybox-mount" [06e17c51-3533-4e72-8108-6f7ae97526c1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [06e17c51-3533-4e72-8108-6f7ae97526c1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [06e17c51-3533-4e72-8108-6f7ae97526c1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.003476836s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-138112 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-138112 /tmp/TestFunctionalparallelMountCmdany-port2459750674/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.69s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.57s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

TestFunctional/parallel/ServiceCmd/List (1.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 service list
functional_test.go:1476: (dbg) Done: out/minikube-linux-amd64 -p functional-138112 service list: (1.340666183s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.34s)

TestFunctional/parallel/MountCmd/specific-port (1.66s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-138112 /tmp/TestFunctionalparallelMountCmdspecific-port3921911134/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-138112 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.502905ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0403 18:25:50.474946   88051 retry.go:31] will retry after 403.625922ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-138112 /tmp/TestFunctionalparallelMountCmdspecific-port3921911134/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-138112 ssh "sudo umount -f /mount-9p": exit status 1 (192.782343ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-138112 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-138112 /tmp/TestFunctionalparallelMountCmdspecific-port3921911134/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.66s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 service list -o json
functional_test.go:1506: (dbg) Done: out/minikube-linux-amd64 -p functional-138112 service list -o json: (1.245554533s)
functional_test.go:1511: Took "1.245671032s" to run "out/minikube-linux-amd64 -p functional-138112 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.4s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-138112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1805819128/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-138112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1805819128/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-138112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1805819128/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-138112 ssh "findmnt -T" /mount1: exit status 1 (267.894604ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0403 18:25:52.153093   88051 retry.go:31] will retry after 444.147258ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-138112 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-138112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1805819128/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-138112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1805819128/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-138112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1805819128/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.40s)
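The `findmnt -T /mount1` probe above fails once (exit status 1) and succeeds on retry after ~444 ms, per the `retry.go:31` line. A minimal POSIX-sh sketch of that retry pattern (the `retry` helper is illustrative only, not minikube's actual implementation):

```shell
#!/bin/sh
# Illustrative retry helper mirroring the "will retry after ..." behavior
# logged by retry.go:31 above; not minikube's actual code.
retry() {
  attempts=$1; shift
  i=1
  while ! "$@"; do
    [ "$i" -ge "$attempts" ] && return 1
    i=$((i + 1))
    sleep 1   # the real retry uses a randomized sub-second delay
  done
}
retry 3 true && echo "mount verified"
```

In the test, the retried command is the `ssh "findmnt -T" /mountN` invocation; here `true` stands in for it.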

TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.249:31158
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

TestFunctional/parallel/ServiceCmd/Format (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

TestFunctional/parallel/ServiceCmd/URL (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.249:31158
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-138112 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-138112
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-138112
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-138112
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (195.67s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-723098 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0403 18:27:23.283655   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:27:50.995628   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-723098 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m14.998785325s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (195.67s)

TestMultiControlPlane/serial/DeployApp (8.55s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-723098 -- rollout status deployment/busybox: (6.272267919s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- exec busybox-58667487b6-5svdb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- exec busybox-58667487b6-lnzbn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- exec busybox-58667487b6-pg9hn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- exec busybox-58667487b6-5svdb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- exec busybox-58667487b6-lnzbn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- exec busybox-58667487b6-pg9hn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- exec busybox-58667487b6-5svdb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- exec busybox-58667487b6-lnzbn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- exec busybox-58667487b6-pg9hn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.55s)
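The DeployApp block above runs the same three `nslookup` targets against each of the three busybox replicas, in name-major order. The cross-product can be written compactly (pod names are taken from the log; `echo` stands in for the real `kubectl -p ha-723098 -- exec` invocation):

```shell
#!/bin/sh
# Enumerate the name x pod checks performed above; echo stands in for
# "out/minikube-linux-amd64 kubectl -p ha-723098 -- exec <pod> -- nslookup <name>".
for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
  for pod in busybox-58667487b6-5svdb busybox-58667487b6-lnzbn busybox-58667487b6-pg9hn; do
    echo "exec $pod -- nslookup $name"
  done
done
```

Nine checks in total, matching the nine ha_test.go:171/181/189 lines above.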

TestMultiControlPlane/serial/PingHostFromPods (1.18s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- exec busybox-58667487b6-5svdb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- exec busybox-58667487b6-5svdb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- exec busybox-58667487b6-lnzbn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- exec busybox-58667487b6-lnzbn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- exec busybox-58667487b6-pg9hn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-723098 -- exec busybox-58667487b6-pg9hn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.18s)
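The pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` used above selects line 5 of nslookup's output and takes its third space-separated field, i.e. the resolved host IP, which the follow-up `ping -c 1 192.168.39.1` then targets. Against canned output (the BusyBox-style layout below is an assumption for illustration):

```shell
#!/bin/sh
# Canned BusyBox-style nslookup output; the exact layout is assumed for
# illustration -- line 5 carries "Address 1: <ip> <hostname>".
out='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'
ip=$(printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"
```

With this input the pipeline yields `192.168.39.1`, the host-side gateway IP seen in the ping commands above.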

TestMultiControlPlane/serial/AddWorkerNode (57.34s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-723098 -v=7 --alsologtostderr
E0403 18:30:26.970232   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:30:26.976634   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:30:26.988059   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:30:27.009493   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:30:27.050980   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:30:27.132419   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:30:27.293890   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:30:27.615618   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:30:28.257219   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:30:29.539321   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:30:32.101304   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:30:37.222801   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-723098 -v=7 --alsologtostderr: (56.470122373s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.34s)
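The burst of `cert_rotation` "Unhandled Error" lines above fires at roughly doubling intervals (~6 ms, 12 ms, 21 ms, ... ~5.1 s), consistent with exponential-backoff retries while the referenced `functional-138112/client.crt` is missing. Computing the gaps from the logged sub-second timestamps (values transcribed from the E-lines above, in milliseconds):

```shell
#!/bin/sh
# Timestamps transcribed from the E0403 18:30:2x-3x lines above, converted
# to milliseconds (26.970232s -> 26970); print consecutive retry gaps.
prev=""
for t in 26970 26976 26988 27009 27050 27132 27293 27615 28257 29539 32101 37222; do
  if [ -n "$prev" ]; then echo "gap: $((t - prev))ms"; fi
  prev=$t
done
```

The printed gaps run 6, 12, 21, 41, 82, 161, 322, 642, 1282, 2562, 5121 ms: each roughly double the last.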

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-723098 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

TestMultiControlPlane/serial/CopyFile (13.05s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 status --output json -v=7 --alsologtostderr
E0403 18:30:47.464947   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp testdata/cp-test.txt ha-723098:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp ha-723098:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3243183424/001/cp-test_ha-723098.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp ha-723098:/home/docker/cp-test.txt ha-723098-m02:/home/docker/cp-test_ha-723098_ha-723098-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m02 "sudo cat /home/docker/cp-test_ha-723098_ha-723098-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp ha-723098:/home/docker/cp-test.txt ha-723098-m03:/home/docker/cp-test_ha-723098_ha-723098-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m03 "sudo cat /home/docker/cp-test_ha-723098_ha-723098-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp ha-723098:/home/docker/cp-test.txt ha-723098-m04:/home/docker/cp-test_ha-723098_ha-723098-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m04 "sudo cat /home/docker/cp-test_ha-723098_ha-723098-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp testdata/cp-test.txt ha-723098-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp ha-723098-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3243183424/001/cp-test_ha-723098-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp ha-723098-m02:/home/docker/cp-test.txt ha-723098:/home/docker/cp-test_ha-723098-m02_ha-723098.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098 "sudo cat /home/docker/cp-test_ha-723098-m02_ha-723098.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp ha-723098-m02:/home/docker/cp-test.txt ha-723098-m03:/home/docker/cp-test_ha-723098-m02_ha-723098-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m03 "sudo cat /home/docker/cp-test_ha-723098-m02_ha-723098-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp ha-723098-m02:/home/docker/cp-test.txt ha-723098-m04:/home/docker/cp-test_ha-723098-m02_ha-723098-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m04 "sudo cat /home/docker/cp-test_ha-723098-m02_ha-723098-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp testdata/cp-test.txt ha-723098-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp ha-723098-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3243183424/001/cp-test_ha-723098-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp ha-723098-m03:/home/docker/cp-test.txt ha-723098:/home/docker/cp-test_ha-723098-m03_ha-723098.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098 "sudo cat /home/docker/cp-test_ha-723098-m03_ha-723098.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp ha-723098-m03:/home/docker/cp-test.txt ha-723098-m02:/home/docker/cp-test_ha-723098-m03_ha-723098-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m02 "sudo cat /home/docker/cp-test_ha-723098-m03_ha-723098-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp ha-723098-m03:/home/docker/cp-test.txt ha-723098-m04:/home/docker/cp-test_ha-723098-m03_ha-723098-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m04 "sudo cat /home/docker/cp-test_ha-723098-m03_ha-723098-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp testdata/cp-test.txt ha-723098-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp ha-723098-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3243183424/001/cp-test_ha-723098-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp ha-723098-m04:/home/docker/cp-test.txt ha-723098:/home/docker/cp-test_ha-723098-m04_ha-723098.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098 "sudo cat /home/docker/cp-test_ha-723098-m04_ha-723098.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp ha-723098-m04:/home/docker/cp-test.txt ha-723098-m02:/home/docker/cp-test_ha-723098-m04_ha-723098-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m02 "sudo cat /home/docker/cp-test_ha-723098-m04_ha-723098-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 cp ha-723098-m04:/home/docker/cp-test.txt ha-723098-m03:/home/docker/cp-test_ha-723098-m04_ha-723098-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 ssh -n ha-723098-m03 "sudo cat /home/docker/cp-test_ha-723098-m04_ha-723098-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.05s)
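Every CopyFile step above follows the same round-trip: `cp` a file into a node (or between nodes), then `ssh -n <node> "sudo cat ..."` to read it back and compare against the original. The pattern, simulated locally without a cluster (plain `cp` stands in for `minikube cp`, `cat` for the ssh read):

```shell
#!/bin/sh
# Local stand-in for the cp / ssh "sudo cat" / compare round-trip above:
# plain cp plays the role of "minikube cp", cat the role of the ssh read.
workdir=$(mktemp -d)
printf 'cp-test contents\n' > "$workdir/cp-test.txt"
cp "$workdir/cp-test.txt" "$workdir/cp-test_roundtrip.txt"
readback=$(cat "$workdir/cp-test_roundtrip.txt")
[ "$readback" = "cp-test contents" ] && echo "round-trip ok"
rm -rf "$workdir"
```

The test repeats this for every source/destination node pair, which is why the 4-node cluster produces the long matrix of cp/ssh lines above.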

TestMultiControlPlane/serial/StopSecondaryNode (91.48s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 node stop m02 -v=7 --alsologtostderr
E0403 18:31:07.946389   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:31:48.908821   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:32:23.283582   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-723098 node stop m02 -v=7 --alsologtostderr: (1m30.79987129s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-723098 status -v=7 --alsologtostderr: exit status 7 (675.875715ms)
-- stdout --
	ha-723098
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-723098-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-723098-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-723098-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0403 18:32:30.794206  102604 out.go:345] Setting OutFile to fd 1 ...
	I0403 18:32:30.794316  102604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:32:30.794328  102604 out.go:358] Setting ErrFile to fd 2...
	I0403 18:32:30.794333  102604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:32:30.794515  102604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-80797/.minikube/bin
	I0403 18:32:30.794672  102604 out.go:352] Setting JSON to false
	I0403 18:32:30.794702  102604 mustload.go:65] Loading cluster: ha-723098
	I0403 18:32:30.794762  102604 notify.go:220] Checking for updates...
	I0403 18:32:30.795060  102604 config.go:182] Loaded profile config "ha-723098": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0403 18:32:30.795079  102604 status.go:174] checking status of ha-723098 ...
	I0403 18:32:30.795648  102604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:32:30.795707  102604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:32:30.812985  102604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37547
	I0403 18:32:30.813426  102604 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:32:30.813922  102604 main.go:141] libmachine: Using API Version  1
	I0403 18:32:30.813947  102604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:32:30.814378  102604 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:32:30.814590  102604 main.go:141] libmachine: (ha-723098) Calling .GetState
	I0403 18:32:30.816105  102604 status.go:371] ha-723098 host status = "Running" (err=<nil>)
	I0403 18:32:30.816127  102604 host.go:66] Checking if "ha-723098" exists ...
	I0403 18:32:30.816555  102604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:32:30.816629  102604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:32:30.832010  102604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43471
	I0403 18:32:30.832505  102604 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:32:30.833082  102604 main.go:141] libmachine: Using API Version  1
	I0403 18:32:30.833111  102604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:32:30.833523  102604 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:32:30.833688  102604 main.go:141] libmachine: (ha-723098) Calling .GetIP
	I0403 18:32:30.836514  102604 main.go:141] libmachine: (ha-723098) DBG | domain ha-723098 has defined MAC address 52:54:00:81:c8:dc in network mk-ha-723098
	I0403 18:32:30.836983  102604 main.go:141] libmachine: (ha-723098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:c8:dc", ip: ""} in network mk-ha-723098: {Iface:virbr1 ExpiryTime:2025-04-03 19:26:38 +0000 UTC Type:0 Mac:52:54:00:81:c8:dc Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-723098 Clientid:01:52:54:00:81:c8:dc}
	I0403 18:32:30.837010  102604 main.go:141] libmachine: (ha-723098) DBG | domain ha-723098 has defined IP address 192.168.39.217 and MAC address 52:54:00:81:c8:dc in network mk-ha-723098
	I0403 18:32:30.837128  102604 host.go:66] Checking if "ha-723098" exists ...
	I0403 18:32:30.837427  102604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:32:30.837470  102604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:32:30.852840  102604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33649
	I0403 18:32:30.853269  102604 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:32:30.853683  102604 main.go:141] libmachine: Using API Version  1
	I0403 18:32:30.853706  102604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:32:30.854065  102604 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:32:30.854264  102604 main.go:141] libmachine: (ha-723098) Calling .DriverName
	I0403 18:32:30.854436  102604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0403 18:32:30.854464  102604 main.go:141] libmachine: (ha-723098) Calling .GetSSHHostname
	I0403 18:32:30.857280  102604 main.go:141] libmachine: (ha-723098) DBG | domain ha-723098 has defined MAC address 52:54:00:81:c8:dc in network mk-ha-723098
	I0403 18:32:30.857678  102604 main.go:141] libmachine: (ha-723098) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:c8:dc", ip: ""} in network mk-ha-723098: {Iface:virbr1 ExpiryTime:2025-04-03 19:26:38 +0000 UTC Type:0 Mac:52:54:00:81:c8:dc Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-723098 Clientid:01:52:54:00:81:c8:dc}
	I0403 18:32:30.857707  102604 main.go:141] libmachine: (ha-723098) DBG | domain ha-723098 has defined IP address 192.168.39.217 and MAC address 52:54:00:81:c8:dc in network mk-ha-723098
	I0403 18:32:30.857864  102604 main.go:141] libmachine: (ha-723098) Calling .GetSSHPort
	I0403 18:32:30.858028  102604 main.go:141] libmachine: (ha-723098) Calling .GetSSHKeyPath
	I0403 18:32:30.858174  102604 main.go:141] libmachine: (ha-723098) Calling .GetSSHUsername
	I0403 18:32:30.858313  102604 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-80797/.minikube/machines/ha-723098/id_rsa Username:docker}
	I0403 18:32:30.949293  102604 ssh_runner.go:195] Run: systemctl --version
	I0403 18:32:30.956954  102604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 18:32:30.981429  102604 kubeconfig.go:125] found "ha-723098" server: "https://192.168.39.254:8443"
	I0403 18:32:30.981464  102604 api_server.go:166] Checking apiserver status ...
	I0403 18:32:30.981500  102604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 18:32:31.001846  102604 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0403 18:32:31.013086  102604 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0403 18:32:31.013144  102604 ssh_runner.go:195] Run: ls
	I0403 18:32:31.017246  102604 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0403 18:32:31.021627  102604 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0403 18:32:31.021647  102604 status.go:463] ha-723098 apiserver status = Running (err=<nil>)
	I0403 18:32:31.021656  102604 status.go:176] ha-723098 status: &{Name:ha-723098 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0403 18:32:31.021678  102604 status.go:174] checking status of ha-723098-m02 ...
	I0403 18:32:31.022003  102604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:32:31.022034  102604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:32:31.037049  102604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45935
	I0403 18:32:31.037425  102604 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:32:31.037890  102604 main.go:141] libmachine: Using API Version  1
	I0403 18:32:31.037918  102604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:32:31.038294  102604 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:32:31.038477  102604 main.go:141] libmachine: (ha-723098-m02) Calling .GetState
	I0403 18:32:31.039893  102604 status.go:371] ha-723098-m02 host status = "Stopped" (err=<nil>)
	I0403 18:32:31.039922  102604 status.go:384] host is not running, skipping remaining checks
	I0403 18:32:31.039934  102604 status.go:176] ha-723098-m02 status: &{Name:ha-723098-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0403 18:32:31.039964  102604 status.go:174] checking status of ha-723098-m03 ...
	I0403 18:32:31.040263  102604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:32:31.040306  102604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:32:31.055172  102604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43749
	I0403 18:32:31.055598  102604 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:32:31.056001  102604 main.go:141] libmachine: Using API Version  1
	I0403 18:32:31.056018  102604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:32:31.056347  102604 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:32:31.056509  102604 main.go:141] libmachine: (ha-723098-m03) Calling .GetState
	I0403 18:32:31.058018  102604 status.go:371] ha-723098-m03 host status = "Running" (err=<nil>)
	I0403 18:32:31.058035  102604 host.go:66] Checking if "ha-723098-m03" exists ...
	I0403 18:32:31.058425  102604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:32:31.058451  102604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:32:31.073456  102604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43545
	I0403 18:32:31.073913  102604 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:32:31.074396  102604 main.go:141] libmachine: Using API Version  1
	I0403 18:32:31.074421  102604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:32:31.074720  102604 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:32:31.074970  102604 main.go:141] libmachine: (ha-723098-m03) Calling .GetIP
	I0403 18:32:31.077679  102604 main.go:141] libmachine: (ha-723098-m03) DBG | domain ha-723098-m03 has defined MAC address 52:54:00:0d:bd:f6 in network mk-ha-723098
	I0403 18:32:31.078202  102604 main.go:141] libmachine: (ha-723098-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bd:f6", ip: ""} in network mk-ha-723098: {Iface:virbr1 ExpiryTime:2025-04-03 19:28:41 +0000 UTC Type:0 Mac:52:54:00:0d:bd:f6 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-723098-m03 Clientid:01:52:54:00:0d:bd:f6}
	I0403 18:32:31.078228  102604 main.go:141] libmachine: (ha-723098-m03) DBG | domain ha-723098-m03 has defined IP address 192.168.39.130 and MAC address 52:54:00:0d:bd:f6 in network mk-ha-723098
	I0403 18:32:31.078297  102604 host.go:66] Checking if "ha-723098-m03" exists ...
	I0403 18:32:31.078637  102604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:32:31.078679  102604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:32:31.094435  102604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44147
	I0403 18:32:31.094833  102604 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:32:31.095313  102604 main.go:141] libmachine: Using API Version  1
	I0403 18:32:31.095338  102604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:32:31.095701  102604 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:32:31.095873  102604 main.go:141] libmachine: (ha-723098-m03) Calling .DriverName
	I0403 18:32:31.096048  102604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0403 18:32:31.096068  102604 main.go:141] libmachine: (ha-723098-m03) Calling .GetSSHHostname
	I0403 18:32:31.099006  102604 main.go:141] libmachine: (ha-723098-m03) DBG | domain ha-723098-m03 has defined MAC address 52:54:00:0d:bd:f6 in network mk-ha-723098
	I0403 18:32:31.099443  102604 main.go:141] libmachine: (ha-723098-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:bd:f6", ip: ""} in network mk-ha-723098: {Iface:virbr1 ExpiryTime:2025-04-03 19:28:41 +0000 UTC Type:0 Mac:52:54:00:0d:bd:f6 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:ha-723098-m03 Clientid:01:52:54:00:0d:bd:f6}
	I0403 18:32:31.099487  102604 main.go:141] libmachine: (ha-723098-m03) DBG | domain ha-723098-m03 has defined IP address 192.168.39.130 and MAC address 52:54:00:0d:bd:f6 in network mk-ha-723098
	I0403 18:32:31.099600  102604 main.go:141] libmachine: (ha-723098-m03) Calling .GetSSHPort
	I0403 18:32:31.099817  102604 main.go:141] libmachine: (ha-723098-m03) Calling .GetSSHKeyPath
	I0403 18:32:31.099957  102604 main.go:141] libmachine: (ha-723098-m03) Calling .GetSSHUsername
	I0403 18:32:31.100147  102604 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-80797/.minikube/machines/ha-723098-m03/id_rsa Username:docker}
	I0403 18:32:31.185762  102604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 18:32:31.205967  102604 kubeconfig.go:125] found "ha-723098" server: "https://192.168.39.254:8443"
	I0403 18:32:31.206004  102604 api_server.go:166] Checking apiserver status ...
	I0403 18:32:31.206058  102604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 18:32:31.227313  102604 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup
	W0403 18:32:31.239244  102604 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0403 18:32:31.239296  102604 ssh_runner.go:195] Run: ls
	I0403 18:32:31.244470  102604 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0403 18:32:31.250948  102604 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0403 18:32:31.250976  102604 status.go:463] ha-723098-m03 apiserver status = Running (err=<nil>)
	I0403 18:32:31.250989  102604 status.go:176] ha-723098-m03 status: &{Name:ha-723098-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0403 18:32:31.251012  102604 status.go:174] checking status of ha-723098-m04 ...
	I0403 18:32:31.251457  102604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:32:31.251506  102604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:32:31.267126  102604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40559
	I0403 18:32:31.267663  102604 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:32:31.268111  102604 main.go:141] libmachine: Using API Version  1
	I0403 18:32:31.268134  102604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:32:31.268446  102604 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:32:31.268600  102604 main.go:141] libmachine: (ha-723098-m04) Calling .GetState
	I0403 18:32:31.270460  102604 status.go:371] ha-723098-m04 host status = "Running" (err=<nil>)
	I0403 18:32:31.270474  102604 host.go:66] Checking if "ha-723098-m04" exists ...
	I0403 18:32:31.270870  102604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:32:31.270930  102604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:32:31.287130  102604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44399
	I0403 18:32:31.287616  102604 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:32:31.288114  102604 main.go:141] libmachine: Using API Version  1
	I0403 18:32:31.288135  102604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:32:31.288482  102604 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:32:31.288676  102604 main.go:141] libmachine: (ha-723098-m04) Calling .GetIP
	I0403 18:32:31.291458  102604 main.go:141] libmachine: (ha-723098-m04) DBG | domain ha-723098-m04 has defined MAC address 52:54:00:60:8f:84 in network mk-ha-723098
	I0403 18:32:31.292031  102604 main.go:141] libmachine: (ha-723098-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:8f:84", ip: ""} in network mk-ha-723098: {Iface:virbr1 ExpiryTime:2025-04-03 19:30:04 +0000 UTC Type:0 Mac:52:54:00:60:8f:84 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-723098-m04 Clientid:01:52:54:00:60:8f:84}
	I0403 18:32:31.292063  102604 main.go:141] libmachine: (ha-723098-m04) DBG | domain ha-723098-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:60:8f:84 in network mk-ha-723098
	I0403 18:32:31.292257  102604 host.go:66] Checking if "ha-723098-m04" exists ...
	I0403 18:32:31.292572  102604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:32:31.292625  102604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:32:31.309055  102604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45575
	I0403 18:32:31.309601  102604 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:32:31.310084  102604 main.go:141] libmachine: Using API Version  1
	I0403 18:32:31.310107  102604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:32:31.310441  102604 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:32:31.310597  102604 main.go:141] libmachine: (ha-723098-m04) Calling .DriverName
	I0403 18:32:31.310756  102604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0403 18:32:31.310775  102604 main.go:141] libmachine: (ha-723098-m04) Calling .GetSSHHostname
	I0403 18:32:31.313991  102604 main.go:141] libmachine: (ha-723098-m04) DBG | domain ha-723098-m04 has defined MAC address 52:54:00:60:8f:84 in network mk-ha-723098
	I0403 18:32:31.314407  102604 main.go:141] libmachine: (ha-723098-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:8f:84", ip: ""} in network mk-ha-723098: {Iface:virbr1 ExpiryTime:2025-04-03 19:30:04 +0000 UTC Type:0 Mac:52:54:00:60:8f:84 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-723098-m04 Clientid:01:52:54:00:60:8f:84}
	I0403 18:32:31.314429  102604 main.go:141] libmachine: (ha-723098-m04) DBG | domain ha-723098-m04 has defined IP address 192.168.39.225 and MAC address 52:54:00:60:8f:84 in network mk-ha-723098
	I0403 18:32:31.314591  102604 main.go:141] libmachine: (ha-723098-m04) Calling .GetSSHPort
	I0403 18:32:31.314760  102604 main.go:141] libmachine: (ha-723098-m04) Calling .GetSSHKeyPath
	I0403 18:32:31.314947  102604 main.go:141] libmachine: (ha-723098-m04) Calling .GetSSHUsername
	I0403 18:32:31.315107  102604 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-80797/.minikube/machines/ha-723098-m04/id_rsa Username:docker}
	I0403 18:32:31.404593  102604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 18:32:31.421735  102604 status.go:176] ha-723098-m04 status: &{Name:ha-723098-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.48s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

TestMultiControlPlane/serial/RestartSecondaryNode (42.79s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 node start m02 -v=7 --alsologtostderr
E0403 18:33:10.831044   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-723098 node start m02 -v=7 --alsologtostderr: (41.848997243s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (42.79s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (483.34s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-723098 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-723098 -v=7 --alsologtostderr
E0403 18:35:26.970090   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:35:54.672499   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:37:23.283458   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-723098 -v=7 --alsologtostderr: (4m34.197195569s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-723098 --wait=true -v=7 --alsologtostderr
E0403 18:38:46.357463   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:40:26.970224   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-723098 --wait=true -v=7 --alsologtostderr: (3m29.038440188s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-723098
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (483.34s)

TestMultiControlPlane/serial/DeleteSecondaryNode (6.88s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-723098 node delete m03 -v=7 --alsologtostderr: (6.132347153s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.88s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

TestMultiControlPlane/serial/StopCluster (183.13s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 stop -v=7 --alsologtostderr
E0403 18:42:23.283955   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-723098 stop -v=7 --alsologtostderr: (3m3.018182141s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-723098 status -v=7 --alsologtostderr: exit status 7 (107.735351ms)

-- stdout --
	ha-723098
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-723098-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-723098-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0403 18:44:29.681740  106316 out.go:345] Setting OutFile to fd 1 ...
	I0403 18:44:29.681840  106316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:44:29.681845  106316 out.go:358] Setting ErrFile to fd 2...
	I0403 18:44:29.681857  106316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:44:29.682046  106316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-80797/.minikube/bin
	I0403 18:44:29.682201  106316 out.go:352] Setting JSON to false
	I0403 18:44:29.682231  106316 mustload.go:65] Loading cluster: ha-723098
	I0403 18:44:29.682377  106316 notify.go:220] Checking for updates...
	I0403 18:44:29.682673  106316 config.go:182] Loaded profile config "ha-723098": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0403 18:44:29.682699  106316 status.go:174] checking status of ha-723098 ...
	I0403 18:44:29.683237  106316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:44:29.683322  106316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:44:29.702401  106316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35063
	I0403 18:44:29.702873  106316 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:44:29.703548  106316 main.go:141] libmachine: Using API Version  1
	I0403 18:44:29.703585  106316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:44:29.703963  106316 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:44:29.704151  106316 main.go:141] libmachine: (ha-723098) Calling .GetState
	I0403 18:44:29.705930  106316 status.go:371] ha-723098 host status = "Stopped" (err=<nil>)
	I0403 18:44:29.705943  106316 status.go:384] host is not running, skipping remaining checks
	I0403 18:44:29.705948  106316 status.go:176] ha-723098 status: &{Name:ha-723098 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0403 18:44:29.705973  106316 status.go:174] checking status of ha-723098-m02 ...
	I0403 18:44:29.706298  106316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:44:29.706349  106316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:44:29.721457  106316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I0403 18:44:29.721838  106316 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:44:29.722293  106316 main.go:141] libmachine: Using API Version  1
	I0403 18:44:29.722317  106316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:44:29.722654  106316 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:44:29.722810  106316 main.go:141] libmachine: (ha-723098-m02) Calling .GetState
	I0403 18:44:29.724234  106316 status.go:371] ha-723098-m02 host status = "Stopped" (err=<nil>)
	I0403 18:44:29.724244  106316 status.go:384] host is not running, skipping remaining checks
	I0403 18:44:29.724250  106316 status.go:176] ha-723098-m02 status: &{Name:ha-723098-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0403 18:44:29.724264  106316 status.go:174] checking status of ha-723098-m04 ...
	I0403 18:44:29.724529  106316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:44:29.724568  106316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:44:29.739008  106316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44941
	I0403 18:44:29.739417  106316 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:44:29.739854  106316 main.go:141] libmachine: Using API Version  1
	I0403 18:44:29.739883  106316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:44:29.740224  106316 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:44:29.740398  106316 main.go:141] libmachine: (ha-723098-m04) Calling .GetState
	I0403 18:44:29.742024  106316 status.go:371] ha-723098-m04 host status = "Stopped" (err=<nil>)
	I0403 18:44:29.742037  106316 status.go:384] host is not running, skipping remaining checks
	I0403 18:44:29.742042  106316 status.go:176] ha-723098-m04 status: &{Name:ha-723098-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (183.13s)

TestMultiControlPlane/serial/RestartCluster (121.9s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-723098 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0403 18:45:26.969425   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-723098 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m1.102293194s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (121.90s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

TestMultiControlPlane/serial/AddSecondaryNode (73.4s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-723098 --control-plane -v=7 --alsologtostderr
E0403 18:46:50.036590   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:47:23.284132   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-723098 --control-plane -v=7 --alsologtostderr: (1m12.559171984s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-723098 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.40s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestJSONOutput/start/Command (56.82s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-944939 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-944939 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (56.817461133s)
--- PASS: TestJSONOutput/start/Command (56.82s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.7s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-944939 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-944939 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.56s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-944939 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-944939 --output=json --user=testUser: (6.556821412s)
--- PASS: TestJSONOutput/stop/Command (6.56s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-598578 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-598578 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.342614ms)
-- stdout --
	{"specversion":"1.0","id":"d39a503d-ed31-436e-8a29-b5ff7fa65d1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-598578] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f0647ee3-459d-4554-849c-b002c1e26586","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20591"}}
	{"specversion":"1.0","id":"075a05fd-a9de-491d-8a58-dc768e4c82ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"66934708-a909-444b-b2d4-64f3c07f7454","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20591-80797/kubeconfig"}}
	{"specversion":"1.0","id":"0a4c12f2-10f6-4d13-88e8-c2f1da2dc53c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-80797/.minikube"}}
	{"specversion":"1.0","id":"57f6c799-d454-4394-9948-1528c5030cd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"659d60ce-4987-4d2c-b33d-9a97b45da603","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a040a5ec-7043-47dd-8714-22901994832c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-598578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-598578
--- PASS: TestErrorJSONOutput (0.20s)
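Each stdout line captured above is a single CloudEvents 1.0 envelope, which is what `minikube start --output=json` emits. A minimal sketch of consuming one such line (the error event is copied from the stdout capture above; only whitespace differs):

```python
import json

# One CloudEvents envelope per line, as captured in the test's stdout above.
line = ('{"specversion":"1.0","id":"a040a5ec-7043-47dd-8714-22901994832c",'
        '"source":"https://minikube.sigs.k8s.io/",'
        '"type":"io.k8s.sigs.minikube.error",'
        '"datacontenttype":"application/json",'
        '"data":{"advice":"","exitcode":"56","issues":"",'
        '"message":"The driver \'fail\' is not supported on linux/amd64",'
        '"name":"DRV_UNSUPPORTED_OS","url":""}}')

event = json.loads(line)
# The "type" field distinguishes step, info, and error events; the error
# payload carries the exit code the test asserts on (exit status 56).
assert event["type"] == "io.k8s.sigs.minikube.error"
print(event["data"]["exitcode"], event["data"]["name"])  # → 56 DRV_UNSUPPORTED_OS
```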

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (94.66s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-687172 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-687172 --driver=kvm2  --container-runtime=containerd: (44.852364235s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-702318 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-702318 --driver=kvm2  --container-runtime=containerd: (46.734440892s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-687172
E0403 18:50:26.970232   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-702318
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-702318" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-702318
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-702318: (1.007329454s)
helpers_test.go:175: Cleaning up "first-687172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-687172
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-687172: (1.009930497s)
--- PASS: TestMinikubeProfile (94.66s)

TestMountStart/serial/StartWithMountFirst (31.15s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-780453 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-780453 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (30.145715163s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.15s)

TestMountStart/serial/VerifyMountFirst (0.39s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-780453 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-780453 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (29.73s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-797988 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-797988 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.724327643s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.73s)

TestMountStart/serial/VerifyMountSecond (0.38s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-797988 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-797988 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.67s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-780453 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-797988 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-797988 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.29s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-797988
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-797988: (1.289394376s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (26.2s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-797988
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-797988: (25.196678728s)
--- PASS: TestMountStart/serial/RestartStopped (26.20s)

TestMountStart/serial/VerifyMountPostStop (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-797988 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-797988 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (113.24s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-406369 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0403 18:52:23.283683   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-406369 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m52.830503242s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (113.24s)

TestMultiNode/serial/DeployApp2Nodes (7.34s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-406369 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-406369 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-406369 -- rollout status deployment/busybox: (5.522981474s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-406369 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-406369 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-406369 -- exec busybox-58667487b6-plzb5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-406369 -- exec busybox-58667487b6-w8p5b -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-406369 -- exec busybox-58667487b6-plzb5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-406369 -- exec busybox-58667487b6-w8p5b -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-406369 -- exec busybox-58667487b6-plzb5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-406369 -- exec busybox-58667487b6-w8p5b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.34s)

TestMultiNode/serial/PingHostFrom2Pods (0.79s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-406369 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-406369 -- exec busybox-58667487b6-plzb5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-406369 -- exec busybox-58667487b6-plzb5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-406369 -- exec busybox-58667487b6-w8p5b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-406369 -- exec busybox-58667487b6-w8p5b -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)
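The pipeline the test runs inside each pod, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, takes line 5 of the nslookup output and its third space-separated field to recover the host IP. A sketch of that extraction; the sample output below is an assumed busybox-style nslookup response, not captured from this run:

```python
# Illustrative busybox-style `nslookup host.minikube.internal` output
# (assumption: real pod output may differ in layout).
sample = """\
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal
"""

lines = sample.splitlines()
fifth_line = lines[4]          # awk 'NR==5' selects the 5th line
ip = fifth_line.split(" ")[2]  # cut -d' ' -f3 takes the 3rd space-delimited field
print(ip)  # → 192.168.39.1
```

The extracted address is then the target of the `ping -c 1 192.168.39.1` calls seen above.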

TestMultiNode/serial/AddNode (51.84s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-406369 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-406369 -v 3 --alsologtostderr: (51.26940935s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.84s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-406369 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.57s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

TestMultiNode/serial/CopyFile (7.21s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 cp testdata/cp-test.txt multinode-406369:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 cp multinode-406369:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4126611984/001/cp-test_multinode-406369.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 cp multinode-406369:/home/docker/cp-test.txt multinode-406369-m02:/home/docker/cp-test_multinode-406369_multinode-406369-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369-m02 "sudo cat /home/docker/cp-test_multinode-406369_multinode-406369-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 cp multinode-406369:/home/docker/cp-test.txt multinode-406369-m03:/home/docker/cp-test_multinode-406369_multinode-406369-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369-m03 "sudo cat /home/docker/cp-test_multinode-406369_multinode-406369-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 cp testdata/cp-test.txt multinode-406369-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 cp multinode-406369-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4126611984/001/cp-test_multinode-406369-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 cp multinode-406369-m02:/home/docker/cp-test.txt multinode-406369:/home/docker/cp-test_multinode-406369-m02_multinode-406369.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369 "sudo cat /home/docker/cp-test_multinode-406369-m02_multinode-406369.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 cp multinode-406369-m02:/home/docker/cp-test.txt multinode-406369-m03:/home/docker/cp-test_multinode-406369-m02_multinode-406369-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369-m03 "sudo cat /home/docker/cp-test_multinode-406369-m02_multinode-406369-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 cp testdata/cp-test.txt multinode-406369-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 cp multinode-406369-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4126611984/001/cp-test_multinode-406369-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 cp multinode-406369-m03:/home/docker/cp-test.txt multinode-406369:/home/docker/cp-test_multinode-406369-m03_multinode-406369.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369 "sudo cat /home/docker/cp-test_multinode-406369-m03_multinode-406369.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 cp multinode-406369-m03:/home/docker/cp-test.txt multinode-406369-m02:/home/docker/cp-test_multinode-406369-m03_multinode-406369-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 ssh -n multinode-406369-m02 "sudo cat /home/docker/cp-test_multinode-406369-m03_multinode-406369-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.21s)
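The node-to-node portion of the CopyFile sequence above covers every ordered pair of the three nodes (plus testdata-to-node and node-to-local copies, not shown here). A sketch of how that command matrix can be enumerated, using the profile and node names from the log:

```python
from itertools import permutations

profile = "multinode-406369"
nodes = ["multinode-406369", "multinode-406369-m02", "multinode-406369-m03"]

# Every ordered (src, dst) pair gets one `minikube cp` invocation, matching
# the node-to-node copies in the log above.
commands = [
    f"out/minikube-linux-amd64 -p {profile} cp {src}:/home/docker/cp-test.txt "
    f"{dst}:/home/docker/cp-test_{src}_{dst}.txt"
    for src, dst in permutations(nodes, 2)
]

print(len(commands))  # → 6 (3 nodes, 6 ordered pairs)
```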

TestMultiNode/serial/StopNode (2.29s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-406369 node stop m03: (1.435978437s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-406369 status: exit status 7 (422.228825ms)
-- stdout --
	multinode-406369
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-406369-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-406369-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-406369 status --alsologtostderr: exit status 7 (431.012997ms)
-- stdout --
	multinode-406369
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-406369-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-406369-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0403 18:55:04.336428  113940 out.go:345] Setting OutFile to fd 1 ...
	I0403 18:55:04.336661  113940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:55:04.336670  113940 out.go:358] Setting ErrFile to fd 2...
	I0403 18:55:04.336674  113940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 18:55:04.336853  113940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-80797/.minikube/bin
	I0403 18:55:04.337052  113940 out.go:352] Setting JSON to false
	I0403 18:55:04.337095  113940 mustload.go:65] Loading cluster: multinode-406369
	I0403 18:55:04.337225  113940 notify.go:220] Checking for updates...
	I0403 18:55:04.337992  113940 config.go:182] Loaded profile config "multinode-406369": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0403 18:55:04.338043  113940 status.go:174] checking status of multinode-406369 ...
	I0403 18:55:04.339418  113940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:55:04.339495  113940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:55:04.360778  113940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
	I0403 18:55:04.361241  113940 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:55:04.361898  113940 main.go:141] libmachine: Using API Version  1
	I0403 18:55:04.361935  113940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:55:04.362327  113940 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:55:04.362528  113940 main.go:141] libmachine: (multinode-406369) Calling .GetState
	I0403 18:55:04.364152  113940 status.go:371] multinode-406369 host status = "Running" (err=<nil>)
	I0403 18:55:04.364168  113940 host.go:66] Checking if "multinode-406369" exists ...
	I0403 18:55:04.364468  113940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:55:04.364526  113940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:55:04.379972  113940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34195
	I0403 18:55:04.380354  113940 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:55:04.380762  113940 main.go:141] libmachine: Using API Version  1
	I0403 18:55:04.380783  113940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:55:04.381126  113940 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:55:04.381285  113940 main.go:141] libmachine: (multinode-406369) Calling .GetIP
	I0403 18:55:04.383879  113940 main.go:141] libmachine: (multinode-406369) DBG | domain multinode-406369 has defined MAC address 52:54:00:ee:c0:47 in network mk-multinode-406369
	I0403 18:55:04.384286  113940 main.go:141] libmachine: (multinode-406369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c0:47", ip: ""} in network mk-multinode-406369: {Iface:virbr1 ExpiryTime:2025-04-03 19:52:16 +0000 UTC Type:0 Mac:52:54:00:ee:c0:47 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-406369 Clientid:01:52:54:00:ee:c0:47}
	I0403 18:55:04.384312  113940 main.go:141] libmachine: (multinode-406369) DBG | domain multinode-406369 has defined IP address 192.168.39.129 and MAC address 52:54:00:ee:c0:47 in network mk-multinode-406369
	I0403 18:55:04.384439  113940 host.go:66] Checking if "multinode-406369" exists ...
	I0403 18:55:04.384903  113940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:55:04.384944  113940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:55:04.400084  113940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33197
	I0403 18:55:04.400571  113940 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:55:04.401057  113940 main.go:141] libmachine: Using API Version  1
	I0403 18:55:04.401078  113940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:55:04.401385  113940 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:55:04.401547  113940 main.go:141] libmachine: (multinode-406369) Calling .DriverName
	I0403 18:55:04.401715  113940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0403 18:55:04.401746  113940 main.go:141] libmachine: (multinode-406369) Calling .GetSSHHostname
	I0403 18:55:04.404236  113940 main.go:141] libmachine: (multinode-406369) DBG | domain multinode-406369 has defined MAC address 52:54:00:ee:c0:47 in network mk-multinode-406369
	I0403 18:55:04.404669  113940 main.go:141] libmachine: (multinode-406369) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c0:47", ip: ""} in network mk-multinode-406369: {Iface:virbr1 ExpiryTime:2025-04-03 19:52:16 +0000 UTC Type:0 Mac:52:54:00:ee:c0:47 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-406369 Clientid:01:52:54:00:ee:c0:47}
	I0403 18:55:04.404697  113940 main.go:141] libmachine: (multinode-406369) DBG | domain multinode-406369 has defined IP address 192.168.39.129 and MAC address 52:54:00:ee:c0:47 in network mk-multinode-406369
	I0403 18:55:04.404802  113940 main.go:141] libmachine: (multinode-406369) Calling .GetSSHPort
	I0403 18:55:04.404996  113940 main.go:141] libmachine: (multinode-406369) Calling .GetSSHKeyPath
	I0403 18:55:04.405148  113940 main.go:141] libmachine: (multinode-406369) Calling .GetSSHUsername
	I0403 18:55:04.405298  113940 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-80797/.minikube/machines/multinode-406369/id_rsa Username:docker}
	I0403 18:55:04.492810  113940 ssh_runner.go:195] Run: systemctl --version
	I0403 18:55:04.501170  113940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 18:55:04.517839  113940 kubeconfig.go:125] found "multinode-406369" server: "https://192.168.39.129:8443"
	I0403 18:55:04.517894  113940 api_server.go:166] Checking apiserver status ...
	I0403 18:55:04.517927  113940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0403 18:55:04.530823  113940 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup
	W0403 18:55:04.540230  113940 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1136/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0403 18:55:04.540282  113940 ssh_runner.go:195] Run: ls
	I0403 18:55:04.544534  113940 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0403 18:55:04.549235  113940 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0403 18:55:04.549256  113940 status.go:463] multinode-406369 apiserver status = Running (err=<nil>)
	I0403 18:55:04.549270  113940 status.go:176] multinode-406369 status: &{Name:multinode-406369 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0403 18:55:04.549290  113940 status.go:174] checking status of multinode-406369-m02 ...
	I0403 18:55:04.549682  113940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:55:04.549729  113940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:55:04.565290  113940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36743
	I0403 18:55:04.565714  113940 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:55:04.566258  113940 main.go:141] libmachine: Using API Version  1
	I0403 18:55:04.566286  113940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:55:04.566629  113940 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:55:04.566826  113940 main.go:141] libmachine: (multinode-406369-m02) Calling .GetState
	I0403 18:55:04.568238  113940 status.go:371] multinode-406369-m02 host status = "Running" (err=<nil>)
	I0403 18:55:04.568256  113940 host.go:66] Checking if "multinode-406369-m02" exists ...
	I0403 18:55:04.568548  113940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:55:04.568584  113940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:55:04.583243  113940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33235
	I0403 18:55:04.583751  113940 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:55:04.584302  113940 main.go:141] libmachine: Using API Version  1
	I0403 18:55:04.584328  113940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:55:04.584650  113940 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:55:04.584919  113940 main.go:141] libmachine: (multinode-406369-m02) Calling .GetIP
	I0403 18:55:04.587815  113940 main.go:141] libmachine: (multinode-406369-m02) DBG | domain multinode-406369-m02 has defined MAC address 52:54:00:d9:b3:df in network mk-multinode-406369
	I0403 18:55:04.588256  113940 main.go:141] libmachine: (multinode-406369-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b3:df", ip: ""} in network mk-multinode-406369: {Iface:virbr1 ExpiryTime:2025-04-03 19:53:19 +0000 UTC Type:0 Mac:52:54:00:d9:b3:df Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-406369-m02 Clientid:01:52:54:00:d9:b3:df}
	I0403 18:55:04.588294  113940 main.go:141] libmachine: (multinode-406369-m02) DBG | domain multinode-406369-m02 has defined IP address 192.168.39.73 and MAC address 52:54:00:d9:b3:df in network mk-multinode-406369
	I0403 18:55:04.588336  113940 host.go:66] Checking if "multinode-406369-m02" exists ...
	I0403 18:55:04.588657  113940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:55:04.588700  113940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:55:04.604355  113940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0403 18:55:04.604835  113940 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:55:04.605270  113940 main.go:141] libmachine: Using API Version  1
	I0403 18:55:04.605290  113940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:55:04.605623  113940 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:55:04.605802  113940 main.go:141] libmachine: (multinode-406369-m02) Calling .DriverName
	I0403 18:55:04.606028  113940 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0403 18:55:04.606048  113940 main.go:141] libmachine: (multinode-406369-m02) Calling .GetSSHHostname
	I0403 18:55:04.608862  113940 main.go:141] libmachine: (multinode-406369-m02) DBG | domain multinode-406369-m02 has defined MAC address 52:54:00:d9:b3:df in network mk-multinode-406369
	I0403 18:55:04.609270  113940 main.go:141] libmachine: (multinode-406369-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:b3:df", ip: ""} in network mk-multinode-406369: {Iface:virbr1 ExpiryTime:2025-04-03 19:53:19 +0000 UTC Type:0 Mac:52:54:00:d9:b3:df Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-406369-m02 Clientid:01:52:54:00:d9:b3:df}
	I0403 18:55:04.609303  113940 main.go:141] libmachine: (multinode-406369-m02) DBG | domain multinode-406369-m02 has defined IP address 192.168.39.73 and MAC address 52:54:00:d9:b3:df in network mk-multinode-406369
	I0403 18:55:04.609448  113940 main.go:141] libmachine: (multinode-406369-m02) Calling .GetSSHPort
	I0403 18:55:04.609630  113940 main.go:141] libmachine: (multinode-406369-m02) Calling .GetSSHKeyPath
	I0403 18:55:04.609781  113940 main.go:141] libmachine: (multinode-406369-m02) Calling .GetSSHUsername
	I0403 18:55:04.609896  113940 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20591-80797/.minikube/machines/multinode-406369-m02/id_rsa Username:docker}
	I0403 18:55:04.687056  113940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0403 18:55:04.701171  113940 status.go:176] multinode-406369-m02 status: &{Name:multinode-406369-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0403 18:55:04.701212  113940 status.go:174] checking status of multinode-406369-m03 ...
	I0403 18:55:04.701541  113940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 18:55:04.701595  113940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 18:55:04.717817  113940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45263
	I0403 18:55:04.718301  113940 main.go:141] libmachine: () Calling .GetVersion
	I0403 18:55:04.718750  113940 main.go:141] libmachine: Using API Version  1
	I0403 18:55:04.718775  113940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 18:55:04.719109  113940 main.go:141] libmachine: () Calling .GetMachineName
	I0403 18:55:04.719308  113940 main.go:141] libmachine: (multinode-406369-m03) Calling .GetState
	I0403 18:55:04.720791  113940 status.go:371] multinode-406369-m03 host status = "Stopped" (err=<nil>)
	I0403 18:55:04.720809  113940 status.go:384] host is not running, skipping remaining checks
	I0403 18:55:04.720816  113940 status.go:176] multinode-406369-m03 status: &{Name:multinode-406369-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)

TestMultiNode/serial/StartAfterStop (39.08s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 node start m03 -v=7 --alsologtostderr
E0403 18:55:26.358829   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 18:55:26.969686   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-406369 node start m03 -v=7 --alsologtostderr: (38.472128071s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.08s)

TestMultiNode/serial/RestartKeepsNodes (314.78s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-406369
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-406369
E0403 18:57:23.289264   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-406369: (3m2.768437899s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-406369 --wait=true -v=8 --alsologtostderr
E0403 19:00:26.970047   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-406369 --wait=true -v=8 --alsologtostderr: (2m11.923993593s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-406369
--- PASS: TestMultiNode/serial/RestartKeepsNodes (314.78s)

TestMultiNode/serial/DeleteNode (2.19s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-406369 node delete m03: (1.656463466s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.19s)

TestMultiNode/serial/StopMultiNode (181.87s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 stop
E0403 19:02:23.285535   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:03:30.040459   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-406369 stop: (3m1.694592308s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-406369 status: exit status 7 (85.063564ms)

-- stdout --
	multinode-406369
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-406369-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-406369 status --alsologtostderr: exit status 7 (86.638448ms)

-- stdout --
	multinode-406369
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-406369-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0403 19:04:02.604474  116645 out.go:345] Setting OutFile to fd 1 ...
	I0403 19:04:02.604578  116645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:04:02.604587  116645 out.go:358] Setting ErrFile to fd 2...
	I0403 19:04:02.604592  116645 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:04:02.604770  116645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-80797/.minikube/bin
	I0403 19:04:02.604939  116645 out.go:352] Setting JSON to false
	I0403 19:04:02.604970  116645 mustload.go:65] Loading cluster: multinode-406369
	I0403 19:04:02.605079  116645 notify.go:220] Checking for updates...
	I0403 19:04:02.605409  116645 config.go:182] Loaded profile config "multinode-406369": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0403 19:04:02.605433  116645 status.go:174] checking status of multinode-406369 ...
	I0403 19:04:02.605967  116645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 19:04:02.606023  116645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:04:02.621179  116645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
	I0403 19:04:02.621658  116645 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:04:02.622267  116645 main.go:141] libmachine: Using API Version  1
	I0403 19:04:02.622301  116645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:04:02.622642  116645 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:04:02.622802  116645 main.go:141] libmachine: (multinode-406369) Calling .GetState
	I0403 19:04:02.624229  116645 status.go:371] multinode-406369 host status = "Stopped" (err=<nil>)
	I0403 19:04:02.624249  116645 status.go:384] host is not running, skipping remaining checks
	I0403 19:04:02.624255  116645 status.go:176] multinode-406369 status: &{Name:multinode-406369 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0403 19:04:02.624272  116645 status.go:174] checking status of multinode-406369-m02 ...
	I0403 19:04:02.624583  116645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0403 19:04:02.624619  116645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0403 19:04:02.639437  116645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40509
	I0403 19:04:02.639820  116645 main.go:141] libmachine: () Calling .GetVersion
	I0403 19:04:02.640271  116645 main.go:141] libmachine: Using API Version  1
	I0403 19:04:02.640292  116645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0403 19:04:02.640631  116645 main.go:141] libmachine: () Calling .GetMachineName
	I0403 19:04:02.640801  116645 main.go:141] libmachine: (multinode-406369-m02) Calling .GetState
	I0403 19:04:02.642301  116645 status.go:371] multinode-406369-m02 host status = "Stopped" (err=<nil>)
	I0403 19:04:02.642314  116645 status.go:384] host is not running, skipping remaining checks
	I0403 19:04:02.642320  116645 status.go:176] multinode-406369-m02 status: &{Name:multinode-406369-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.87s)

TestMultiNode/serial/RestartMultiNode (93.21s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-406369 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0403 19:05:26.969450   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-406369 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m32.6771005s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-406369 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (93.21s)

TestMultiNode/serial/ValidateNameConflict (45.96s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-406369
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-406369-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-406369-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (60.834946ms)

-- stdout --
	* [multinode-406369-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-80797/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-80797/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-406369-m02' is duplicated with machine name 'multinode-406369-m02' in profile 'multinode-406369'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-406369-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-406369-m03 --driver=kvm2  --container-runtime=containerd: (44.601397773s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-406369
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-406369: exit status 80 (224.024001ms)

-- stdout --
	* Adding node m03 to cluster multinode-406369 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-406369-m03 already exists in multinode-406369-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-406369-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-406369-m03: (1.023224916s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.96s)

TestPreload (250.9s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-404280 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0403 19:07:23.283589   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-404280 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m28.967214618s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-404280 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-404280 image pull gcr.io/k8s-minikube/busybox: (4.640955295s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-404280
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-404280: (1m30.959190402s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-404280 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0403 19:10:26.969509   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-404280 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m5.249051298s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-404280 image list
helpers_test.go:175: Cleaning up "test-preload-404280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-404280
--- PASS: TestPreload (250.90s)

TestScheduledStopUnix (118.98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-833771 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-833771 --memory=2048 --driver=kvm2  --container-runtime=containerd: (47.368574653s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-833771 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-833771 -n scheduled-stop-833771
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-833771 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0403 19:11:21.973239   88051 retry.go:31] will retry after 110.538µs: open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/scheduled-stop-833771/pid: no such file or directory
I0403 19:11:21.974377   88051 retry.go:31] will retry after 148.756µs: open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/scheduled-stop-833771/pid: no such file or directory
I0403 19:11:21.975537   88051 retry.go:31] will retry after 214.625µs: open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/scheduled-stop-833771/pid: no such file or directory
I0403 19:11:21.976700   88051 retry.go:31] will retry after 492.822µs: open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/scheduled-stop-833771/pid: no such file or directory
I0403 19:11:21.977863   88051 retry.go:31] will retry after 315.616µs: open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/scheduled-stop-833771/pid: no such file or directory
I0403 19:11:21.979028   88051 retry.go:31] will retry after 997.112µs: open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/scheduled-stop-833771/pid: no such file or directory
I0403 19:11:21.980157   88051 retry.go:31] will retry after 751.24µs: open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/scheduled-stop-833771/pid: no such file or directory
I0403 19:11:21.981304   88051 retry.go:31] will retry after 1.359096ms: open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/scheduled-stop-833771/pid: no such file or directory
I0403 19:11:21.983533   88051 retry.go:31] will retry after 3.02241ms: open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/scheduled-stop-833771/pid: no such file or directory
I0403 19:11:21.986737   88051 retry.go:31] will retry after 2.085139ms: open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/scheduled-stop-833771/pid: no such file or directory
I0403 19:11:21.988881   88051 retry.go:31] will retry after 5.757837ms: open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/scheduled-stop-833771/pid: no such file or directory
I0403 19:11:21.995076   88051 retry.go:31] will retry after 5.459401ms: open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/scheduled-stop-833771/pid: no such file or directory
I0403 19:11:22.001330   88051 retry.go:31] will retry after 16.296571ms: open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/scheduled-stop-833771/pid: no such file or directory
I0403 19:11:22.018551   88051 retry.go:31] will retry after 25.835202ms: open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/scheduled-stop-833771/pid: no such file or directory
I0403 19:11:22.044817   88051 retry.go:31] will retry after 43.126708ms: open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/scheduled-stop-833771/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-833771 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-833771 -n scheduled-stop-833771
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-833771
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-833771 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0403 19:12:06.363144   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:12:23.288915   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-833771
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-833771: exit status 7 (65.924411ms)

-- stdout --
	scheduled-stop-833771
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-833771 -n scheduled-stop-833771
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-833771 -n scheduled-stop-833771: exit status 7 (64.743842ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-833771" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-833771
--- PASS: TestScheduledStopUnix (118.98s)

TestRunningBinaryUpgrade (203.67s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2641604532 start -p running-upgrade-922230 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2641604532 start -p running-upgrade-922230 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m6.652692488s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-922230 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-922230 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m10.856753421s)
helpers_test.go:175: Cleaning up "running-upgrade-922230" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-922230
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-922230: (1.160266809s)
--- PASS: TestRunningBinaryUpgrade (203.67s)

TestKubernetesUpgrade (134.03s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-681807 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-681807 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m3.867094618s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-681807
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-681807: (1.568141775s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-681807 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-681807 status --format={{.Host}}: exit status 7 (68.612292ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-681807 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0403 19:25:06.926445   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:25:06.932839   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:25:06.944237   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:25:06.965595   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:25:07.007207   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:25:07.088660   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:25:07.250185   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:25:07.571885   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:25:08.213998   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:25:09.496470   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:25:12.057787   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:25:17.179411   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:25:26.969924   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-681807 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (37.140197376s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-681807 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-681807 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-681807 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (91.899998ms)

-- stdout --
	* [kubernetes-upgrade-681807] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-80797/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-80797/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-681807
	    minikube start -p kubernetes-upgrade-681807 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6818072 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-681807 --kubernetes-version=v1.32.2

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-681807 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0403 19:25:27.420738   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:25:47.902436   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-681807 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (30.018132524s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-681807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-681807
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-681807: (1.179332566s)
--- PASS: TestKubernetesUpgrade (134.03s)
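The exit-status-106 failure above is the expected guard: minikube refuses to move an existing v1.32.2 cluster back to v1.20.0. The check amounts to a field-by-field semantic-version comparison; a self-contained sketch of that comparison (illustrative only, not minikube's actual implementation):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits a version like "v1.32.2" into its numeric fields.
func parse(v string) [3]int {
	var out [3]int
	for i, p := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
		n, _ := strconv.Atoi(p)
		out[i] = n
	}
	return out
}

// downgradeRequested reports whether requested is older than existing —
// the condition the test expects to be rejected with K8S_DOWNGRADE_UNSUPPORTED.
func downgradeRequested(existing, requested string) bool {
	e, r := parse(existing), parse(requested)
	for i := 0; i < 3; i++ {
		if r[i] != e[i] {
			return r[i] < e[i]
		}
	}
	return false
}

func main() {
	fmt.Println(downgradeRequested("v1.32.2", "v1.20.0")) // true: refuse the downgrade
	fmt.Println(downgradeRequested("v1.20.0", "v1.32.2")) // false: upgrades are allowed
}
```

Restarting with the same version (the `Attempting restart after unsuccessful downgrade` step below) passes precisely because equal versions compare as neither newer nor older.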

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-901906 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-901906 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (81.586834ms)

-- stdout --
	* [NoKubernetes-901906] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-80797/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-80797/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
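The exit-status-14 result above is a pure flag-validation error: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. A minimal sketch of that kind of mutual-exclusion check (hypothetical function and signature, not minikube's code):

```go
package main

import (
	"errors"
	"fmt"
)

// validateFlags rejects the combination the log shows failing:
// a Kubernetes version requested together with --no-kubernetes.
func validateFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	fmt.Println(validateFlags(true, "1.20")) // the failing combination from the log
	fmt.Println(validateFlags(true, ""))     // --no-kubernetes alone is valid
}
```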

TestStartStop/group/old-k8s-version/serial/FirstStart (153.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-127319 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m33.41170245s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.41s)

TestNoKubernetes/serial/StartWithK8s (96s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-901906 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-901906 --driver=kvm2  --container-runtime=containerd: (1m35.747295147s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-901906 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.00s)

TestNoKubernetes/serial/StartWithStopK8s (30.97s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-901906 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-901906 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.875454249s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-901906 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-901906 status -o json: exit status 2 (274.85388ms)

-- stdout --
	{"Name":"NoKubernetes-901906","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-901906
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.97s)

TestNetworkPlugins/group/false (3.11s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-190028 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-190028 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (99.811754ms)

-- stdout --
	* [false-190028] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20591
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20591-80797/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-80797/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration

-- /stdout --
** stderr ** 
	I0403 19:14:27.487795  122337 out.go:345] Setting OutFile to fd 1 ...
	I0403 19:14:27.488525  122337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:14:27.488545  122337 out.go:358] Setting ErrFile to fd 2...
	I0403 19:14:27.488552  122337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0403 19:14:27.489006  122337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20591-80797/.minikube/bin
	I0403 19:14:27.490024  122337 out.go:352] Setting JSON to false
	I0403 19:14:27.490917  122337 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":10599,"bootTime":1743697068,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0403 19:14:27.491008  122337 start.go:139] virtualization: kvm guest
	I0403 19:14:27.492592  122337 out.go:177] * [false-190028] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0403 19:14:27.493956  122337 notify.go:220] Checking for updates...
	I0403 19:14:27.493976  122337 out.go:177]   - MINIKUBE_LOCATION=20591
	I0403 19:14:27.495209  122337 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0403 19:14:27.496439  122337 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20591-80797/kubeconfig
	I0403 19:14:27.497582  122337 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20591-80797/.minikube
	I0403 19:14:27.498730  122337 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0403 19:14:27.499834  122337 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0403 19:14:27.501496  122337 config.go:182] Loaded profile config "NoKubernetes-901906": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0403 19:14:27.501615  122337 config.go:182] Loaded profile config "old-k8s-version-127319": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0403 19:14:27.501703  122337 config.go:182] Loaded profile config "running-upgrade-922230": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0403 19:14:27.501783  122337 driver.go:394] Setting default libvirt URI to qemu:///system
	I0403 19:14:27.537750  122337 out.go:177] * Using the kvm2 driver based on user configuration
	I0403 19:14:27.538926  122337 start.go:297] selected driver: kvm2
	I0403 19:14:27.538942  122337 start.go:901] validating driver "kvm2" against <nil>
	I0403 19:14:27.538957  122337 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0403 19:14:27.540903  122337 out.go:201] 
	W0403 19:14:27.542013  122337 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0403 19:14:27.543203  122337 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-190028 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-190028

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-190028

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-190028

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-190028

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-190028

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-190028

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-190028

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-190028

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-190028

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-190028

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-190028

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-190028" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-190028" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-190028" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-190028" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-190028" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-190028" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-190028" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-190028" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: ip a s:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: ip r s:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: iptables-save:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: iptables table nat:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> k8s: describe kube-proxy daemon set:
error: context "false-190028" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-190028" does not exist

>>> k8s: kube-proxy logs:
error: context "false-190028" does not exist

>>> host: kubelet daemon status:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: kubelet daemon config:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> k8s: kubelet logs:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20591-80797/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:14:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.88:8443
  name: NoKubernetes-901906
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20591-80797/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:13:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.56:8443
  name: old-k8s-version-127319
contexts:
- context:
    cluster: NoKubernetes-901906
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:14:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: NoKubernetes-901906
  name: NoKubernetes-901906
- context:
    cluster: old-k8s-version-127319
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:13:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: old-k8s-version-127319
  name: old-k8s-version-127319
current-context: NoKubernetes-901906
kind: Config
preferences: {}
users:
- name: NoKubernetes-901906
  user:
    client-certificate: /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/NoKubernetes-901906/client.crt
    client-key: /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/NoKubernetes-901906/client.key
- name: old-k8s-version-127319
  user:
    client-certificate: /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt
    client-key: /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-190028

>>> host: docker daemon status:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: docker daemon config:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: /etc/docker/daemon.json:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: docker system info:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: cri-docker daemon status:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: cri-docker daemon config:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: cri-dockerd version:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: containerd daemon status:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: containerd daemon config:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: /etc/containerd/config.toml:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: containerd config dump:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: crio daemon status:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: crio daemon config:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: /etc/crio:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

>>> host: crio config:
* Profile "false-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-190028"

----------------------- debugLogs end: false-190028 [took: 2.868905162s] --------------------------------
helpers_test.go:175: Cleaning up "false-190028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-190028
--- PASS: TestNetworkPlugins/group/false (3.11s)

TestNoKubernetes/serial/Start (60.78s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-901906 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-901906 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m0.78347937s)
--- PASS: TestNoKubernetes/serial/Start (60.78s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-127319 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7afcaba8-6d41-4bba-9b98-d3fd4d790ceb] Pending
helpers_test.go:344: "busybox" [7afcaba8-6d41-4bba-9b98-d3fd4d790ceb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7afcaba8-6d41-4bba-9b98-d3fd4d790ceb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.006916025s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-127319 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.53s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-127319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-127319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.444137594s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-127319 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.55s)

TestStartStop/group/old-k8s-version/serial/Stop (91.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-127319 --alsologtostderr -v=3
E0403 19:15:26.969962   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-127319 --alsologtostderr -v=3: (1m31.542565958s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.54s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-901906 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-901906 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.516222ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

TestNoKubernetes/serial/ProfileList (71.09s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1m7.250456835s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.838562591s)
--- PASS: TestNoKubernetes/serial/ProfileList (71.09s)

TestNoKubernetes/serial/Stop (1.42s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-901906
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-901906: (1.420243307s)
--- PASS: TestNoKubernetes/serial/Stop (1.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-127319 -n old-k8s-version-127319
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-127319 -n old-k8s-version-127319: exit status 7 (73.080302ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-127319 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (396.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-127319 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-127319 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (6m35.796131566s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-127319 -n old-k8s-version-127319
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (396.10s)

TestPause/serial/Start (75.16s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-214704 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
E0403 19:17:23.283321   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-214704 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m15.160048942s)
--- PASS: TestPause/serial/Start (75.16s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-418982 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-418982 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (1m19.606769976s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.61s)

TestPause/serial/SecondStartNoReconfiguration (42.65s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-214704 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-214704 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (42.62791695s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.65s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-418982 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dce49ac5-3498-4ba5-b57c-95342b4facd1] Pending
helpers_test.go:344: "busybox" [dce49ac5-3498-4ba5-b57c-95342b4facd1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dce49ac5-3498-4ba5-b57c-95342b4facd1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003208542s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-418982 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-418982 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-418982 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-418982 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-418982 --alsologtostderr -v=3: (1m31.090683166s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.09s)

TestPause/serial/Pause (0.66s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-214704 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.66s)

TestPause/serial/VerifyStatus (0.25s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-214704 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-214704 --output=json --layout=cluster: exit status 2 (245.137785ms)

-- stdout --
	{"Name":"pause-214704","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-214704","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)

TestPause/serial/Unpause (0.66s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-214704 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

TestPause/serial/PauseAgain (0.77s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-214704 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.77s)

TestPause/serial/DeletePaused (0.77s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-214704 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.77s)

TestPause/serial/VerifyDeletedResources (33.25s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (33.251529628s)
--- PASS: TestPause/serial/VerifyDeletedResources (33.25s)

TestStartStop/group/embed-certs/serial/FirstStart (82.17s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-312042 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
E0403 19:20:10.041897   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-312042 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (1m22.172253435s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.17s)

TestStartStop/group/no-preload/serial/FirstStart (101.85s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-843938 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
E0403 19:20:26.969903   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-843938 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (1m41.848503859s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (101.85s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-418982 -n default-k8s-diff-port-418982
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-418982 -n default-k8s-diff-port-418982: exit status 7 (75.969813ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-418982 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (319.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-418982 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-418982 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (5m19.099891392s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-418982 -n default-k8s-diff-port-418982
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (319.38s)

TestStartStop/group/embed-certs/serial/DeployApp (12.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-312042 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7a293044-a9bd-46e0-a532-de0d29be40a4] Pending
helpers_test.go:344: "busybox" [7a293044-a9bd-46e0-a532-de0d29be40a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7a293044-a9bd-46e0-a532-de0d29be40a4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.00370717s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-312042 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.31s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-312042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-312042 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/embed-certs/serial/Stop (91s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-312042 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-312042 --alsologtostderr -v=3: (1m31.002966579s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.00s)

TestStartStop/group/no-preload/serial/DeployApp (12.28s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-843938 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d4b3591f-1973-4a95-b2c2-5f45b0bd0484] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d4b3591f-1973-4a95-b2c2-5f45b0bd0484] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.004089881s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-843938 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.28s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-843938 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-843938 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/no-preload/serial/Stop (91.02s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-843938 --alsologtostderr -v=3
E0403 19:22:23.283203   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-843938 --alsologtostderr -v=3: (1m31.017246495s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-312042 -n embed-certs-312042
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-312042 -n embed-certs-312042: exit status 7 (64.780436ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-312042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
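The EnableAddonAfterStop steps above deliberately tolerate `status` exiting 7 ("may be ok") before enabling the dashboard addon on a stopped cluster. A minimal sketch of that tolerance in plain shell, using a stub in place of `out/minikube-linux-amd64 status` since minikube itself is not assumed to be on PATH:

```shell
#!/bin/sh
# status_stub stands in for `minikube status ...` against a stopped host,
# which (per this log) exits with code 7.
status_stub() { return 7; }

if status_stub; then
  echo "host running"
else
  rc=$?
  if [ "$rc" -eq 7 ]; then
    echo "status error: exit status 7 (may be ok)"  # mirrors the log line
  else
    echo "unexpected status: $rc" >&2
    exit "$rc"
  fi
fi
```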

TestStartStop/group/embed-certs/serial/SecondStart (310.93s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-312042 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-312042 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (5m10.300498909s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-312042 -n embed-certs-312042
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (310.93s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-vmtrj" [af9ed5c4-7b54-4a81-b8d9-d13cd5442ce0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003216383s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-vmtrj" [af9ed5c4-7b54-4a81-b8d9-d13cd5442ce0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004632237s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-127319 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-127319 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-127319 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-127319 -n old-k8s-version-127319
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-127319 -n old-k8s-version-127319: exit status 2 (243.544443ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-127319 -n old-k8s-version-127319
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-127319 -n old-k8s-version-127319: exit status 2 (250.697541ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-127319 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-127319 -n old-k8s-version-127319
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-127319 -n old-k8s-version-127319
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.42s)
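Throughout the Pause checks above, the harness treats certain non-zero `status` exit codes as acceptable ("may be ok"). A sketch of that interpretation, with the code-to-state mapping inferred from this log only (2 while components are paused/stopped, 7 when the host is stopped), not from minikube documentation; `check_status` is a hypothetical helper:

```shell
#!/bin/sh
# Hypothetical helper: map a `minikube status` exit code to a verdict.
# Mapping inferred from this log, not from minikube's documented codes.
check_status() {
  case "$1" in
    0) echo "ok" ;;        # every component running
    2) echo "degraded" ;;  # e.g. APIServer Paused, kubelet Stopped
    7) echo "stopped" ;;   # host itself is stopped (may be ok mid-test)
    *) echo "error" ;;
  esac
}

check_status 2   # prints "degraded"
```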

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-843938 -n no-preload-843938
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-843938 -n no-preload-843938: exit status 7 (82.113681ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-843938 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (339.49s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-843938 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-843938 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (5m39.126643967s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-843938 -n no-preload-843938
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (339.49s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xwfc8" [46cf3545-0687-492b-8f04-5642f8087412] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xwfc8" [46cf3545-0687-492b-8f04-5642f8087412] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.003575243s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.00s)

TestStoppedBinaryUpgrade/Setup (3.8s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.80s)

TestStoppedBinaryUpgrade/Upgrade (125.88s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1987824389 start -p stopped-upgrade-602508 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1987824389 start -p stopped-upgrade-602508 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (55.625237175s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1987824389 -p stopped-upgrade-602508 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1987824389 -p stopped-upgrade-602508 stop: (1.827552303s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-602508 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-602508 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m8.425534642s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (125.88s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xwfc8" [46cf3545-0687-492b-8f04-5642f8087412] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00464562s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-418982 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-418982 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
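VerifyKubernetesImages reports any image outside the expected minikube set ("Found non-minikube image: ..."). A rough sketch of that classification over a hard-coded sample list; the real test parses `image list --format=json`, and treating only `registry.k8s.io/*` as expected is a simplification for illustration:

```shell
#!/bin/sh
# Hard-coded sample; the real test reads `minikube image list --format=json`.
images='registry.k8s.io/kube-apiserver:v1.32.2
kindest/kindnetd:v20241212-9f82dd49
gcr.io/k8s-minikube/busybox:1.28.4-glibc'

echo "$images" | while read -r img; do
  case "$img" in
    registry.k8s.io/*) : ;;  # simplified "expected" set
    *) echo "Found non-minikube image: $img" ;;
  esac
done
```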

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-418982 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-418982 -n default-k8s-diff-port-418982
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-418982 -n default-k8s-diff-port-418982: exit status 2 (241.374965ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-418982 -n default-k8s-diff-port-418982
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-418982 -n default-k8s-diff-port-418982: exit status 2 (237.905092ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-418982 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-418982 -n default-k8s-diff-port-418982
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-418982 -n default-k8s-diff-port-418982
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

TestStartStop/group/newest-cni/serial/FirstStart (61.03s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-557425 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
E0403 19:26:28.864308   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-557425 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (1m1.02739405s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.03s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-557425 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/newest-cni/serial/Stop (2.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-557425 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-557425 --alsologtostderr -v=3: (2.319970331s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-557425 -n newest-cni-557425
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-557425 -n newest-cni-557425: exit status 7 (72.427253ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-557425 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (34.39s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-557425 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
E0403 19:27:23.283554   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:27:50.786462   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-557425 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (34.077034627s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-557425 -n newest-cni-557425
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.39s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-557425 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/newest-cni/serial/Pause (2.95s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-557425 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-557425 -n newest-cni-557425
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-557425 -n newest-cni-557425: exit status 2 (261.786386ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-557425 -n newest-cni-557425
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-557425 -n newest-cni-557425: exit status 2 (259.936562ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-557425 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-557425 -n newest-cni-557425
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-557425 -n newest-cni-557425
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.95s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-190028 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-190028 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (55.906416996s)
--- PASS: TestNetworkPlugins/group/auto/Start (55.91s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-602508
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-602508: (1.173762117s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rc8qs" [6ff6ac92-f39c-4377-9980-e6f77b9ba55c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rc8qs" [6ff6ac92-f39c-4377-9980-e6f77b9ba55c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004567267s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-190028 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-190028 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m36.42134999s)
--- PASS: TestNetworkPlugins/group/flannel/Start (96.42s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rc8qs" [6ff6ac92-f39c-4377-9980-e6f77b9ba55c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004484166s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-312042 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-312042 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-312042 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-312042 -n embed-certs-312042
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-312042 -n embed-certs-312042: exit status 2 (236.278016ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-312042 -n embed-certs-312042
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-312042 -n embed-certs-312042: exit status 2 (241.114274ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-312042 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-312042 -n embed-certs-312042
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-312042 -n embed-certs-312042
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.53s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-190028 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
E0403 19:28:46.364493   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:28:53.995508   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/default-k8s-diff-port-418982/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:28:54.001967   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/default-k8s-diff-port-418982/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:28:54.013453   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/default-k8s-diff-port-418982/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:28:54.034936   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/default-k8s-diff-port-418982/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:28:54.076440   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/default-k8s-diff-port-418982/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:28:54.158055   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/default-k8s-diff-port-418982/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:28:54.319621   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/default-k8s-diff-port-418982/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:28:54.641768   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/default-k8s-diff-port-418982/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:28:55.283913   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/default-k8s-diff-port-418982/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:28:56.566197   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/default-k8s-diff-port-418982/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-190028 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (2m6.953609046s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (126.95s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-190028 "pgrep -a kubelet"
I0403 19:28:58.023862   88051 config.go:182] Loaded profile config "auto-190028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-190028 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7p9hk" [2dc13a91-4d14-49bc-a1ac-1b69abcb948b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0403 19:28:59.127710   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/default-k8s-diff-port-418982/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:29:04.250194   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/default-k8s-diff-port-418982/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-7p9hk" [2dc13a91-4d14-49bc-a1ac-1b69abcb948b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004855392s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.33s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-190028 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-190028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-190028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4j7d8" [762f2d57-7fb3-40fd-8b4a-4d8161566446] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4j7d8" [762f2d57-7fb3-40fd-8b4a-4d8161566446] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.194296498s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.20s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-190028 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-190028 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m29.855344978s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.86s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4j7d8" [762f2d57-7fb3-40fd-8b4a-4d8161566446] Running
E0403 19:29:34.973668   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/default-k8s-diff-port-418982/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005260017s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-843938 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-843938 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-843938 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-843938 -n no-preload-843938
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-843938 -n no-preload-843938: exit status 2 (283.005268ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-843938 -n no-preload-843938
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-843938 -n no-preload-843938: exit status 2 (296.652309ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-843938 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-843938 -n no-preload-843938
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-843938 -n no-preload-843938
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.09s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-190028 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-190028 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m35.024229392s)
--- PASS: TestNetworkPlugins/group/calico/Start (95.02s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-bm9fk" [877d199a-e9d3-4be9-b116-42d313dec59d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003720491s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-190028 "pgrep -a kubelet"
I0403 19:29:53.151413   88051 config.go:182] Loaded profile config "flannel-190028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-190028 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7lk94" [b096eed8-16a8-427a-86f7-769fdc4489d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7lk94" [b096eed8-16a8-427a-86f7-769fdc4489d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004263529s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-190028 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-190028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-190028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-190028 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E0403 19:30:26.969819   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/functional-138112/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-190028 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m11.644136289s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.64s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-190028 "pgrep -a kubelet"
I0403 19:30:32.380032   88051 config.go:182] Loaded profile config "enable-default-cni-190028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-190028 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dnbfn" [fefe8070-4e51-48f7-a951-4c6386423605] Pending
helpers_test.go:344: "netcat-5d86dc444-dnbfn" [fefe8070-4e51-48f7-a951-4c6386423605] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0403 19:30:34.628721   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-dnbfn" [fefe8070-4e51-48f7-a951-4c6386423605] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004463005s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-190028 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-190028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-190028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-190028 "pgrep -a kubelet"
I0403 19:30:57.303655   88051 config.go:182] Loaded profile config "bridge-190028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.47s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-190028 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-s842d" [5175a099-2464-4e78-a64f-0f413af53e06] Pending
helpers_test.go:344: "netcat-5d86dc444-s842d" [5175a099-2464-4e78-a64f-0f413af53e06] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-s842d" [5175a099-2464-4e78-a64f-0f413af53e06] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004307394s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-190028 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-190028 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m14.569723206s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.57s)

TestNetworkPlugins/group/bridge/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-190028 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)
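The DNS subtest above runs `nslookup kubernetes.default` inside the netcat pod; the check passes iff cluster DNS resolves the service name. A minimal equivalent of "does this name resolve" in Python (resolving `localhost` here, since cluster DNS is only reachable from inside a pod):

```python
import socket

def resolves(name: str) -> bool:
    """Succeed iff `name` resolves, like `nslookup` exiting 0."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False

print(resolves("localhost"))  # True
```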

TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-190028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-190028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
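In the HairPin subtest above, the netcat pod dials its own Service name (`nc -w 5 -i 5 -z netcat 8080`), exercising hairpin NAT: traffic leaving a pod for a Service that routes back to the same pod. The `-z` probe is just a TCP connect with no data; a Python equivalent, probing a throwaway local listener since no cluster Service exists outside the test environment:

```python
import socket

def tcp_reachable(host, port, timeout=5.0):
    """Like `nc -w 5 -z host port`: succeed iff a TCP connect completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand-in listener for the netcat Service on port 8080.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
print(tcp_reachable("127.0.0.1", srv.getsockname()[1]))  # True
srv.close()
```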

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7bznd" [2a824386-907b-4da3-a3bb-a15407e9bd15] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004891405s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.57s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-190028 "pgrep -a kubelet"
I0403 19:31:24.444844   88051 config.go:182] Loaded profile config "calico-190028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.57s)

TestNetworkPlugins/group/calico/NetCatPod (9.22s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-190028 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vzks6" [f7617563-a394-4e5a-bcf4-dab5157b6da9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vzks6" [f7617563-a394-4e5a-bcf4-dab5157b6da9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003629863s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.22s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-l6prk" [77e2876a-cae0-4fb7-b904-531996dc2fb8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.016627377s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-190028 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-190028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-190028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-190028 "pgrep -a kubelet"
I0403 19:31:37.099692   88051 config.go:182] Loaded profile config "kindnet-190028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.31s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-190028 replace --force -f testdata/netcat-deployment.yaml
E0403 19:31:37.857784   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/default-k8s-diff-port-418982/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:149: (dbg) Done: kubectl --context kindnet-190028 replace --force -f testdata/netcat-deployment.yaml: (1.224729804s)
I0403 19:31:38.357972   88051 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ksg6d" [f848026f-62ba-4938-b90a-304804a12f30] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ksg6d" [f848026f-62ba-4938-b90a-304804a12f30] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004456804s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.31s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-190028 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-190028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-190028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-190028 "pgrep -a kubelet"
I0403 19:32:15.843373   88051 config.go:182] Loaded profile config "custom-flannel-190028": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-190028 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-2pflt" [93554c80-2b02-4a43-b51b-ef882a88d74f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-2pflt" [93554c80-2b02-4a43-b51b-ef882a88d74f] Running
E0403 19:32:21.312695   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/no-preload-843938/client.crt: no such file or directory" logger="UnhandledError"
E0403 19:32:23.283623   88051 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/addons-245089/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004089614s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-190028 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-190028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-190028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)
Test skip (39/328)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.2/cached-images 0
15 TestDownloadOnly/v1.32.2/binaries 0
16 TestDownloadOnly/v1.32.2/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
144 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
148 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
260 TestStartStop/group/disable-driver-mounts 0.14
268 TestNetworkPlugins/group/kubenet 5.2
276 TestNetworkPlugins/group/cilium 3.36

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnly/v1.32.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.14s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-498668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-498668
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestNetworkPlugins/group/kubenet (5.2s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-190028 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-190028

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-190028

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-190028

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-190028

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-190028

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-190028

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-190028

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-190028

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-190028

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-190028

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: /etc/hosts:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: /etc/resolv.conf:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-190028

>>> host: crictl pods:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: crictl containers:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> k8s: describe netcat deployment:
error: context "kubenet-190028" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-190028" does not exist

>>> k8s: netcat logs:
error: context "kubenet-190028" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-190028" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-190028" does not exist

>>> k8s: coredns logs:
error: context "kubenet-190028" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-190028" does not exist

>>> k8s: api server logs:
error: context "kubenet-190028" does not exist

>>> host: /etc/cni:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: ip a s:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: ip r s:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: iptables-save:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: iptables table nat:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-190028" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-190028" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-190028" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: kubelet daemon config:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> k8s: kubelet logs:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20591-80797/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:14:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.88:8443
  name: NoKubernetes-901906
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20591-80797/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:13:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.56:8443
  name: old-k8s-version-127319
contexts:
- context:
    cluster: NoKubernetes-901906
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:14:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: NoKubernetes-901906
  name: NoKubernetes-901906
- context:
    cluster: old-k8s-version-127319
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:13:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: old-k8s-version-127319
  name: old-k8s-version-127319
current-context: NoKubernetes-901906
kind: Config
preferences: {}
users:
- name: NoKubernetes-901906
  user:
    client-certificate: /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/NoKubernetes-901906/client.crt
    client-key: /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/NoKubernetes-901906/client.key
- name: old-k8s-version-127319
  user:
    client-certificate: /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt
    client-key: /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-190028

>>> host: docker daemon status:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: docker daemon config:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: docker system info:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: cri-docker daemon status:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: cri-docker daemon config:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: cri-dockerd version:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: containerd daemon status:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: containerd daemon config:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: containerd config dump:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: crio daemon status:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: crio daemon config:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: /etc/crio:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

>>> host: crio config:
* Profile "kubenet-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-190028"

----------------------- debugLogs end: kubenet-190028 [took: 5.060714756s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-190028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-190028
--- SKIP: TestNetworkPlugins/group/kubenet (5.20s)

TestNetworkPlugins/group/cilium (3.36s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-190028 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-190028

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-190028

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-190028

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-190028

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-190028

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-190028

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-190028

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-190028

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-190028

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-190028

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-190028

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-190028" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-190028" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-190028" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-190028" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-190028" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-190028" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-190028" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-190028" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-190028

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-190028

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-190028" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-190028" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-190028

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-190028

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-190028" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-190028" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-190028" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-190028" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-190028" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: kubelet daemon config:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> k8s: kubelet logs:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20591-80797/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:14:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.88:8443
  name: NoKubernetes-901906
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20591-80797/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:13:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.56:8443
  name: old-k8s-version-127319
contexts:
- context:
    cluster: NoKubernetes-901906
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:14:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: NoKubernetes-901906
  name: NoKubernetes-901906
- context:
    cluster: old-k8s-version-127319
    extensions:
    - extension:
        last-update: Thu, 03 Apr 2025 19:13:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: old-k8s-version-127319
  name: old-k8s-version-127319
current-context: NoKubernetes-901906
kind: Config
preferences: {}
users:
- name: NoKubernetes-901906
  user:
    client-certificate: /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/NoKubernetes-901906/client.crt
    client-key: /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/NoKubernetes-901906/client.key
- name: old-k8s-version-127319
  user:
    client-certificate: /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.crt
    client-key: /home/jenkins/minikube-integration/20591-80797/.minikube/profiles/old-k8s-version-127319/client.key
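Note: every "context does not exist" / "context was not found" error in this debug dump comes from the collector querying context cilium-190028, while the kubeconfig above only contains NoKubernetes-901906 and old-k8s-version-127319 (the cilium profile was never created because the test was skipped). A minimal Python sketch of that membership check, using a hand-written dict that mirrors the structure of the dump rather than parsing the real kubeconfig file:

```python
# Hypothetical stand-in for the kubeconfig shown above; only the fields
# needed for the context-existence check are included.
kubeconfig = {
    "current-context": "NoKubernetes-901906",
    "contexts": [
        {"name": "NoKubernetes-901906"},
        {"name": "old-k8s-version-127319"},
    ],
}

def context_exists(cfg, name):
    """Return True if `name` appears in the kubeconfig's contexts list."""
    return any(c["name"] == name for c in cfg.get("contexts", []))

# The debug collector asked for "cilium-190028", which is absent,
# so every kubectl invocation below fails with a context error.
print(context_exists(kubeconfig, "cilium-190028"))        # False
print(context_exists(kubeconfig, "NoKubernetes-901906"))  # True
```

The equivalent on the command line would be `kubectl config get-contexts`, which lists only the two profiles present in the dump.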

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-190028

>>> host: docker daemon status:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: docker daemon config:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: docker system info:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: cri-docker daemon status:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: cri-docker daemon config:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: cri-dockerd version:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: containerd daemon status:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: containerd daemon config:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: containerd config dump:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: crio daemon status:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: crio daemon config:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: /etc/crio:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

>>> host: crio config:
* Profile "cilium-190028" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-190028"

----------------------- debugLogs end: cilium-190028 [took: 3.197702811s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-190028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-190028
--- SKIP: TestNetworkPlugins/group/cilium (3.36s)
