Test Report: KVM_Linux_containerd 21753

37d7943b58d61ad05591f3a5d0091cda14132c69:2025-10-17:41947

Test fail (1/330)

Order  Failed test  Duration (s)
90 TestFunctional/parallel/ConfigCmd 0.36
TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-088611 config get cpus: exit status 14 (60.975312ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 config get cpus
functional_test.go:1225: expected config error for "out/minikube-linux-amd64 -p functional-088611 config get cpus" to be -""- but got *"E1017 19:08:35.021967   85402 logFile.go:53] failed to close the audit log: invalid argument"*
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-088611 config get cpus: exit status 14 (57.188563ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- FAIL: TestFunctional/parallel/ConfigCmd (0.36s)
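
A minimal reproduction sketch of the failing sequence, assuming a built minikube binary at out/minikube-linux-amd64 and an existing profile named functional-088611 (both taken from the log above; your binary path and profile name may differ). The assertion at functional_test.go:1225 fails on the fourth command: after "config set cpus 2", the test expects "config get cpus" to print nothing on stderr, but this run emitted an audit-log close error instead.

	# ConfigCmd sequence as run by the test, with the outcomes seen in the log
	out/minikube-linux-amd64 -p functional-088611 config unset cpus
	out/minikube-linux-amd64 -p functional-088611 config get cpus    # exit status 14: "specified key could not be found in config" (expected)
	out/minikube-linux-amd64 -p functional-088611 config set cpus 2
	out/minikube-linux-amd64 -p functional-088611 config get cpus    # expected: empty stderr; observed: "failed to close the audit log: invalid argument"
	out/minikube-linux-amd64 -p functional-088611 config unset cpus
	out/minikube-linux-amd64 -p functional-088611 config get cpus    # exit status 14 again (expected)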

Test pass (290/330)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 31.2
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.14
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 16.82
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.14
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.65
22 TestOffline 79.28
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 209.47
29 TestAddons/serial/Volcano 42.53
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 12.52
35 TestAddons/parallel/Registry 18.51
36 TestAddons/parallel/RegistryCreds 0.94
37 TestAddons/parallel/Ingress 23.7
38 TestAddons/parallel/InspektorGadget 5.29
39 TestAddons/parallel/MetricsServer 6.31
41 TestAddons/parallel/CSI 50.75
42 TestAddons/parallel/Headlamp 21.01
43 TestAddons/parallel/CloudSpanner 6.62
44 TestAddons/parallel/LocalPath 59.17
45 TestAddons/parallel/NvidiaDevicePlugin 6.88
46 TestAddons/parallel/Yakd 11.91
48 TestAddons/StoppedEnableDisable 87.8
49 TestCertOptions 69.89
50 TestCertExpiration 316.03
52 TestForceSystemdFlag 96.32
53 TestForceSystemdEnv 42.45
55 TestKVMDriverInstallOrUpdate 11.74
59 TestErrorSpam/setup 41.96
60 TestErrorSpam/start 0.36
61 TestErrorSpam/status 0.79
62 TestErrorSpam/pause 1.65
63 TestErrorSpam/unpause 1.94
64 TestErrorSpam/stop 5.6
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 81.15
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 47.29
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.01
76 TestFunctional/serial/CacheCmd/cache/add_local 2.58
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 42.16
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.41
87 TestFunctional/serial/LogsFileCmd 1.44
88 TestFunctional/serial/InvalidService 4.55
91 TestFunctional/parallel/DashboardCmd 16.64
92 TestFunctional/parallel/DryRun 0.29
93 TestFunctional/parallel/InternationalLanguage 0.14
94 TestFunctional/parallel/StatusCmd 0.84
98 TestFunctional/parallel/ServiceCmdConnect 23.54
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 50.55
102 TestFunctional/parallel/SSHCmd 0.44
103 TestFunctional/parallel/CpCmd 1.31
104 TestFunctional/parallel/MySQL 27.71
105 TestFunctional/parallel/FileSync 0.24
106 TestFunctional/parallel/CertSync 1.38
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
114 TestFunctional/parallel/License 0.69
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
127 TestFunctional/parallel/ServiceCmd/DeployApp 23.17
128 TestFunctional/parallel/ServiceCmd/List 0.48
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
131 TestFunctional/parallel/ProfileCmd/profile_list 0.35
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
134 TestFunctional/parallel/ServiceCmd/Format 0.32
135 TestFunctional/parallel/MountCmd/any-port 9.56
136 TestFunctional/parallel/ServiceCmd/URL 0.3
137 TestFunctional/parallel/Version/short 0.06
138 TestFunctional/parallel/Version/components 0.71
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
143 TestFunctional/parallel/ImageCommands/ImageBuild 6.39
144 TestFunctional/parallel/ImageCommands/Setup 2.48
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.46
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.24
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.44
148 TestFunctional/parallel/MountCmd/specific-port 1.7
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.44
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.66
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.68
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.96
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 208.59
162 TestMultiControlPlane/serial/DeployApp 9.8
163 TestMultiControlPlane/serial/PingHostFromPods 1.28
164 TestMultiControlPlane/serial/AddWorkerNode 49.66
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
167 TestMultiControlPlane/serial/CopyFile 13.64
168 TestMultiControlPlane/serial/StopSecondaryNode 90.58
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
170 TestMultiControlPlane/serial/RestartSecondaryNode 30.11
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.03
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 372.93
173 TestMultiControlPlane/serial/DeleteSecondaryNode 7.21
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
175 TestMultiControlPlane/serial/StopCluster 264.08
176 TestMultiControlPlane/serial/RestartCluster 117.01
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
178 TestMultiControlPlane/serial/AddSecondaryNode 84.28
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
183 TestJSONOutput/start/Command 87.64
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.78
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.71
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 1.9
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 87.65
215 TestMountStart/serial/StartWithMountFirst 24.95
216 TestMountStart/serial/VerifyMountFirst 0.39
217 TestMountStart/serial/StartWithMountSecond 22.21
218 TestMountStart/serial/VerifyMountSecond 0.37
219 TestMountStart/serial/DeleteFirst 0.72
220 TestMountStart/serial/VerifyMountPostDelete 0.37
221 TestMountStart/serial/Stop 1.35
222 TestMountStart/serial/RestartStopped 21.52
223 TestMountStart/serial/VerifyMountPostStop 0.37
226 TestMultiNode/serial/FreshStart2Nodes 106.77
227 TestMultiNode/serial/DeployApp2Nodes 7.15
228 TestMultiNode/serial/PingHostFrom2Pods 0.81
229 TestMultiNode/serial/AddNode 45.67
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.61
232 TestMultiNode/serial/CopyFile 7.39
233 TestMultiNode/serial/StopNode 2.41
234 TestMultiNode/serial/StartAfterStop 35.36
235 TestMultiNode/serial/RestartKeepsNodes 282.71
236 TestMultiNode/serial/DeleteNode 2.22
237 TestMultiNode/serial/StopMultiNode 152.48
238 TestMultiNode/serial/RestartMultiNode 81.9
239 TestMultiNode/serial/ValidateNameConflict 44.24
244 TestPreload 134.77
246 TestScheduledStopUnix 113.21
250 TestRunningBinaryUpgrade 154.94
252 TestKubernetesUpgrade 144.86
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
259 TestNoKubernetes/serial/StartWithK8s 85.85
264 TestNetworkPlugins/group/false 3.2
268 TestNoKubernetes/serial/StartWithStopK8s 67.43
269 TestNoKubernetes/serial/Start 58.53
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
271 TestNoKubernetes/serial/ProfileList 8.06
272 TestNoKubernetes/serial/Stop 1.52
273 TestNoKubernetes/serial/StartNoArgs 24.11
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
275 TestStoppedBinaryUpgrade/Setup 3.72
276 TestStoppedBinaryUpgrade/Upgrade 101.15
285 TestPause/serial/Start 55.96
286 TestNetworkPlugins/group/auto/Start 86.56
287 TestPause/serial/SecondStartNoReconfiguration 52.44
288 TestStoppedBinaryUpgrade/MinikubeLogs 1.49
289 TestNetworkPlugins/group/kindnet/Start 65.6
290 TestNetworkPlugins/group/calico/Start 98.93
291 TestNetworkPlugins/group/auto/KubeletFlags 0.25
292 TestNetworkPlugins/group/auto/NetCatPod 12.33
293 TestPause/serial/Pause 1.2
294 TestPause/serial/VerifyStatus 0.28
295 TestPause/serial/Unpause 1.18
296 TestPause/serial/PauseAgain 1.04
297 TestPause/serial/DeletePaused 1.33
298 TestPause/serial/VerifyDeletedResources 0.67
299 TestNetworkPlugins/group/custom-flannel/Start 85.42
300 TestNetworkPlugins/group/auto/DNS 0.15
301 TestNetworkPlugins/group/auto/Localhost 0.14
302 TestNetworkPlugins/group/auto/HairPin 0.15
303 TestNetworkPlugins/group/enable-default-cni/Start 100.95
304 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
305 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
306 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
307 TestNetworkPlugins/group/kindnet/DNS 0.18
308 TestNetworkPlugins/group/kindnet/Localhost 0.14
309 TestNetworkPlugins/group/kindnet/HairPin 0.14
310 TestNetworkPlugins/group/flannel/Start 92.52
311 TestNetworkPlugins/group/calico/ControllerPod 6.01
312 TestNetworkPlugins/group/calico/KubeletFlags 0.24
313 TestNetworkPlugins/group/calico/NetCatPod 11.31
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.3
316 TestNetworkPlugins/group/calico/DNS 0.16
317 TestNetworkPlugins/group/calico/Localhost 0.13
318 TestNetworkPlugins/group/calico/HairPin 0.15
319 TestNetworkPlugins/group/custom-flannel/DNS 0.24
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
322 TestNetworkPlugins/group/bridge/Start 96.37
324 TestStartStop/group/old-k8s-version/serial/FirstStart 110.9
325 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
326 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.49
327 TestNetworkPlugins/group/enable-default-cni/DNS 0.37
328 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
329 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
331 TestStartStop/group/no-preload/serial/FirstStart 102.36
332 TestNetworkPlugins/group/flannel/ControllerPod 6.01
333 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
334 TestNetworkPlugins/group/flannel/NetCatPod 9.29
335 TestNetworkPlugins/group/flannel/DNS 0.16
336 TestNetworkPlugins/group/flannel/Localhost 0.13
337 TestNetworkPlugins/group/flannel/HairPin 0.13
339 TestStartStop/group/embed-certs/serial/FirstStart 90.21
340 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
341 TestNetworkPlugins/group/bridge/NetCatPod 10.26
342 TestNetworkPlugins/group/bridge/DNS 0.15
343 TestNetworkPlugins/group/bridge/Localhost 0.13
344 TestNetworkPlugins/group/bridge/HairPin 0.14
345 TestStartStop/group/old-k8s-version/serial/DeployApp 12.4
347 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.66
348 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.35
349 TestStartStop/group/old-k8s-version/serial/Stop 88.47
350 TestStartStop/group/no-preload/serial/DeployApp 13.32
351 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
352 TestStartStop/group/no-preload/serial/Stop 85.31
353 TestStartStop/group/embed-certs/serial/DeployApp 11.28
354 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
355 TestStartStop/group/embed-certs/serial/Stop 88.04
356 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.38
357 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
358 TestStartStop/group/old-k8s-version/serial/SecondStart 40.61
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
360 TestStartStop/group/default-k8s-diff-port/serial/Stop 83.01
361 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
362 TestStartStop/group/no-preload/serial/SecondStart 43.88
363 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.03
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.08
365 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
366 TestStartStop/group/old-k8s-version/serial/Pause 3.1
367 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
368 TestStartStop/group/embed-certs/serial/SecondStart 44.19
370 TestStartStop/group/newest-cni/serial/FirstStart 64.39
371 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.01
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
373 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
374 TestStartStop/group/no-preload/serial/Pause 3.43
375 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
376 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.79
377 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
378 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
379 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
380 TestStartStop/group/embed-certs/serial/Pause 3.23
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.19
383 TestStartStop/group/newest-cni/serial/Stop 2.08
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
385 TestStartStop/group/newest-cni/serial/SecondStart 33.39
386 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
388 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
389 TestStartStop/group/default-k8s-diff-port/serial/Pause 3
390 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
393 TestStartStop/group/newest-cni/serial/Pause 2.8
TestDownloadOnly/v1.28.0/json-events (31.2s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-874303 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-874303 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (31.195285383s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (31.20s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1017 18:57:09.612721   78783 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1017 18:57:09.612841   78783 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-74819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-874303
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-874303: exit status 85 (62.57466ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                      ARGS                                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-874303 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false │ download-only-874303 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 18:56:38
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 18:56:38.459353   78795 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:56:38.460061   78795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:38.460078   78795 out.go:374] Setting ErrFile to fd 2...
	I1017 18:56:38.460086   78795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:56:38.460872   78795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-74819/.minikube/bin
	W1017 18:56:38.461091   78795 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21753-74819/.minikube/config/config.json: open /home/jenkins/minikube-integration/21753-74819/.minikube/config/config.json: no such file or directory
	I1017 18:56:38.461660   78795 out.go:368] Setting JSON to true
	I1017 18:56:38.462644   78795 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5938,"bootTime":1760721460,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 18:56:38.462734   78795 start.go:141] virtualization: kvm guest
	I1017 18:56:38.464782   78795 out.go:99] [download-only-874303] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1017 18:56:38.464899   78795 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21753-74819/.minikube/cache/preloaded-tarball: no such file or directory
	I1017 18:56:38.464965   78795 notify.go:220] Checking for updates...
	I1017 18:56:38.466036   78795 out.go:171] MINIKUBE_LOCATION=21753
	I1017 18:56:38.467219   78795 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 18:56:38.468558   78795 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21753-74819/kubeconfig
	I1017 18:56:38.469833   78795 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-74819/.minikube
	I1017 18:56:38.471120   78795 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1017 18:56:38.473062   78795 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1017 18:56:38.473368   78795 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 18:56:38.506031   78795 out.go:99] Using the kvm2 driver based on user configuration
	I1017 18:56:38.506096   78795 start.go:305] selected driver: kvm2
	I1017 18:56:38.506109   78795 start.go:925] validating driver "kvm2" against <nil>
	I1017 18:56:38.506535   78795 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 18:56:38.506654   78795 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-74819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 18:56:38.520429   78795 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 18:56:38.520454   78795 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-74819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 18:56:38.533607   78795 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 18:56:38.533665   78795 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 18:56:38.534300   78795 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1017 18:56:38.534475   78795 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 18:56:38.534502   78795 cni.go:84] Creating CNI manager for ""
	I1017 18:56:38.534568   78795 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1017 18:56:38.534580   78795 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1017 18:56:38.534649   78795 start.go:349] cluster config:
	{Name:download-only-874303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-874303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 18:56:38.534946   78795 iso.go:125] acquiring lock: {Name:mk33c243d2b00e80c564bda6122868528d14c3f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 18:56:38.536747   78795 out.go:99] Downloading VM boot image ...
	I1017 18:56:38.536789   78795 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21753-74819/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1017 18:56:52.824270   78795 out.go:99] Starting "download-only-874303" primary control-plane node in "download-only-874303" cluster
	I1017 18:56:52.824295   78795 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1017 18:56:52.978153   78795 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1017 18:56:52.978212   78795 cache.go:58] Caching tarball of preloaded images
	I1017 18:56:52.978426   78795 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1017 18:56:52.980082   78795 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1017 18:56:52.980106   78795 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1017 18:56:53.132477   78795 preload.go:290] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1017 18:56:53.132596   78795 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21753-74819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-874303 host does not exist
	  To start a cluster, run: "minikube start -p download-only-874303"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.14s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-874303
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.1/json-events (16.82s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-426587 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-426587 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (16.821838973s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (16.82s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1017 18:57:26.774638   78783 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1017 18:57:26.774700   78783 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21753-74819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-426587
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-426587: exit status 85 (62.784605ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                      ARGS                                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-874303 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false │ download-only-874303 │ jenkins │ v1.37.0 │ 17 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                           │ minikube             │ jenkins │ v1.37.0 │ 17 Oct 25 18:57 UTC │ 17 Oct 25 18:57 UTC │
	│ delete  │ -p download-only-874303                                                                                                                                                                                         │ download-only-874303 │ jenkins │ v1.37.0 │ 17 Oct 25 18:57 UTC │ 17 Oct 25 18:57 UTC │
	│ start   │ -o=json --download-only -p download-only-426587 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false │ download-only-426587 │ jenkins │ v1.37.0 │ 17 Oct 25 18:57 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 18:57:09
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 18:57:09.994262   79065 out.go:360] Setting OutFile to fd 1 ...
	I1017 18:57:09.994375   79065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:57:09.994383   79065 out.go:374] Setting ErrFile to fd 2...
	I1017 18:57:09.994388   79065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 18:57:09.994614   79065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-74819/.minikube/bin
	I1017 18:57:09.995123   79065 out.go:368] Setting JSON to true
	I1017 18:57:09.995975   79065 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5970,"bootTime":1760721460,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 18:57:09.996086   79065 start.go:141] virtualization: kvm guest
	I1017 18:57:09.997865   79065 out.go:99] [download-only-426587] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 18:57:09.998032   79065 notify.go:220] Checking for updates...
	I1017 18:57:09.999220   79065 out.go:171] MINIKUBE_LOCATION=21753
	I1017 18:57:10.000716   79065 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 18:57:10.001832   79065 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21753-74819/kubeconfig
	I1017 18:57:10.002964   79065 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-74819/.minikube
	I1017 18:57:10.004067   79065 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1017 18:57:10.006093   79065 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1017 18:57:10.006337   79065 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 18:57:10.037115   79065 out.go:99] Using the kvm2 driver based on user configuration
	I1017 18:57:10.037162   79065 start.go:305] selected driver: kvm2
	I1017 18:57:10.037168   79065 start.go:925] validating driver "kvm2" against <nil>
	I1017 18:57:10.037488   79065 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 18:57:10.037575   79065 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-74819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 18:57:10.051184   79065 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 18:57:10.051220   79065 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21753-74819/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1017 18:57:10.064709   79065 install.go:163] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1017 18:57:10.064769   79065 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 18:57:10.065530   79065 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1017 18:57:10.065750   79065 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 18:57:10.065794   79065 cni.go:84] Creating CNI manager for ""
	I1017 18:57:10.065854   79065 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1017 18:57:10.065865   79065 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1017 18:57:10.065933   79065 start.go:349] cluster config:
	{Name:download-only-426587 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-426587 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 18:57:10.066085   79065 iso.go:125] acquiring lock: {Name:mk33c243d2b00e80c564bda6122868528d14c3f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 18:57:10.067701   79065 out.go:99] Starting "download-only-426587" primary control-plane node in "download-only-426587" cluster
	I1017 18:57:10.067720   79065 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1017 18:57:10.704648   79065 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1017 18:57:10.704697   79065 cache.go:58] Caching tarball of preloaded images
	I1017 18:57:10.704940   79065 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1017 18:57:10.706883   79065 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1017 18:57:10.706935   79065 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1017 18:57:10.862341   79065 preload.go:290] Got checksum from GCS API "5d6e976daeaa84851976fc4d674fd8f4"
	I1017 18:57:10.862391   79065 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:5d6e976daeaa84851976fc4d674fd8f4 -> /home/jenkins/minikube-integration/21753-74819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-426587 host does not exist
	  To start a cluster, run: "minikube start -p download-only-426587"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

TestDownloadOnly/v1.34.1/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.14s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-426587
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.65s)

=== RUN   TestBinaryMirror
I1017 18:57:27.376767   78783 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-035417 --alsologtostderr --binary-mirror http://127.0.0.1:37115 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-035417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-035417
--- PASS: TestBinaryMirror (0.65s)

TestOffline (79.28s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-961222 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-961222 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m18.374483646s)
helpers_test.go:175: Cleaning up "offline-containerd-961222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-961222
--- PASS: TestOffline (79.28s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-574638
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-574638: exit status 85 (56.162768ms)

-- stdout --
	* Profile "addons-574638" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-574638"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-574638
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-574638: exit status 85 (56.817216ms)

-- stdout --
	* Profile "addons-574638" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-574638"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (209.47s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-574638 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-574638 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m29.474163514s)
--- PASS: TestAddons/Setup (209.47s)

TestAddons/serial/Volcano (42.53s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:868: volcano-scheduler stabilized in 22.337057ms
addons_test.go:876: volcano-admission stabilized in 22.784218ms
addons_test.go:884: volcano-controller stabilized in 26.612747ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-q575f" [2d87e936-6af0-4724-9d0f-a3d886169877] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004909324s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-5wgkq" [8326a065-31e5-4784-94d4-5731de9123af] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005469879s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-4z5l5" [d6d57f04-3cee-4f14-ae83-850aae10884d] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004014599s
addons_test.go:903: (dbg) Run:  kubectl --context addons-574638 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-574638 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-574638 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [799f43c3-7fa8-4173-bd4f-07c044ebdc2c] Pending
helpers_test.go:352: "test-job-nginx-0" [799f43c3-7fa8-4173-bd4f-07c044ebdc2c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [799f43c3-7fa8-4173-bd4f-07c044ebdc2c] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.005470624s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-574638 addons disable volcano --alsologtostderr -v=1: (12.077715518s)
--- PASS: TestAddons/serial/Volcano (42.53s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-574638 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-574638 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (12.52s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-574638 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-574638 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3674ad7d-b226-466c-a6c8-ad75dbdc551f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3674ad7d-b226-466c-a6c8-ad75dbdc551f] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 12.0039934s
addons_test.go:694: (dbg) Run:  kubectl --context addons-574638 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-574638 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-574638 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (12.52s)

TestAddons/parallel/Registry (18.51s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.737512ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-7l8z8" [49e71b33-b5bb-4ba3-85e2-ad491745703b] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005472183s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-g6h2r" [e5999262-c49b-4c8a-8499-e575e3bdf6db] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003759843s
addons_test.go:392: (dbg) Run:  kubectl --context addons-574638 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-574638 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-574638 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.608188153s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 ip
2025/10/17 19:02:19 [DEBUG] GET http://192.168.39.78:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.51s)
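Note: the registry check above boils down to two probes, sketched below. The node IP (192.168.39.78) is specific to this run, and the curl call is an illustrative stand-in for the HTTP GET the test appears to issue from Go.
    # In-cluster: the registry Service must answer an HTTP request
    kubectl --context addons-574638 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # From the host: registry-proxy exposes port 5000 on the node IP
    curl -sI "http://$(out/minikube-linux-amd64 -p addons-574638 ip):5000/"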

TestAddons/parallel/RegistryCreds (0.94s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.656836ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-574638
addons_test.go:332: (dbg) Run:  kubectl --context addons-574638 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.94s)

TestAddons/parallel/Ingress (23.7s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-574638 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-574638 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-574638 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [5d82968d-143a-489f-8456-3d2193647d9c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [5d82968d-143a-489f-8456-3d2193647d9c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.003939528s
I1017 19:02:36.231765   78783 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-574638 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.78
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-574638 addons disable ingress-dns --alsologtostderr -v=1: (1.416100074s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-574638 addons disable ingress --alsologtostderr -v=1: (7.941228018s)
--- PASS: TestAddons/parallel/Ingress (23.70s)
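Note: the two assertions above are an HTTP request routed by ingress-nginx (matched on the Host header) and a DNS lookup served by the ingress-dns addon. Both commands below are taken verbatim from the run; the node IP is specific to this run.
    out/minikube-linux-amd64 -p addons-574638 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test 192.168.39.78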

TestAddons/parallel/InspektorGadget (5.29s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-vwgqt" [54803eac-0466-46c8-aad9-d797eb10cb4b] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007737467s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.29s)

TestAddons/parallel/MetricsServer (6.31s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.7975ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-lxx9s" [0c29ae04-84d1-4c81-ac5d-fe651e99b2cd] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004275705s
addons_test.go:463: (dbg) Run:  kubectl --context addons-574638 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-574638 addons disable metrics-server --alsologtostderr -v=1: (1.213409921s)
--- PASS: TestAddons/parallel/MetricsServer (6.31s)

TestAddons/parallel/CSI (50.75s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1017 19:02:15.267597   78783 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1017 19:02:15.279711   78783 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1017 19:02:15.279755   78783 kapi.go:107] duration metric: took 12.173951ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 12.190388ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-574638 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-574638 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [b03f1e4b-6329-4b2a-b27d-97e4aae37af3] Pending
helpers_test.go:352: "task-pv-pod" [b03f1e4b-6329-4b2a-b27d-97e4aae37af3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [b03f1e4b-6329-4b2a-b27d-97e4aae37af3] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.006246197s
addons_test.go:572: (dbg) Run:  kubectl --context addons-574638 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-574638 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-574638 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-574638 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-574638 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-574638 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-574638 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [76568b58-79b3-40f0-a3ab-7bb39fe23684] Pending
helpers_test.go:352: "task-pv-pod-restore" [76568b58-79b3-40f0-a3ab-7bb39fe23684] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [76568b58-79b3-40f0-a3ab-7bb39fe23684] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004566285s
addons_test.go:614: (dbg) Run:  kubectl --context addons-574638 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-574638 delete pod task-pv-pod-restore: (1.323432209s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-574638 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-574638 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-574638 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.048432822s)
--- PASS: TestAddons/parallel/CSI (50.75s)
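Note: the long run of `get pvc ... jsonpath` calls above is the test helper polling for the claim to bind. A minimal stand-alone sketch of that wait, assuming the same profile and the hpvc claim from testdata, is:
    kubectl --context addons-574638 create -f testdata/csi-hostpath-driver/pvc.yaml
    # poll until the csi-hostpath provisioner reports the claim as Bound
    until [ "$(kubectl --context addons-574638 get pvc hpvc -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done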

TestAddons/parallel/Headlamp (21.01s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-574638 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-bpnbh" [e7cbd9cf-d881-407f-bac3-6588c4ca56e9] Pending
helpers_test.go:352: "headlamp-6945c6f4d-bpnbh" [e7cbd9cf-d881-407f-bac3-6588c4ca56e9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-bpnbh" [e7cbd9cf-d881-407f-bac3-6588c4ca56e9] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-bpnbh" [e7cbd9cf-d881-407f-bac3-6588c4ca56e9] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.005067615s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-574638 addons disable headlamp --alsologtostderr -v=1: (6.093742096s)
--- PASS: TestAddons/parallel/Headlamp (21.01s)

TestAddons/parallel/CloudSpanner (6.62s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-b999b" [2fdc6d3b-91cd-4d9c-a19d-c90eca50a0c3] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003908109s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.62s)

TestAddons/parallel/LocalPath (59.17s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-574638 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-574638 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-574638 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [b074a54d-3a29-42e0-804c-96753313c05e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [b074a54d-3a29-42e0-804c-96753313c05e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [b074a54d-3a29-42e0-804c-96753313c05e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.009067249s
addons_test.go:967: (dbg) Run:  kubectl --context addons-574638 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 ssh "cat /opt/local-path-provisioner/pvc-2777c391-11a8-482e-9c2e-fd9e56ab8af8_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-574638 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-574638 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-574638 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.154999163s)
--- PASS: TestAddons/parallel/LocalPath (59.17s)
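Note: the `ssh cat` step above reads back the file the test pod wrote through the local-path provisioner. The directory name embeds the generated PV name and changes every run; the general shape, using this run's values, is:
    # /opt/local-path-provisioner/<pv-name>_<namespace>_<pvc-name>/<file>
    out/minikube-linux-amd64 -p addons-574638 ssh \
      "cat /opt/local-path-provisioner/pvc-2777c391-11a8-482e-9c2e-fd9e56ab8af8_default_test-pvc/file1"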

TestAddons/parallel/NvidiaDevicePlugin (6.88s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-fhp87" [61183509-88cb-4f8b-bb48-325a13e8bf20] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.108403229s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.88s)

TestAddons/parallel/Yakd (11.91s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-fksmn" [60204a14-fcd3-4a45-bb9e-7e154ca59ff3] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004857406s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-574638 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-574638 addons disable yakd --alsologtostderr -v=1: (5.905321319s)
--- PASS: TestAddons/parallel/Yakd (11.91s)

TestAddons/StoppedEnableDisable (87.8s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-574638
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-574638: (1m27.523686329s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-574638
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-574638
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-574638
--- PASS: TestAddons/StoppedEnableDisable (87.80s)
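Note: condensed, this block verifies that addon toggling still works against a stopped cluster; the exact sequence from the run is:
    out/minikube-linux-amd64 stop -p addons-574638                      # ~1m27s in this run
    out/minikube-linux-amd64 addons enable dashboard -p addons-574638
    out/minikube-linux-amd64 addons disable dashboard -p addons-574638
    out/minikube-linux-amd64 addons disable gvisor -p addons-574638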

TestCertOptions (69.89s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-054610 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-054610 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m8.522882578s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-054610 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-054610 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-054610 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-054610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-054610
--- PASS: TestCertOptions (69.89s)
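Note: to see what the --apiserver-ips/--apiserver-names/--apiserver-port flags changed, inspect the SANs of the generated API server certificate. The ssh command is the one the test runs; the grep filter is an illustrative addition that runs on the host.
    out/minikube-linux-amd64 -p cert-options-054610 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"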

TestCertExpiration (316.03s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-600793 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-600793 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m22.173817788s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-600793 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-600793 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (52.978936565s)
helpers_test.go:175: Cleaning up "cert-expiration-600793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-600793
--- PASS: TestCertExpiration (316.03s)
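Note: this test exercises certificate rotation by starting with a 3-minute certificate lifetime, letting it lapse (the gap inside the 316s total), then restarting with a one-year lifetime. The two invocations from the run, trimmed of CI-only flags, are:
    out/minikube-linux-amd64 start -p cert-expiration-600793 --memory=3072 \
      --cert-expiration=3m --driver=kvm2 --container-runtime=containerd
    # later, after the short-lived certs have expired
    out/minikube-linux-amd64 start -p cert-expiration-600793 --memory=3072 \
      --cert-expiration=8760h --driver=kvm2 --container-runtime=containerd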

TestForceSystemdFlag (96.32s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-081726 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-081726 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m35.026863261s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-081726 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-081726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-081726
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-081726: (1.043331504s)
--- PASS: TestForceSystemdFlag (96.32s)
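Note: the `cat /etc/containerd/config.toml` step is how the test confirms --force-systemd took effect; a narrowed, illustrative check (assuming the assertion is on containerd's SystemdCgroup setting, with grep running on the host) would be:
    out/minikube-linux-amd64 -p force-systemd-flag-081726 ssh "cat /etc/containerd/config.toml" \
      | grep SystemdCgroup    # expected: SystemdCgroup = true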

TestForceSystemdEnv (42.45s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-007257 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-007257 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (41.181836721s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-007257 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-007257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-007257
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-007257: (1.038241055s)
--- PASS: TestForceSystemdEnv (42.45s)

TestKVMDriverInstallOrUpdate (11.74s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I1017 19:53:23.107998   78783 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1017 19:53:23.108243   78783 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate4055960312/001:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1017 19:53:23.145118   78783 install.go:163] /tmp/TestKVMDriverInstallOrUpdate4055960312/001/docker-machine-driver-kvm2 version is 1.1.1
W1017 19:53:23.145167   78783 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1017 19:53:23.145367   78783 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1017 19:53:23.145444   78783 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4055960312/001/docker-machine-driver-kvm2
I1017 19:53:34.697868   78783 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate4055960312/001:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1017 19:53:34.714511   78783 install.go:163] /tmp/TestKVMDriverInstallOrUpdate4055960312/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (11.74s)
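Note: the install.go lines above show the update path: a stale 1.1.1 driver is found on PATH, the 1.37.0 binary is downloaded into the per-test directory, and validation re-runs. A rough manual equivalent, assuming the driver binary accepts a `version` argument as the validation log suggests, is:
    # hypothetical manual check; the temp directory below is specific to this run
    PATH=/tmp/TestKVMDriverInstallOrUpdate4055960312/001:$PATH \
      docker-machine-driver-kvm2 version    # should now report 1.37.0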

TestErrorSpam/setup (41.96s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-695379 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-695379 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-695379 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-695379 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (41.961416121s)
--- PASS: TestErrorSpam/setup (41.96s)

TestErrorSpam/start (0.36s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.79s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 status
--- PASS: TestErrorSpam/status (0.79s)

TestErrorSpam/pause (1.65s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 pause
--- PASS: TestErrorSpam/pause (1.65s)

TestErrorSpam/unpause (1.94s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 unpause
--- PASS: TestErrorSpam/unpause (1.94s)

TestErrorSpam/stop (5.6s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 stop: (1.754556022s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 stop: (2.002498892s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-695379 --log_dir /tmp/nospam-695379 stop: (1.840420261s)
--- PASS: TestErrorSpam/stop (5.60s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21753-74819/.minikube/files/etc/test/nested/copy/78783/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (81.15s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-088611 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1017 19:05:57.564804   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:05:57.571321   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:05:57.582744   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:05:57.604175   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:05:57.645655   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:05:57.727196   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:05:57.888809   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:05:58.210558   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:05:58.852619   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:06:00.134257   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:06:02.697154   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:06:07.818692   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:06:18.060470   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:06:38.542374   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-088611 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m21.146259388s)
--- PASS: TestFunctional/serial/StartWithProxy (81.15s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (47.29s)
=== RUN   TestFunctional/serial/SoftStart
I1017 19:06:49.955537   78783 config.go:182] Loaded profile config "functional-088611": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-088611 --alsologtostderr -v=8
E1017 19:07:19.504730   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-088611 --alsologtostderr -v=8: (47.284028223s)
functional_test.go:678: soft start took 47.284759752s for "functional-088611" cluster.
I1017 19:07:37.240045   78783 config.go:182] Loaded profile config "functional-088611": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (47.29s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-088611 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.01s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-088611 cache add registry.k8s.io/pause:3.1: (1.048235323s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-088611 cache add registry.k8s.io/pause:3.3: (1.004916937s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.01s)

TestFunctional/serial/CacheCmd/cache/add_local (2.58s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-088611 /tmp/TestFunctionalserialCacheCmdcacheadd_local554782785/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 cache add minikube-local-cache-test:functional-088611
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-088611 cache add minikube-local-cache-test:functional-088611: (2.246308233s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 cache delete minikube-local-cache-test:functional-088611
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-088611
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.58s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-088611 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (212.927969ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
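Note: the cache subtests above form one cycle: add images to minikube's local cache, delete one from the node's containerd image store, then `cache reload` to push it back. Condensed from the run:
    out/minikube-linux-amd64 -p functional-088611 cache add registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-088611 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-088611 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image removed
    out/minikube-linux-amd64 -p functional-088611 cache reload
    out/minikube-linux-amd64 -p functional-088611 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again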

TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 kubectl -- --context functional-088611 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-088611 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (42.16s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-088611 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-088611 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.158020545s)
functional_test.go:776: restart took 42.158138252s for "functional-088611" cluster.
I1017 19:08:27.314637   78783 config.go:182] Loaded profile config "functional-088611": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (42.16s)
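Note: the restart above passes a component flag straight through to the API server via minikube's --extra-config=component.key=value mechanism; the invocation from the run is simply:
    out/minikube-linux-amd64 start -p functional-088611 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all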

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-088611 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
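Note: the health check lists the control-plane pods by their tier=control-plane label and asserts the phase/status values printed above; the underlying query is:
    kubectl --context functional-088611 get po -l tier=control-plane -n kube-system -o=json
    # or, for a quick human-readable view
    kubectl --context functional-088611 get po -l tier=control-plane -n kube-system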

TestFunctional/serial/LogsCmd (1.41s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-088611 logs: (1.408620748s)
--- PASS: TestFunctional/serial/LogsCmd (1.41s)

TestFunctional/serial/LogsFileCmd (1.44s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 logs --file /tmp/TestFunctionalserialLogsFileCmd4265513261/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-088611 logs --file /tmp/TestFunctionalserialLogsFileCmd4265513261/001/logs.txt: (1.435122667s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.44s)

TestFunctional/serial/InvalidService (4.55s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-088611 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-088611
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-088611: exit status 115 (285.989638ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.211:30737 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-088611 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-088611 delete -f testdata/invalidsvc.yaml: (1.050127649s)
--- PASS: TestFunctional/serial/InvalidService (4.55s)

TestFunctional/parallel/DashboardCmd (16.64s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-088611 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-088611 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 86886: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.64s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-088611 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-088611 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: exit status 23 (150.958585ms)

                                                
                                                
-- stdout --
	* [functional-088611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-74819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-74819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:09:02.428329   86712 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:09:02.428675   86712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:09:02.428688   86712 out.go:374] Setting ErrFile to fd 2...
	I1017 19:09:02.428696   86712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:09:02.429043   86712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-74819/.minikube/bin
	I1017 19:09:02.429720   86712 out.go:368] Setting JSON to false
	I1017 19:09:02.431056   86712 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6682,"bootTime":1760721460,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:09:02.431189   86712 start.go:141] virtualization: kvm guest
	I1017 19:09:02.433866   86712 out.go:179] * [functional-088611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:09:02.435646   86712 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:09:02.435686   86712 notify.go:220] Checking for updates...
	I1017 19:09:02.438121   86712 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:09:02.439281   86712 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-74819/kubeconfig
	I1017 19:09:02.440342   86712 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-74819/.minikube
	I1017 19:09:02.441435   86712 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:09:02.442569   86712 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:09:02.444438   86712 config.go:182] Loaded profile config "functional-088611": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1017 19:09:02.445034   86712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:09:02.445127   86712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:09:02.461099   86712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I1017 19:09:02.461590   86712 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:09:02.462243   86712 main.go:141] libmachine: Using API Version  1
	I1017 19:09:02.462270   86712 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:09:02.462685   86712 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:09:02.462938   86712 main.go:141] libmachine: (functional-088611) Calling .DriverName
	I1017 19:09:02.463247   86712 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:09:02.463590   86712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:09:02.463646   86712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:09:02.477799   86712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42325
	I1017 19:09:02.478266   86712 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:09:02.478803   86712 main.go:141] libmachine: Using API Version  1
	I1017 19:09:02.478831   86712 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:09:02.479214   86712 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:09:02.479401   86712 main.go:141] libmachine: (functional-088611) Calling .DriverName
	I1017 19:09:02.512947   86712 out.go:179] * Using the kvm2 driver based on existing profile
	I1017 19:09:02.514161   86712 start.go:305] selected driver: kvm2
	I1017 19:09:02.514178   86712 start.go:925] validating driver "kvm2" against &{Name:functional-088611 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-088611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:09:02.514320   86712 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:09:02.516766   86712 out.go:203] 
	W1017 19:09:02.518387   86712 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1017 19:09:02.519489   86712 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-088611 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.29s)
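
The exit status 23 above comes from the memory validation rejecting the requested 250MiB as below the 1800MB usable minimum. A stand-alone sketch of that comparison follows; the constant and message mirror the log, but this is not minikube's actual validation code.
-- sketch (Go) --
package main

import "fmt"

// minUsableMemoryMB mirrors the "usable minimum of 1800MB" from the log above.
const minUsableMemoryMB = 1800

// validateRequestedMemory returns an error in the spirit of
// RSRC_INSUFFICIENT_REQ_MEMORY when the requested allocation is too small.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250))  // rejected, as in the dry-run above
	fmt.Println(validateRequestedMemory(4096)) // accepted (the profile's configured memory)
}
-- /sketch --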

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-088611 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-088611 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: exit status 23 (142.164665ms)

                                                
                                                
-- stdout --
	* [functional-088611] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-74819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-74819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:09:02.275830   86666 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:09:02.276157   86666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:09:02.276169   86666 out.go:374] Setting ErrFile to fd 2...
	I1017 19:09:02.276173   86666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:09:02.276558   86666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-74819/.minikube/bin
	I1017 19:09:02.277230   86666 out.go:368] Setting JSON to false
	I1017 19:09:02.278213   86666 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":6682,"bootTime":1760721460,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:09:02.278319   86666 start.go:141] virtualization: kvm guest
	I1017 19:09:02.280422   86666 out.go:179] * [functional-088611] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1017 19:09:02.282176   86666 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:09:02.282187   86666 notify.go:220] Checking for updates...
	I1017 19:09:02.284552   86666 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:09:02.285779   86666 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-74819/kubeconfig
	I1017 19:09:02.286998   86666 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-74819/.minikube
	I1017 19:09:02.288157   86666 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:09:02.289403   86666 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:09:02.291147   86666 config.go:182] Loaded profile config "functional-088611": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1017 19:09:02.291789   86666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:09:02.291840   86666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:09:02.309891   86666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
	I1017 19:09:02.310541   86666 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:09:02.311321   86666 main.go:141] libmachine: Using API Version  1
	I1017 19:09:02.311349   86666 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:09:02.311930   86666 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:09:02.312179   86666 main.go:141] libmachine: (functional-088611) Calling .DriverName
	I1017 19:09:02.312467   86666 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:09:02.312879   86666 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:09:02.312928   86666 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:09:02.327455   86666 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44463
	I1017 19:09:02.327908   86666 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:09:02.328340   86666 main.go:141] libmachine: Using API Version  1
	I1017 19:09:02.328368   86666 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:09:02.328715   86666 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:09:02.328897   86666 main.go:141] libmachine: (functional-088611) Calling .DriverName
	I1017 19:09:02.360708   86666 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1017 19:09:02.362053   86666 start.go:305] selected driver: kvm2
	I1017 19:09:02.362073   86666 start.go:925] validating driver "kvm2" against &{Name:functional-088611 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-088611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:09:02.362214   86666 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:09:02.364508   86666 out.go:203] 
	W1017 19:09:02.365830   86666 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1017 19:09:02.367075   86666 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
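
The French output above is selected from the caller's locale environment. The toy lookup below only illustrates the idea of locale-keyed messages; the translation table and the LC_ALL/LANG handling are assumptions, not minikube's translation machinery.
-- sketch (Go) --
package main

import (
	"fmt"
	"os"
	"strings"
)

// messages is an illustrative translation table keyed by language code.
// The French string matches the log above.
var messages = map[string]string{
	"en": "* Using the kvm2 driver based on existing profile",
	"fr": "* Utilisation du pilote kvm2 basé sur le profil existant",
}

// langFromEnv picks a language code from LC_ALL or LANG, e.g. "fr_FR.UTF-8" -> "fr".
func langFromEnv() string {
	for _, key := range []string{"LC_ALL", "LANG"} {
		if v := os.Getenv(key); v != "" {
			return strings.SplitN(v, "_", 2)[0]
		}
	}
	return "en"
}

func main() {
	msg, ok := messages[langFromEnv()]
	if !ok {
		msg = messages["en"]
	}
	fmt.Println(msg)
}
-- /sketch --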

                                                
                                    
TestFunctional/parallel/StatusCmd (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.84s)
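
The -f flag above renders the status through a Go template. A small text/template sketch over a status-like struct is shown below; the field names follow the template in the log, but the struct itself is illustrative, not minikube's type.
-- sketch (Go) --
package main

import (
	"os"
	"text/template"
)

// status mirrors the fields referenced by the -f template in the log above.
type status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	s := status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}
-- /sketch --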

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (23.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-088611 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-088611 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-w2bn6" [0380e52e-bea2-44a4-ae71-62a570e6262a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-w2bn6" [0380e52e-bea2-44a4-ae71-62a570e6262a] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 23.00333543s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.211:31547
functional_test.go:1680: http://192.168.39.211:31547: success! body:
Request served by hello-node-connect-7d85dfc575-w2bn6

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.211:31547
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (23.54s)
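
The test deploys echo-server, exposes it as a NodePort, and then fetches the URL minikube reports. A sketch of the client side of that check, polling the NodePort URL until it answers, is below; the URL is the one from the log, and the retry budget is an assumption.
-- sketch (Go) --
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHTTP polls url until it returns HTTP 200 or the attempts run out.
func waitForHTTP(url string, attempts int, delay time.Duration) (string, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return string(body), nil
			}
			lastErr = fmt.Errorf("unexpected status %d", resp.StatusCode)
		} else {
			lastErr = err
		}
		time.Sleep(delay)
	}
	return "", lastErr
}

func main() {
	// NodePort endpoint reported by `minikube service hello-node-connect --url` above.
	body, err := waitForHTTP("http://192.168.39.211:31547/", 10, 2*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println(body) // echo-server replies with the request it served
}
-- /sketch --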

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (50.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [ad817745-acd7-4397-99d5-01a42db48bcc] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005282388s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-088611 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-088611 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-088611 get pvc myclaim -o=json
I1017 19:08:41.933543   78783 retry.go:31] will retry after 1.087835094s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:10726891-972c-4481-b2c7-7249135279aa ResourceVersion:769 Generation:0 CreationTimestamp:2025-10-17 19:08:41 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0018e92a0 VolumeMode:0xc0018e92b0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-088611 get pvc myclaim -o=json
I1017 19:08:43.084287   78783 retry.go:31] will retry after 2.14684461s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:10726891-972c-4481-b2c7-7249135279aa ResourceVersion:769 Generation:0 CreationTimestamp:2025-10-17 19:08:41 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0017a2390 VolumeMode:0xc0017a23a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-088611 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-088611 apply -f testdata/storage-provisioner/pod.yaml
I1017 19:08:45.456786   78783 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [44531c9a-ed1f-4e50-932f-8dc402b4db43] Pending
helpers_test.go:352: "sp-pod" [44531c9a-ed1f-4e50-932f-8dc402b4db43] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [44531c9a-ed1f-4e50-932f-8dc402b4db43] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.003630136s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-088611 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-088611 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-088611 delete -f testdata/storage-provisioner/pod.yaml: (1.310747142s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-088611 apply -f testdata/storage-provisioner/pod.yaml
I1017 19:09:08.086311   78783 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [57385ef0-6ae0-4499-ad7b-ba0d841c1a24] Pending
helpers_test.go:352: "sp-pod" [57385ef0-6ae0-4499-ad7b-ba0d841c1a24] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [57385ef0-6ae0-4499-ad7b-ba0d841c1a24] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.004444766s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-088611 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.55s)
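
The retry lines above poll the PVC until its phase moves from Pending to Bound. A client-go sketch of that poll follows; the kubeconfig loading, poll interval, and timeout are assumptions, and the test's own helper lives in functional_test_pvc_test.go.
-- sketch (Go) --
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls the named claim until its phase is Bound or the timeout expires.
func waitForPVCBound(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pvc.Status.Phase == corev1.ClaimBound {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForPVCBound(cs, "default", "myclaim", 2*time.Minute))
}
-- /sketch --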

                                                
                                    
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh -n functional-088611 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 cp functional-088611:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1240675949/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh -n functional-088611 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh -n functional-088611 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)

                                                
                                    
TestFunctional/parallel/MySQL (27.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-088611 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-v45mn" [f017dd3c-d1dd-4036-a4a3-f799b2c53f95] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-v45mn" [f017dd3c-d1dd-4036-a4a3-f799b2c53f95] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.011508806s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-088611 exec mysql-5bb876957f-v45mn -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-088611 exec mysql-5bb876957f-v45mn -- mysql -ppassword -e "show databases;": exit status 1 (230.902396ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1017 19:08:57.262662   78783 retry.go:31] will retry after 1.484577895s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-088611 exec mysql-5bb876957f-v45mn -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-088611 exec mysql-5bb876957f-v45mn -- mysql -ppassword -e "show databases;": exit status 1 (146.163461ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1017 19:08:58.894189   78783 retry.go:31] will retry after 1.553872036s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-088611 exec mysql-5bb876957f-v45mn -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-088611 exec mysql-5bb876957f-v45mn -- mysql -ppassword -e "show databases;": exit status 1 (164.20289ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1017 19:09:00.612757   78783 retry.go:31] will retry after 1.708948974s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-088611 exec mysql-5bb876957f-v45mn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.71s)
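
The "will retry after ..." lines show the test tolerating the window between the mysql pod reporting Running and mysqld accepting connections. The log's retry helper is the test suite's own (retry.go); the stand-alone sketch below only mirrors the retry-with-growing-delay idea, and the kubectl invocation and backoff values are assumptions.
-- sketch (Go) --
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retry runs fn until it succeeds, sleeping a jittered, growing delay between
// attempts, similar in spirit to the "will retry after ..." lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	// Assumption: kubectl on PATH and the mysql pod name from the log above.
	err := retry(10, time.Second, func() error {
		return exec.Command("kubectl", "--context", "functional-088611",
			"exec", "mysql-5bb876957f-v45mn", "--",
			"mysql", "-ppassword", "-e", "show databases;").Run()
	})
	fmt.Println("final result:", err)
}
-- /sketch --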

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/78783/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "sudo cat /etc/test/nested/copy/78783/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

                                                
                                    
TestFunctional/parallel/CertSync (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/78783.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "sudo cat /etc/ssl/certs/78783.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/78783.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "sudo cat /usr/share/ca-certificates/78783.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/787832.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "sudo cat /etc/ssl/certs/787832.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/787832.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "sudo cat /usr/share/ca-certificates/787832.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.38s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-088611 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-088611 ssh "sudo systemctl is-active docker": exit status 1 (232.056801ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-088611 ssh "sudo systemctl is-active crio": exit status 1 (249.397743ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
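
`systemctl is-active` prints "inactive" and exits non-zero (typically 3) when a unit is not running, which is what the "ssh: Process exited with status 3" lines above reflect; the test treats that as the expected result for docker and crio on a containerd cluster. The sketch below runs the same probe locally and interprets the printed state; the unit name and local execution are assumptions, since the test runs it over minikube ssh.
-- sketch (Go) --
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// isActive runs `systemctl is-active <unit>`. The exit code alone does not
// distinguish "inactive" from a real failure, so we inspect the printed state.
func isActive(unit string) (bool, string, error) {
	out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		return false, state, err // systemctl itself could not be run
	}
	return state == "active", state, nil
}

func main() {
	active, state, err := isActive("docker")
	fmt.Println(active, state, err) // on this cluster: false inactive <nil>
}
-- /sketch --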

                                                
                                    
TestFunctional/parallel/License (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.69s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (23.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-088611 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-088611 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-26f2d" [6059ee6e-b81f-4758-b6c6-f0f7981be52c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1017 19:08:41.426893   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "hello-node-75c85bcc94-26f2d" [6059ee6e-b81f-4758-b6c6-f0f7981be52c] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 23.008438991s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (23.17s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 service list -o json
functional_test.go:1504: Took "456.707318ms" to run "out/minikube-linux-amd64 -p functional-088611 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "291.370969ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "61.10174ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.211:30097
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "296.430378ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "53.578296ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
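
`profile list -o json` exists for machine consumption. The sketch below shells out to the same command and decodes the output generically without committing to a field layout; the binary path matches this run, and the assumption that the top level is a JSON object is mine, not documented here.
-- sketch (Go) --
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Assumption: the minikube binary path used throughout this report.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	// Decode the top-level object without assuming a schema for its values.
	var profiles map[string]json.RawMessage
	if err := json.Unmarshal(out, &profiles); err != nil {
		panic(err)
	}
	for key, raw := range profiles {
		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
	}
}
-- /sketch --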

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-088611 /tmp/TestFunctionalparallelMountCmdany-port4245622562/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760728140907786548" to /tmp/TestFunctionalparallelMountCmdany-port4245622562/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760728140907786548" to /tmp/TestFunctionalparallelMountCmdany-port4245622562/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760728140907786548" to /tmp/TestFunctionalparallelMountCmdany-port4245622562/001/test-1760728140907786548
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-088611 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (207.706116ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1017 19:09:01.115829   78783 retry.go:31] will retry after 554.531334ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 17 19:09 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 17 19:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 17 19:09 test-1760728140907786548
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh cat /mount-9p/test-1760728140907786548
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-088611 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [6ac041dc-c6fc-4d81-8f5c-0c0282d954a9] Pending
helpers_test.go:352: "busybox-mount" [6ac041dc-c6fc-4d81-8f5c-0c0282d954a9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [6ac041dc-c6fc-4d81-8f5c-0c0282d954a9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [6ac041dc-c6fc-4d81-8f5c-0c0282d954a9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004065397s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-088611 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-088611 /tmp/TestFunctionalparallelMountCmdany-port4245622562/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.56s)
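
The findmnt retry above waits for the 9p mount to appear inside the VM. The sketch below does an equivalent wait by scanning /proc/mounts for a 9p entry at the mount point; the path and timeout are taken from or modeled on the log, and running locally (rather than over minikube ssh with findmnt, as the test does) is an assumption.
-- sketch (Go) --
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// waitForNinePMount polls /proc/mounts until a 9p filesystem is mounted at
// mountPoint or the timeout expires.
func waitForNinePMount(mountPoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		f, err := os.Open("/proc/mounts")
		if err != nil {
			return err
		}
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			// /proc/mounts format: <source> <mountpoint> <fstype> <options> ...
			fields := strings.Fields(scanner.Text())
			if len(fields) >= 3 && fields[1] == mountPoint && fields[2] == "9p" {
				f.Close()
				return nil
			}
		}
		f.Close()
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no 9p mount at %s after %s", mountPoint, timeout)
}

func main() {
	fmt.Println(waitForNinePMount("/mount-9p", 30*time.Second))
}
-- /sketch --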

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.211:30097
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-088611 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-088611
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-088611
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-088611 image ls --format short --alsologtostderr:
I1017 19:09:14.239381   87846 out.go:360] Setting OutFile to fd 1 ...
I1017 19:09:14.239660   87846 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:09:14.239671   87846 out.go:374] Setting ErrFile to fd 2...
I1017 19:09:14.239678   87846 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:09:14.239900   87846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-74819/.minikube/bin
I1017 19:09:14.240506   87846 config.go:182] Loaded profile config "functional-088611": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1017 19:09:14.240623   87846 config.go:182] Loaded profile config "functional-088611": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1017 19:09:14.241073   87846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1017 19:09:14.241155   87846 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:09:14.256237   87846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
I1017 19:09:14.256706   87846 main.go:141] libmachine: () Calling .GetVersion
I1017 19:09:14.257259   87846 main.go:141] libmachine: Using API Version  1
I1017 19:09:14.257283   87846 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:09:14.257734   87846 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:09:14.257921   87846 main.go:141] libmachine: (functional-088611) Calling .GetState
I1017 19:09:14.259913   87846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1017 19:09:14.259956   87846 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:09:14.273665   87846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44515
I1017 19:09:14.274211   87846 main.go:141] libmachine: () Calling .GetVersion
I1017 19:09:14.274851   87846 main.go:141] libmachine: Using API Version  1
I1017 19:09:14.274896   87846 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:09:14.275336   87846 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:09:14.275543   87846 main.go:141] libmachine: (functional-088611) Calling .DriverName
I1017 19:09:14.275818   87846 ssh_runner.go:195] Run: systemctl --version
I1017 19:09:14.275850   87846 main.go:141] libmachine: (functional-088611) Calling .GetSSHHostname
I1017 19:09:14.279337   87846 main.go:141] libmachine: (functional-088611) DBG | domain functional-088611 has defined MAC address 52:54:00:09:28:02 in network mk-functional-088611
I1017 19:09:14.279857   87846 main.go:141] libmachine: (functional-088611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:28:02", ip: ""} in network mk-functional-088611: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:44 +0000 UTC Type:0 Mac:52:54:00:09:28:02 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:functional-088611 Clientid:01:52:54:00:09:28:02}
I1017 19:09:14.279890   87846 main.go:141] libmachine: (functional-088611) DBG | domain functional-088611 has defined IP address 192.168.39.211 and MAC address 52:54:00:09:28:02 in network mk-functional-088611
I1017 19:09:14.280101   87846 main.go:141] libmachine: (functional-088611) Calling .GetSSHPort
I1017 19:09:14.280273   87846 main.go:141] libmachine: (functional-088611) Calling .GetSSHKeyPath
I1017 19:09:14.280470   87846 main.go:141] libmachine: (functional-088611) Calling .GetSSHUsername
I1017 19:09:14.280659   87846 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-74819/.minikube/machines/functional-088611/id_rsa Username:docker}
I1017 19:09:14.371486   87846 ssh_runner.go:195] Run: sudo crictl images --output json
I1017 19:09:14.460839   87846 main.go:141] libmachine: Making call to close driver server
I1017 19:09:14.460853   87846 main.go:141] libmachine: (functional-088611) Calling .Close
I1017 19:09:14.461190   87846 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:09:14.461212   87846 main.go:141] libmachine: Making call to close connection to plugin binary
I1017 19:09:14.461228   87846 main.go:141] libmachine: Making call to close driver server
I1017 19:09:14.461237   87846 main.go:141] libmachine: (functional-088611) Calling .Close
I1017 19:09:14.461577   87846 main.go:141] libmachine: (functional-088611) DBG | Closing plugin on server side
I1017 19:09:14.461621   87846 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:09:14.461637   87846 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-088611 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ docker.io/library/nginx                     │ latest             │ sha256:07ccdb │ 62.7MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ docker.io/library/mysql                     │ 5.7                │ sha256:510733 │ 138MB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/kicbase/echo-server               │ functional-088611  │ sha256:9056ab │ 2.37MB │
│ docker.io/library/minikube-local-cache-test │ functional-088611  │ sha256:9169a5 │ 990B   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-088611 image ls --format table --alsologtostderr:
I1017 19:09:15.199429   87984 out.go:360] Setting OutFile to fd 1 ...
I1017 19:09:15.199802   87984 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:09:15.199815   87984 out.go:374] Setting ErrFile to fd 2...
I1017 19:09:15.199819   87984 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:09:15.200049   87984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-74819/.minikube/bin
I1017 19:09:15.200678   87984 config.go:182] Loaded profile config "functional-088611": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1017 19:09:15.200772   87984 config.go:182] Loaded profile config "functional-088611": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1017 19:09:15.201259   87984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1017 19:09:15.201314   87984 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:09:15.215881   87984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40633
I1017 19:09:15.216360   87984 main.go:141] libmachine: () Calling .GetVersion
I1017 19:09:15.217028   87984 main.go:141] libmachine: Using API Version  1
I1017 19:09:15.217057   87984 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:09:15.217456   87984 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:09:15.217682   87984 main.go:141] libmachine: (functional-088611) Calling .GetState
I1017 19:09:15.220156   87984 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1017 19:09:15.220206   87984 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:09:15.234386   87984 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
I1017 19:09:15.234852   87984 main.go:141] libmachine: () Calling .GetVersion
I1017 19:09:15.235346   87984 main.go:141] libmachine: Using API Version  1
I1017 19:09:15.235373   87984 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:09:15.235703   87984 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:09:15.235902   87984 main.go:141] libmachine: (functional-088611) Calling .DriverName
I1017 19:09:15.236125   87984 ssh_runner.go:195] Run: systemctl --version
I1017 19:09:15.236153   87984 main.go:141] libmachine: (functional-088611) Calling .GetSSHHostname
I1017 19:09:15.239469   87984 main.go:141] libmachine: (functional-088611) DBG | domain functional-088611 has defined MAC address 52:54:00:09:28:02 in network mk-functional-088611
I1017 19:09:15.239966   87984 main.go:141] libmachine: (functional-088611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:28:02", ip: ""} in network mk-functional-088611: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:44 +0000 UTC Type:0 Mac:52:54:00:09:28:02 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:functional-088611 Clientid:01:52:54:00:09:28:02}
I1017 19:09:15.240021   87984 main.go:141] libmachine: (functional-088611) DBG | domain functional-088611 has defined IP address 192.168.39.211 and MAC address 52:54:00:09:28:02 in network mk-functional-088611
I1017 19:09:15.240157   87984 main.go:141] libmachine: (functional-088611) Calling .GetSSHPort
I1017 19:09:15.240346   87984 main.go:141] libmachine: (functional-088611) Calling .GetSSHKeyPath
I1017 19:09:15.240519   87984 main.go:141] libmachine: (functional-088611) Calling .GetSSHUsername
I1017 19:09:15.240644   87984 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-74819/.minikube/machines/functional-088611/id_rsa Username:docker}
I1017 19:09:15.350538   87984 ssh_runner.go:195] Run: sudo crictl images --output json
I1017 19:09:15.433911   87984 main.go:141] libmachine: Making call to close driver server
I1017 19:09:15.433937   87984 main.go:141] libmachine: (functional-088611) Calling .Close
I1017 19:09:15.434251   87984 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:09:15.434272   87984 main.go:141] libmachine: Making call to close connection to plugin binary
I1017 19:09:15.434271   87984 main.go:141] libmachine: (functional-088611) DBG | Closing plugin on server side
I1017 19:09:15.434281   87984 main.go:141] libmachine: Making call to close driver server
I1017 19:09:15.434288   87984 main.go:141] libmachine: (functional-088611) Calling .Close
I1017 19:09:15.434520   87984 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:09:15.434533   87984 main.go:141] libmachine: Making call to close connection to plugin binary
2025/10/17 19:09:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-088611 image ls --format json --alsologtostderr:
[{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:9169a5e2990ef94deb85f79540c096f4b9d20958866808df24de0d047750ca81","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-088611"],"size":"990"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{
"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-088611"],"size":"2372971"},{"id":"s
ha256:07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"62706233"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDig
ests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-088611 image ls --format json --alsologtostderr:
I1017 19:09:14.917273   87932 out.go:360] Setting OutFile to fd 1 ...
I1017 19:09:14.917552   87932 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:09:14.917564   87932 out.go:374] Setting ErrFile to fd 2...
I1017 19:09:14.917569   87932 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:09:14.917784   87932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-74819/.minikube/bin
I1017 19:09:14.918437   87932 config.go:182] Loaded profile config "functional-088611": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1017 19:09:14.918542   87932 config.go:182] Loaded profile config "functional-088611": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1017 19:09:14.918939   87932 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1017 19:09:14.919023   87932 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:09:14.933237   87932 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41035
I1017 19:09:14.933840   87932 main.go:141] libmachine: () Calling .GetVersion
I1017 19:09:14.934510   87932 main.go:141] libmachine: Using API Version  1
I1017 19:09:14.934541   87932 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:09:14.934887   87932 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:09:14.935145   87932 main.go:141] libmachine: (functional-088611) Calling .GetState
I1017 19:09:14.937063   87932 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1017 19:09:14.937116   87932 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:09:14.951239   87932 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41341
I1017 19:09:14.951752   87932 main.go:141] libmachine: () Calling .GetVersion
I1017 19:09:14.952272   87932 main.go:141] libmachine: Using API Version  1
I1017 19:09:14.952291   87932 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:09:14.952846   87932 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:09:14.953059   87932 main.go:141] libmachine: (functional-088611) Calling .DriverName
I1017 19:09:14.953260   87932 ssh_runner.go:195] Run: systemctl --version
I1017 19:09:14.953287   87932 main.go:141] libmachine: (functional-088611) Calling .GetSSHHostname
I1017 19:09:14.956439   87932 main.go:141] libmachine: (functional-088611) DBG | domain functional-088611 has defined MAC address 52:54:00:09:28:02 in network mk-functional-088611
I1017 19:09:14.956933   87932 main.go:141] libmachine: (functional-088611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:28:02", ip: ""} in network mk-functional-088611: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:44 +0000 UTC Type:0 Mac:52:54:00:09:28:02 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:functional-088611 Clientid:01:52:54:00:09:28:02}
I1017 19:09:14.956967   87932 main.go:141] libmachine: (functional-088611) DBG | domain functional-088611 has defined IP address 192.168.39.211 and MAC address 52:54:00:09:28:02 in network mk-functional-088611
I1017 19:09:14.957091   87932 main.go:141] libmachine: (functional-088611) Calling .GetSSHPort
I1017 19:09:14.957259   87932 main.go:141] libmachine: (functional-088611) Calling .GetSSHKeyPath
I1017 19:09:14.957390   87932 main.go:141] libmachine: (functional-088611) Calling .GetSSHUsername
I1017 19:09:14.957509   87932 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-74819/.minikube/machines/functional-088611/id_rsa Username:docker}
I1017 19:09:15.075999   87932 ssh_runner.go:195] Run: sudo crictl images --output json
I1017 19:09:15.138325   87932 main.go:141] libmachine: Making call to close driver server
I1017 19:09:15.138339   87932 main.go:141] libmachine: (functional-088611) Calling .Close
I1017 19:09:15.138677   87932 main.go:141] libmachine: (functional-088611) DBG | Closing plugin on server side
I1017 19:09:15.138677   87932 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:09:15.138702   87932 main.go:141] libmachine: Making call to close connection to plugin binary
I1017 19:09:15.138712   87932 main.go:141] libmachine: Making call to close driver server
I1017 19:09:15.138739   87932 main.go:141] libmachine: (functional-088611) Calling .Close
I1017 19:09:15.138994   87932 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:09:15.139010   87932 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-088611 image ls --format yaml --alsologtostderr:
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:9169a5e2990ef94deb85f79540c096f4b9d20958866808df24de0d047750ca81
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-088611
size: "990"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "62706233"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-088611
size: "2372971"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-088611 image ls --format yaml --alsologtostderr:
I1017 19:09:14.517841   87883 out.go:360] Setting OutFile to fd 1 ...
I1017 19:09:14.518094   87883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:09:14.518102   87883 out.go:374] Setting ErrFile to fd 2...
I1017 19:09:14.518106   87883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:09:14.518296   87883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-74819/.minikube/bin
I1017 19:09:14.518825   87883 config.go:182] Loaded profile config "functional-088611": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1017 19:09:14.518914   87883 config.go:182] Loaded profile config "functional-088611": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1017 19:09:14.519305   87883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1017 19:09:14.519350   87883 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:09:14.533080   87883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35329
I1017 19:09:14.533653   87883 main.go:141] libmachine: () Calling .GetVersion
I1017 19:09:14.534276   87883 main.go:141] libmachine: Using API Version  1
I1017 19:09:14.534309   87883 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:09:14.534701   87883 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:09:14.534918   87883 main.go:141] libmachine: (functional-088611) Calling .GetState
I1017 19:09:14.537236   87883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1017 19:09:14.537289   87883 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:09:14.551134   87883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
I1017 19:09:14.551611   87883 main.go:141] libmachine: () Calling .GetVersion
I1017 19:09:14.552144   87883 main.go:141] libmachine: Using API Version  1
I1017 19:09:14.552167   87883 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:09:14.552554   87883 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:09:14.552809   87883 main.go:141] libmachine: (functional-088611) Calling .DriverName
I1017 19:09:14.553062   87883 ssh_runner.go:195] Run: systemctl --version
I1017 19:09:14.553101   87883 main.go:141] libmachine: (functional-088611) Calling .GetSSHHostname
I1017 19:09:14.556377   87883 main.go:141] libmachine: (functional-088611) DBG | domain functional-088611 has defined MAC address 52:54:00:09:28:02 in network mk-functional-088611
I1017 19:09:14.556850   87883 main.go:141] libmachine: (functional-088611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:28:02", ip: ""} in network mk-functional-088611: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:44 +0000 UTC Type:0 Mac:52:54:00:09:28:02 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:functional-088611 Clientid:01:52:54:00:09:28:02}
I1017 19:09:14.556881   87883 main.go:141] libmachine: (functional-088611) DBG | domain functional-088611 has defined IP address 192.168.39.211 and MAC address 52:54:00:09:28:02 in network mk-functional-088611
I1017 19:09:14.557099   87883 main.go:141] libmachine: (functional-088611) Calling .GetSSHPort
I1017 19:09:14.557269   87883 main.go:141] libmachine: (functional-088611) Calling .GetSSHKeyPath
I1017 19:09:14.557407   87883 main.go:141] libmachine: (functional-088611) Calling .GetSSHUsername
I1017 19:09:14.557612   87883 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-74819/.minikube/machines/functional-088611/id_rsa Username:docker}
I1017 19:09:14.678527   87883 ssh_runner.go:195] Run: sudo crictl images --output json
I1017 19:09:14.760794   87883 main.go:141] libmachine: Making call to close driver server
I1017 19:09:14.760812   87883 main.go:141] libmachine: (functional-088611) Calling .Close
I1017 19:09:14.761119   87883 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:09:14.761136   87883 main.go:141] libmachine: Making call to close connection to plugin binary
I1017 19:09:14.761144   87883 main.go:141] libmachine: Making call to close driver server
I1017 19:09:14.761151   87883 main.go:141] libmachine: (functional-088611) Calling .Close
I1017 19:09:14.761390   87883 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:09:14.761448   87883 main.go:141] libmachine: (functional-088611) DBG | Closing plugin on server side
I1017 19:09:14.761475   87883 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)
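For reference, the four ImageList* subtests above run the same listing command with a different --format value; a minimal sketch of the equivalent invocations, reusing the profile name and flags shown in the log, would be:

  # List the images in the functional-088611 profile in each supported output format
  out/minikube-linux-amd64 -p functional-088611 image ls --format short
  out/minikube-linux-amd64 -p functional-088611 image ls --format table
  out/minikube-linux-amd64 -p functional-088611 image ls --format json
  out/minikube-linux-amd64 -p functional-088611 image ls --format yaml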

TestFunctional/parallel/ImageCommands/ImageBuild (6.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-088611 ssh pgrep buildkitd: exit status 1 (243.595507ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image build -t localhost/my-image:functional-088611 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-088611 image build -t localhost/my-image:functional-088611 testdata/build --alsologtostderr: (5.933959266s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-088611 image build -t localhost/my-image:functional-088611 testdata/build --alsologtostderr:
I1017 19:09:15.060326   87961 out.go:360] Setting OutFile to fd 1 ...
I1017 19:09:15.060576   87961 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:09:15.060585   87961 out.go:374] Setting ErrFile to fd 2...
I1017 19:09:15.060590   87961 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 19:09:15.060782   87961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-74819/.minikube/bin
I1017 19:09:15.061428   87961 config.go:182] Loaded profile config "functional-088611": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1017 19:09:15.062244   87961 config.go:182] Loaded profile config "functional-088611": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1017 19:09:15.062591   87961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1017 19:09:15.062632   87961 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:09:15.076658   87961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42577
I1017 19:09:15.077180   87961 main.go:141] libmachine: () Calling .GetVersion
I1017 19:09:15.077796   87961 main.go:141] libmachine: Using API Version  1
I1017 19:09:15.077821   87961 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:09:15.078286   87961 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:09:15.078539   87961 main.go:141] libmachine: (functional-088611) Calling .GetState
I1017 19:09:15.080854   87961 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1017 19:09:15.080905   87961 main.go:141] libmachine: Launching plugin server for driver kvm2
I1017 19:09:15.095349   87961 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45389
I1017 19:09:15.095784   87961 main.go:141] libmachine: () Calling .GetVersion
I1017 19:09:15.096334   87961 main.go:141] libmachine: Using API Version  1
I1017 19:09:15.096355   87961 main.go:141] libmachine: () Calling .SetConfigRaw
I1017 19:09:15.096740   87961 main.go:141] libmachine: () Calling .GetMachineName
I1017 19:09:15.096928   87961 main.go:141] libmachine: (functional-088611) Calling .DriverName
I1017 19:09:15.097191   87961 ssh_runner.go:195] Run: systemctl --version
I1017 19:09:15.097222   87961 main.go:141] libmachine: (functional-088611) Calling .GetSSHHostname
I1017 19:09:15.100346   87961 main.go:141] libmachine: (functional-088611) DBG | domain functional-088611 has defined MAC address 52:54:00:09:28:02 in network mk-functional-088611
I1017 19:09:15.100893   87961 main.go:141] libmachine: (functional-088611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:28:02", ip: ""} in network mk-functional-088611: {Iface:virbr1 ExpiryTime:2025-10-17 20:05:44 +0000 UTC Type:0 Mac:52:54:00:09:28:02 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:functional-088611 Clientid:01:52:54:00:09:28:02}
I1017 19:09:15.100923   87961 main.go:141] libmachine: (functional-088611) DBG | domain functional-088611 has defined IP address 192.168.39.211 and MAC address 52:54:00:09:28:02 in network mk-functional-088611
I1017 19:09:15.101196   87961 main.go:141] libmachine: (functional-088611) Calling .GetSSHPort
I1017 19:09:15.101388   87961 main.go:141] libmachine: (functional-088611) Calling .GetSSHKeyPath
I1017 19:09:15.101524   87961 main.go:141] libmachine: (functional-088611) Calling .GetSSHUsername
I1017 19:09:15.101710   87961 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-74819/.minikube/machines/functional-088611/id_rsa Username:docker}
I1017 19:09:15.222906   87961 build_images.go:161] Building image from path: /tmp/build.2371718080.tar
I1017 19:09:15.223009   87961 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1017 19:09:15.243307   87961 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2371718080.tar
I1017 19:09:15.250449   87961 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2371718080.tar: stat -c "%s %y" /var/lib/minikube/build/build.2371718080.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2371718080.tar': No such file or directory
I1017 19:09:15.250473   87961 ssh_runner.go:362] scp /tmp/build.2371718080.tar --> /var/lib/minikube/build/build.2371718080.tar (3072 bytes)
I1017 19:09:15.311844   87961 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2371718080
I1017 19:09:15.326093   87961 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2371718080 -xf /var/lib/minikube/build/build.2371718080.tar
I1017 19:09:15.343746   87961 containerd.go:394] Building image: /var/lib/minikube/build/build.2371718080
I1017 19:09:15.343861   87961 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2371718080 --local dockerfile=/var/lib/minikube/build/build.2371718080 --output type=image,name=localhost/my-image:functional-088611
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.9s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:1a4933127948becdea542a3c85fb752ca989d611b4c8ad526e57a7e94da66b45
#8 exporting manifest sha256:1a4933127948becdea542a3c85fb752ca989d611b4c8ad526e57a7e94da66b45 0.0s done
#8 exporting config sha256:d9bb6d8a7867ee9c953375ed389827dd2c1a3c8cda32578af4f3de3096760d61 0.0s done
#8 naming to localhost/my-image:functional-088611 done
#8 DONE 0.2s
I1017 19:09:20.899616   87961 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2371718080 --local dockerfile=/var/lib/minikube/build/build.2371718080 --output type=image,name=localhost/my-image:functional-088611: (5.555720635s)
I1017 19:09:20.899682   87961 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2371718080
I1017 19:09:20.924528   87961 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2371718080.tar
I1017 19:09:20.940458   87961 build_images.go:217] Built localhost/my-image:functional-088611 from /tmp/build.2371718080.tar
I1017 19:09:20.940501   87961 build_images.go:133] succeeded building to: functional-088611
I1017 19:09:20.940506   87961 build_images.go:134] failed building to: 
I1017 19:09:20.940534   87961 main.go:141] libmachine: Making call to close driver server
I1017 19:09:20.940547   87961 main.go:141] libmachine: (functional-088611) Calling .Close
I1017 19:09:20.940946   87961 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:09:20.940992   87961 main.go:141] libmachine: Making call to close connection to plugin binary
I1017 19:09:20.941005   87961 main.go:141] libmachine: Making call to close driver server
I1017 19:09:20.941015   87961 main.go:141] libmachine: (functional-088611) Calling .Close
I1017 19:09:20.941317   87961 main.go:141] libmachine: Successfully made call to close driver server
I1017 19:09:20.941336   87961 main.go:141] libmachine: Making call to close connection to plugin binary
I1017 19:09:20.941335   87961 main.go:141] libmachine: (functional-088611) DBG | Closing plugin on server side
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.39s)
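The buildkit trace above (steps #1 through #8) suggests that the testdata/build context holds a small three-step Dockerfile, roughly FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /, although the file itself is not shown in the log. A minimal sketch of the commands the test runs, assuming that same build context, would be:

  # Check whether buildkitd is already running in the guest, then build inside the cluster and verify the result
  out/minikube-linux-amd64 -p functional-088611 ssh pgrep buildkitd
  out/minikube-linux-amd64 -p functional-088611 image build -t localhost/my-image:functional-088611 testdata/build --alsologtostderr
  out/minikube-linux-amd64 -p functional-088611 image ls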

TestFunctional/parallel/ImageCommands/Setup (2.48s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.45902615s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-088611
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.48s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image load --daemon kicbase/echo-server:functional-088611 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-088611 image load --daemon kicbase/echo-server:functional-088611 --alsologtostderr: (1.23099962s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image load --daemon kicbase/echo-server:functional-088611 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.24s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:250: (dbg) Done: docker pull kicbase/echo-server:latest: (1.142596033s)
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-088611
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image load --daemon kicbase/echo-server:functional-088611 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-088611 image load --daemon kicbase/echo-server:functional-088611 --alsologtostderr: (1.028595092s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.44s)
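The Setup, ImageLoadDaemon, ImageReloadDaemon and ImageTagAndLoadDaemon subtests form one workflow: pull and tag an image on the host, then load it into the cluster's containerd store. A condensed sketch built from the commands in the log would be:

  # Tag a host image for the profile and push it into the cluster image store
  docker pull kicbase/echo-server:latest
  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-088611
  out/minikube-linux-amd64 -p functional-088611 image load --daemon kicbase/echo-server:functional-088611 --alsologtostderr
  out/minikube-linux-amd64 -p functional-088611 image ls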

TestFunctional/parallel/MountCmd/specific-port (1.7s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-088611 /tmp/TestFunctionalparallelMountCmdspecific-port2731930780/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-088611 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (211.292925ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1017 19:09:10.684099   78783 retry.go:31] will retry after 327.236848ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-088611 /tmp/TestFunctionalparallelMountCmdspecific-port2731930780/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-088611 ssh "sudo umount -f /mount-9p": exit status 1 (241.425619ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-088611 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-088611 /tmp/TestFunctionalparallelMountCmdspecific-port2731930780/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)
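A minimal sketch of the mount flow exercised above, with /tmp/mount-src standing in for the test's temporary directory, would be:

  # Start the 9p mount on a fixed port in the background, then verify and unmount it
  out/minikube-linux-amd64 mount -p functional-088611 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 --port 46464 &
  # the mount can take a moment to appear; the test retries the findmnt check once
  out/minikube-linux-amd64 -p functional-088611 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-088611 ssh -- ls -la /mount-9p
  out/minikube-linux-amd64 -p functional-088611 ssh "sudo umount -f /mount-9p"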

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image save kicbase/echo-server:functional-088611 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image rm kicbase/echo-server:functional-088611 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-088611 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3223749435/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-088611 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3223749435/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-088611 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3223749435/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-088611 ssh "findmnt -T" /mount1: exit status 1 (289.303105ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1017 19:09:12.464543   78783 retry.go:31] will retry after 679.426112ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-088611 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-088611 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3223749435/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-088611 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3223749435/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-088611 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3223749435/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)
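The cleanup path verified above relies on the --kill flag shown in the log, which stops every mount helper started for the profile; a sketch, again with a placeholder source directory, would be:

  # Start a mount, confirm it, then kill all mount processes for the profile in one go
  out/minikube-linux-amd64 mount -p functional-088611 /tmp/mount-src:/mount1 --alsologtostderr -v=1 &
  out/minikube-linux-amd64 -p functional-088611 ssh "findmnt -T" /mount1
  out/minikube-linux-amd64 mount -p functional-088611 --kill=true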

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-088611
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-088611 image save --daemon kicbase/echo-server:functional-088611 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-088611
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
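ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together exercise a save/remove/reload roundtrip; a condensed sketch of that flow, with ./echo-server-save.tar as a placeholder for the workspace path used in the log, would be:

  # Save the image to a tarball, drop it from the cluster, then load it back from the file
  out/minikube-linux-amd64 -p functional-088611 image save kicbase/echo-server:functional-088611 ./echo-server-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-088611 image rm kicbase/echo-server:functional-088611 --alsologtostderr
  out/minikube-linux-amd64 -p functional-088611 image load ./echo-server-save.tar --alsologtostderr
  # Export the image from the cluster back into the host docker daemon and confirm it arrived
  docker rmi kicbase/echo-server:functional-088611
  out/minikube-linux-amd64 -p functional-088611 image save --daemon kicbase/echo-server:functional-088611 --alsologtostderr
  docker image inspect kicbase/echo-server:functional-088611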

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-088611
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-088611
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-088611
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (208.59s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1017 19:10:57.561193   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:11:25.268392   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-011961 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (3m27.868329264s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (208.59s)
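For reference, the cluster exercised below can be recreated with the same flags; this is a sketch that reuses the run's profile name and local binary path and drops only the verbose logging flags:

    PROFILE=ha-011961
    MINIKUBE=out/minikube-linux-amd64

    # Three control-plane nodes (--ha), waiting for all components before returning.
    "$MINIKUBE" -p "$PROFILE" start --ha --memory 3072 --wait true \
        --driver=kvm2 --container-runtime=containerd --auto-update-drivers=false
    "$MINIKUBE" -p "$PROFILE" status --alsologtostderr -v 5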

                                                
                                    
TestMultiControlPlane/serial/DeployApp (9.8s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-011961 kubectl -- rollout status deployment/busybox: (7.473934516s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- exec busybox-7b57f96db7-6hfp8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- exec busybox-7b57f96db7-bpp2r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- exec busybox-7b57f96db7-zklzb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- exec busybox-7b57f96db7-6hfp8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- exec busybox-7b57f96db7-bpp2r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- exec busybox-7b57f96db7-zklzb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- exec busybox-7b57f96db7-6hfp8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- exec busybox-7b57f96db7-bpp2r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- exec busybox-7b57f96db7-zklzb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.80s)
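A condensed shell sketch of the deploy-and-resolve loop above; the manifest path is the test's own (relative to a minikube source checkout), and looping over every default-namespace pod is a simplification that works here because only the busybox replicas are deployed:

    PROFILE=ha-011961
    MINIKUBE=out/minikube-linux-amd64

    "$MINIKUBE" -p "$PROFILE" kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    "$MINIKUBE" -p "$PROFILE" kubectl -- rollout status deployment/busybox

    # Run the same lookup in every replica.
    for pod in $("$MINIKUBE" -p "$PROFILE" kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'); do
        "$MINIKUBE" -p "$PROFILE" kubectl -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done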

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.28s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- exec busybox-7b57f96db7-6hfp8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- exec busybox-7b57f96db7-6hfp8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- exec busybox-7b57f96db7-bpp2r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- exec busybox-7b57f96db7-bpp2r -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- exec busybox-7b57f96db7-zklzb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 kubectl -- exec busybox-7b57f96db7-zklzb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.28s)
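The host-reachability check is a two-step pipeline: resolve host.minikube.internal inside the pod, then ping the returned address. A sketch using one of this run's pod names (a fresh run would have different names); awk 'NR==5' simply picks the answer line of busybox nslookup output:

    PROFILE=ha-011961
    MINIKUBE=out/minikube-linux-amd64
    POD=busybox-7b57f96db7-6hfp8   # pod name from this run

    HOST_IP=$("$MINIKUBE" -p "$PROFILE" kubectl -- exec "$POD" -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    "$MINIKUBE" -p "$PROFILE" kubectl -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"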

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (49.66s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 node add --alsologtostderr -v 5
E1017 19:13:35.020735   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:13:35.027283   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:13:35.038717   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:13:35.060117   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:13:35.101605   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:13:35.183063   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:13:35.344689   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:13:35.667167   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:13:36.308954   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:13:37.590800   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:13:40.153141   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:13:45.274497   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:13:55.515788   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-011961 node add --alsologtostderr -v 5: (48.713212379s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.66s)
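Adding the worker reduces to one command plus a status check; the later AddSecondaryNode step uses the same command with --control-plane. A sketch with this run's profile:

    PROFILE=ha-011961
    MINIKUBE=out/minikube-linux-amd64

    "$MINIKUBE" -p "$PROFILE" node add --alsologtostderr -v 5   # adds a worker (m04 in this run)
    "$MINIKUBE" -p "$PROFILE" status --alsologtostderr -v 5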

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-011961 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.64s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp testdata/cp-test.txt ha-011961:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp ha-011961:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1422078464/001/cp-test_ha-011961.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp ha-011961:/home/docker/cp-test.txt ha-011961-m02:/home/docker/cp-test_ha-011961_ha-011961-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m02 "sudo cat /home/docker/cp-test_ha-011961_ha-011961-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp ha-011961:/home/docker/cp-test.txt ha-011961-m03:/home/docker/cp-test_ha-011961_ha-011961-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m03 "sudo cat /home/docker/cp-test_ha-011961_ha-011961-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp ha-011961:/home/docker/cp-test.txt ha-011961-m04:/home/docker/cp-test_ha-011961_ha-011961-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m04 "sudo cat /home/docker/cp-test_ha-011961_ha-011961-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp testdata/cp-test.txt ha-011961-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp ha-011961-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1422078464/001/cp-test_ha-011961-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp ha-011961-m02:/home/docker/cp-test.txt ha-011961:/home/docker/cp-test_ha-011961-m02_ha-011961.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961 "sudo cat /home/docker/cp-test_ha-011961-m02_ha-011961.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp ha-011961-m02:/home/docker/cp-test.txt ha-011961-m03:/home/docker/cp-test_ha-011961-m02_ha-011961-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m03 "sudo cat /home/docker/cp-test_ha-011961-m02_ha-011961-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp ha-011961-m02:/home/docker/cp-test.txt ha-011961-m04:/home/docker/cp-test_ha-011961-m02_ha-011961-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m04 "sudo cat /home/docker/cp-test_ha-011961-m02_ha-011961-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp testdata/cp-test.txt ha-011961-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp ha-011961-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1422078464/001/cp-test_ha-011961-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp ha-011961-m03:/home/docker/cp-test.txt ha-011961:/home/docker/cp-test_ha-011961-m03_ha-011961.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961 "sudo cat /home/docker/cp-test_ha-011961-m03_ha-011961.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp ha-011961-m03:/home/docker/cp-test.txt ha-011961-m02:/home/docker/cp-test_ha-011961-m03_ha-011961-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m02 "sudo cat /home/docker/cp-test_ha-011961-m03_ha-011961-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp ha-011961-m03:/home/docker/cp-test.txt ha-011961-m04:/home/docker/cp-test_ha-011961-m03_ha-011961-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m04 "sudo cat /home/docker/cp-test_ha-011961-m03_ha-011961-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp testdata/cp-test.txt ha-011961-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp ha-011961-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1422078464/001/cp-test_ha-011961-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp ha-011961-m04:/home/docker/cp-test.txt ha-011961:/home/docker/cp-test_ha-011961-m04_ha-011961.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961 "sudo cat /home/docker/cp-test_ha-011961-m04_ha-011961.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp ha-011961-m04:/home/docker/cp-test.txt ha-011961-m02:/home/docker/cp-test_ha-011961-m04_ha-011961-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m02 "sudo cat /home/docker/cp-test_ha-011961-m04_ha-011961-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 cp ha-011961-m04:/home/docker/cp-test.txt ha-011961-m03:/home/docker/cp-test_ha-011961-m04_ha-011961-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 ssh -n ha-011961-m03 "sudo cat /home/docker/cp-test_ha-011961-m04_ha-011961-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.64s)
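The copy matrix above repeats three directions over and over; a trimmed sketch with illustrative destination file names (only the testdata source path and the /home/docker target directory are taken from the log):

    PROFILE=ha-011961
    MINIKUBE=out/minikube-linux-amd64

    # host -> node, then verify on the node
    "$MINIKUBE" -p "$PROFILE" cp testdata/cp-test.txt "$PROFILE":/home/docker/cp-test.txt
    "$MINIKUBE" -p "$PROFILE" ssh -n "$PROFILE" "sudo cat /home/docker/cp-test.txt"

    # node -> host
    "$MINIKUBE" -p "$PROFILE" cp "$PROFILE":/home/docker/cp-test.txt /tmp/cp-test_copy.txt

    # node -> node, then verify on the target node
    "$MINIKUBE" -p "$PROFILE" cp "$PROFILE":/home/docker/cp-test.txt "$PROFILE"-m02:/home/docker/cp-test_copy.txt
    "$MINIKUBE" -p "$PROFILE" ssh -n "$PROFILE"-m02 "sudo cat /home/docker/cp-test_copy.txt"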

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (90.58s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 node stop m02 --alsologtostderr -v 5
E1017 19:14:15.997198   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:14:56.960184   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-011961 node stop m02 --alsologtostderr -v 5: (1m29.874403955s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-011961 status --alsologtostderr -v 5: exit status 7 (705.91499ms)

                                                
                                                
-- stdout --
	ha-011961
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-011961-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-011961-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-011961-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:15:41.210498   92623 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:15:41.210785   92623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:15:41.210795   92623 out.go:374] Setting ErrFile to fd 2...
	I1017 19:15:41.210799   92623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:15:41.211060   92623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-74819/.minikube/bin
	I1017 19:15:41.211240   92623 out.go:368] Setting JSON to false
	I1017 19:15:41.211270   92623 mustload.go:65] Loading cluster: ha-011961
	I1017 19:15:41.211425   92623 notify.go:220] Checking for updates...
	I1017 19:15:41.211665   92623 config.go:182] Loaded profile config "ha-011961": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1017 19:15:41.211684   92623 status.go:174] checking status of ha-011961 ...
	I1017 19:15:41.212176   92623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:15:41.212219   92623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:15:41.234352   92623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I1017 19:15:41.234882   92623 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:15:41.235502   92623 main.go:141] libmachine: Using API Version  1
	I1017 19:15:41.235529   92623 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:15:41.236016   92623 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:15:41.236286   92623 main.go:141] libmachine: (ha-011961) Calling .GetState
	I1017 19:15:41.238617   92623 status.go:371] ha-011961 host status = "Running" (err=<nil>)
	I1017 19:15:41.238637   92623 host.go:66] Checking if "ha-011961" exists ...
	I1017 19:15:41.238998   92623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:15:41.239060   92623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:15:41.252941   92623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44121
	I1017 19:15:41.253410   92623 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:15:41.253890   92623 main.go:141] libmachine: Using API Version  1
	I1017 19:15:41.253911   92623 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:15:41.254296   92623 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:15:41.254568   92623 main.go:141] libmachine: (ha-011961) Calling .GetIP
	I1017 19:15:41.258514   92623 main.go:141] libmachine: (ha-011961) DBG | domain ha-011961 has defined MAC address 52:54:00:2e:19:fb in network mk-ha-011961
	I1017 19:15:41.259160   92623 main.go:141] libmachine: (ha-011961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:19:fb", ip: ""} in network mk-ha-011961: {Iface:virbr1 ExpiryTime:2025-10-17 20:09:42 +0000 UTC Type:0 Mac:52:54:00:2e:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-011961 Clientid:01:52:54:00:2e:19:fb}
	I1017 19:15:41.259208   92623 main.go:141] libmachine: (ha-011961) DBG | domain ha-011961 has defined IP address 192.168.39.220 and MAC address 52:54:00:2e:19:fb in network mk-ha-011961
	I1017 19:15:41.259406   92623 host.go:66] Checking if "ha-011961" exists ...
	I1017 19:15:41.259736   92623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:15:41.259779   92623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:15:41.273818   92623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42373
	I1017 19:15:41.274284   92623 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:15:41.274756   92623 main.go:141] libmachine: Using API Version  1
	I1017 19:15:41.274781   92623 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:15:41.275222   92623 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:15:41.275422   92623 main.go:141] libmachine: (ha-011961) Calling .DriverName
	I1017 19:15:41.275624   92623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:15:41.275648   92623 main.go:141] libmachine: (ha-011961) Calling .GetSSHHostname
	I1017 19:15:41.278746   92623 main.go:141] libmachine: (ha-011961) DBG | domain ha-011961 has defined MAC address 52:54:00:2e:19:fb in network mk-ha-011961
	I1017 19:15:41.279373   92623 main.go:141] libmachine: (ha-011961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:19:fb", ip: ""} in network mk-ha-011961: {Iface:virbr1 ExpiryTime:2025-10-17 20:09:42 +0000 UTC Type:0 Mac:52:54:00:2e:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-011961 Clientid:01:52:54:00:2e:19:fb}
	I1017 19:15:41.279400   92623 main.go:141] libmachine: (ha-011961) DBG | domain ha-011961 has defined IP address 192.168.39.220 and MAC address 52:54:00:2e:19:fb in network mk-ha-011961
	I1017 19:15:41.279543   92623 main.go:141] libmachine: (ha-011961) Calling .GetSSHPort
	I1017 19:15:41.279762   92623 main.go:141] libmachine: (ha-011961) Calling .GetSSHKeyPath
	I1017 19:15:41.279908   92623 main.go:141] libmachine: (ha-011961) Calling .GetSSHUsername
	I1017 19:15:41.280061   92623 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-74819/.minikube/machines/ha-011961/id_rsa Username:docker}
	I1017 19:15:41.373011   92623 ssh_runner.go:195] Run: systemctl --version
	I1017 19:15:41.380547   92623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:15:41.404078   92623 kubeconfig.go:125] found "ha-011961" server: "https://192.168.39.254:8443"
	I1017 19:15:41.404120   92623 api_server.go:166] Checking apiserver status ...
	I1017 19:15:41.404169   92623 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:15:41.429518   92623 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W1017 19:15:41.444117   92623 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:15:41.444188   92623 ssh_runner.go:195] Run: ls
	I1017 19:15:41.450184   92623 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1017 19:15:41.457416   92623 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1017 19:15:41.457447   92623 status.go:463] ha-011961 apiserver status = Running (err=<nil>)
	I1017 19:15:41.457458   92623 status.go:176] ha-011961 status: &{Name:ha-011961 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:15:41.457476   92623 status.go:174] checking status of ha-011961-m02 ...
	I1017 19:15:41.457914   92623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:15:41.457967   92623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:15:41.472109   92623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I1017 19:15:41.472728   92623 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:15:41.473319   92623 main.go:141] libmachine: Using API Version  1
	I1017 19:15:41.473341   92623 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:15:41.473666   92623 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:15:41.473886   92623 main.go:141] libmachine: (ha-011961-m02) Calling .GetState
	I1017 19:15:41.475621   92623 status.go:371] ha-011961-m02 host status = "Stopped" (err=<nil>)
	I1017 19:15:41.475637   92623 status.go:384] host is not running, skipping remaining checks
	I1017 19:15:41.475643   92623 status.go:176] ha-011961-m02 status: &{Name:ha-011961-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:15:41.475660   92623 status.go:174] checking status of ha-011961-m03 ...
	I1017 19:15:41.475961   92623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:15:41.476016   92623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:15:41.489457   92623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46757
	I1017 19:15:41.490029   92623 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:15:41.490518   92623 main.go:141] libmachine: Using API Version  1
	I1017 19:15:41.490541   92623 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:15:41.490952   92623 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:15:41.491179   92623 main.go:141] libmachine: (ha-011961-m03) Calling .GetState
	I1017 19:15:41.493034   92623 status.go:371] ha-011961-m03 host status = "Running" (err=<nil>)
	I1017 19:15:41.493053   92623 host.go:66] Checking if "ha-011961-m03" exists ...
	I1017 19:15:41.493342   92623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:15:41.493379   92623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:15:41.506613   92623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40141
	I1017 19:15:41.507173   92623 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:15:41.507685   92623 main.go:141] libmachine: Using API Version  1
	I1017 19:15:41.507710   92623 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:15:41.508075   92623 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:15:41.508262   92623 main.go:141] libmachine: (ha-011961-m03) Calling .GetIP
	I1017 19:15:41.511072   92623 main.go:141] libmachine: (ha-011961-m03) DBG | domain ha-011961-m03 has defined MAC address 52:54:00:6d:76:3f in network mk-ha-011961
	I1017 19:15:41.511519   92623 main.go:141] libmachine: (ha-011961-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:76:3f", ip: ""} in network mk-ha-011961: {Iface:virbr1 ExpiryTime:2025-10-17 20:11:45 +0000 UTC Type:0 Mac:52:54:00:6d:76:3f Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-011961-m03 Clientid:01:52:54:00:6d:76:3f}
	I1017 19:15:41.511547   92623 main.go:141] libmachine: (ha-011961-m03) DBG | domain ha-011961-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:6d:76:3f in network mk-ha-011961
	I1017 19:15:41.511754   92623 host.go:66] Checking if "ha-011961-m03" exists ...
	I1017 19:15:41.512157   92623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:15:41.512203   92623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:15:41.525945   92623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36527
	I1017 19:15:41.526417   92623 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:15:41.526876   92623 main.go:141] libmachine: Using API Version  1
	I1017 19:15:41.526904   92623 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:15:41.527265   92623 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:15:41.527469   92623 main.go:141] libmachine: (ha-011961-m03) Calling .DriverName
	I1017 19:15:41.527718   92623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:15:41.527743   92623 main.go:141] libmachine: (ha-011961-m03) Calling .GetSSHHostname
	I1017 19:15:41.530946   92623 main.go:141] libmachine: (ha-011961-m03) DBG | domain ha-011961-m03 has defined MAC address 52:54:00:6d:76:3f in network mk-ha-011961
	I1017 19:15:41.531504   92623 main.go:141] libmachine: (ha-011961-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:76:3f", ip: ""} in network mk-ha-011961: {Iface:virbr1 ExpiryTime:2025-10-17 20:11:45 +0000 UTC Type:0 Mac:52:54:00:6d:76:3f Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-011961-m03 Clientid:01:52:54:00:6d:76:3f}
	I1017 19:15:41.531533   92623 main.go:141] libmachine: (ha-011961-m03) DBG | domain ha-011961-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:6d:76:3f in network mk-ha-011961
	I1017 19:15:41.531663   92623 main.go:141] libmachine: (ha-011961-m03) Calling .GetSSHPort
	I1017 19:15:41.531843   92623 main.go:141] libmachine: (ha-011961-m03) Calling .GetSSHKeyPath
	I1017 19:15:41.531944   92623 main.go:141] libmachine: (ha-011961-m03) Calling .GetSSHUsername
	I1017 19:15:41.532072   92623 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-74819/.minikube/machines/ha-011961-m03/id_rsa Username:docker}
	I1017 19:15:41.624343   92623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:15:41.646450   92623 kubeconfig.go:125] found "ha-011961" server: "https://192.168.39.254:8443"
	I1017 19:15:41.646483   92623 api_server.go:166] Checking apiserver status ...
	I1017 19:15:41.646518   92623 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:15:41.669315   92623 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup
	W1017 19:15:41.681356   92623 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1551/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:15:41.681440   92623 ssh_runner.go:195] Run: ls
	I1017 19:15:41.686998   92623 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1017 19:15:41.692069   92623 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1017 19:15:41.692094   92623 status.go:463] ha-011961-m03 apiserver status = Running (err=<nil>)
	I1017 19:15:41.692103   92623 status.go:176] ha-011961-m03 status: &{Name:ha-011961-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:15:41.692130   92623 status.go:174] checking status of ha-011961-m04 ...
	I1017 19:15:41.692431   92623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:15:41.692464   92623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:15:41.706019   92623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I1017 19:15:41.706470   92623 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:15:41.706951   92623 main.go:141] libmachine: Using API Version  1
	I1017 19:15:41.706972   92623 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:15:41.707399   92623 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:15:41.707627   92623 main.go:141] libmachine: (ha-011961-m04) Calling .GetState
	I1017 19:15:41.709520   92623 status.go:371] ha-011961-m04 host status = "Running" (err=<nil>)
	I1017 19:15:41.709536   92623 host.go:66] Checking if "ha-011961-m04" exists ...
	I1017 19:15:41.709838   92623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:15:41.709885   92623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:15:41.725113   92623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40799
	I1017 19:15:41.725622   92623 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:15:41.726129   92623 main.go:141] libmachine: Using API Version  1
	I1017 19:15:41.726199   92623 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:15:41.726656   92623 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:15:41.726880   92623 main.go:141] libmachine: (ha-011961-m04) Calling .GetIP
	I1017 19:15:41.730524   92623 main.go:141] libmachine: (ha-011961-m04) DBG | domain ha-011961-m04 has defined MAC address 52:54:00:b2:9d:de in network mk-ha-011961
	I1017 19:15:41.731081   92623 main.go:141] libmachine: (ha-011961-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:9d:de", ip: ""} in network mk-ha-011961: {Iface:virbr1 ExpiryTime:2025-10-17 20:13:23 +0000 UTC Type:0 Mac:52:54:00:b2:9d:de Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-011961-m04 Clientid:01:52:54:00:b2:9d:de}
	I1017 19:15:41.731112   92623 main.go:141] libmachine: (ha-011961-m04) DBG | domain ha-011961-m04 has defined IP address 192.168.39.35 and MAC address 52:54:00:b2:9d:de in network mk-ha-011961
	I1017 19:15:41.731326   92623 host.go:66] Checking if "ha-011961-m04" exists ...
	I1017 19:15:41.731635   92623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:15:41.731677   92623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:15:41.746070   92623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34039
	I1017 19:15:41.746506   92623 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:15:41.746999   92623 main.go:141] libmachine: Using API Version  1
	I1017 19:15:41.747022   92623 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:15:41.747385   92623 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:15:41.747576   92623 main.go:141] libmachine: (ha-011961-m04) Calling .DriverName
	I1017 19:15:41.747784   92623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:15:41.747808   92623 main.go:141] libmachine: (ha-011961-m04) Calling .GetSSHHostname
	I1017 19:15:41.750937   92623 main.go:141] libmachine: (ha-011961-m04) DBG | domain ha-011961-m04 has defined MAC address 52:54:00:b2:9d:de in network mk-ha-011961
	I1017 19:15:41.751384   92623 main.go:141] libmachine: (ha-011961-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:9d:de", ip: ""} in network mk-ha-011961: {Iface:virbr1 ExpiryTime:2025-10-17 20:13:23 +0000 UTC Type:0 Mac:52:54:00:b2:9d:de Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:ha-011961-m04 Clientid:01:52:54:00:b2:9d:de}
	I1017 19:15:41.751410   92623 main.go:141] libmachine: (ha-011961-m04) DBG | domain ha-011961-m04 has defined IP address 192.168.39.35 and MAC address 52:54:00:b2:9d:de in network mk-ha-011961
	I1017 19:15:41.751562   92623 main.go:141] libmachine: (ha-011961-m04) Calling .GetSSHPort
	I1017 19:15:41.751717   92623 main.go:141] libmachine: (ha-011961-m04) Calling .GetSSHKeyPath
	I1017 19:15:41.751935   92623 main.go:141] libmachine: (ha-011961-m04) Calling .GetSSHUsername
	I1017 19:15:41.752093   92623 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-74819/.minikube/machines/ha-011961-m04/id_rsa Username:docker}
	I1017 19:15:41.843430   92623 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:15:41.864406   92623 status.go:176] ha-011961-m04 status: &{Name:ha-011961-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (90.58s)
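A sketch of the stop-and-inspect sequence: "status" keeps printing per-node state but exits non-zero (7 in this run) once any node is down, which is why the non-zero exit above is expected rather than a failure:

    PROFILE=ha-011961
    MINIKUBE=out/minikube-linux-amd64

    "$MINIKUBE" -p "$PROFILE" node stop m02 --alsologtostderr -v 5
    "$MINIKUBE" -p "$PROFILE" status --alsologtostderr -v 5
    echo "status exit code: $?"    # 0 only when every node is up; 7 here because m02 is stopped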

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (30.11s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 node start m02 --alsologtostderr -v 5
E1017 19:15:57.561029   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-011961 node start m02 --alsologtostderr -v 5: (28.913494407s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-011961 status --alsologtostderr -v 5: (1.121148622s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.029866876s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (372.93s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 stop --alsologtostderr -v 5
E1017 19:16:18.881573   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:18:35.026906   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:19:02.727158   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-011961 stop --alsologtostderr -v 5: (4m13.480311534s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 start --wait true --alsologtostderr -v 5
E1017 19:20:57.562089   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:22:20.630291   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-011961 start --wait true --alsologtostderr -v 5: (1m59.331437454s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (372.93s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.21s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-011961 node delete m03 --alsologtostderr -v 5: (6.425286725s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.21s)
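Node removal plus the readiness re-check, sketched with the same go-template the test passes to kubectl (kubectl talks to whatever context the profile has configured):

    PROFILE=ha-011961
    MINIKUBE=out/minikube-linux-amd64

    "$MINIKUBE" -p "$PROFILE" node delete m03 --alsologtostderr -v 5
    "$MINIKUBE" -p "$PROFILE" status --alsologtostderr -v 5

    # Every remaining node should report Ready=True.
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'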

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (264.08s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 stop --alsologtostderr -v 5
E1017 19:23:35.021269   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:25:57.562617   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-011961 stop --alsologtostderr -v 5: (4m23.959560372s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-011961 status --alsologtostderr -v 5: exit status 7 (118.174363ms)

                                                
                                                
-- stdout --
	ha-011961
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-011961-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-011961-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:26:58.548291   96745 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:26:58.548545   96745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:26:58.548553   96745 out.go:374] Setting ErrFile to fd 2...
	I1017 19:26:58.548556   96745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:26:58.548733   96745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-74819/.minikube/bin
	I1017 19:26:58.548929   96745 out.go:368] Setting JSON to false
	I1017 19:26:58.548958   96745 mustload.go:65] Loading cluster: ha-011961
	I1017 19:26:58.549021   96745 notify.go:220] Checking for updates...
	I1017 19:26:58.549352   96745 config.go:182] Loaded profile config "ha-011961": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1017 19:26:58.549367   96745 status.go:174] checking status of ha-011961 ...
	I1017 19:26:58.549786   96745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:26:58.549826   96745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:26:58.573160   96745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43877
	I1017 19:26:58.573759   96745 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:26:58.574411   96745 main.go:141] libmachine: Using API Version  1
	I1017 19:26:58.574437   96745 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:26:58.574839   96745 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:26:58.575116   96745 main.go:141] libmachine: (ha-011961) Calling .GetState
	I1017 19:26:58.577604   96745 status.go:371] ha-011961 host status = "Stopped" (err=<nil>)
	I1017 19:26:58.577623   96745 status.go:384] host is not running, skipping remaining checks
	I1017 19:26:58.577630   96745 status.go:176] ha-011961 status: &{Name:ha-011961 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:26:58.577663   96745 status.go:174] checking status of ha-011961-m02 ...
	I1017 19:26:58.578145   96745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:26:58.578208   96745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:26:58.592265   96745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I1017 19:26:58.592734   96745 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:26:58.593217   96745 main.go:141] libmachine: Using API Version  1
	I1017 19:26:58.593236   96745 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:26:58.593594   96745 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:26:58.593767   96745 main.go:141] libmachine: (ha-011961-m02) Calling .GetState
	I1017 19:26:58.595518   96745 status.go:371] ha-011961-m02 host status = "Stopped" (err=<nil>)
	I1017 19:26:58.595532   96745 status.go:384] host is not running, skipping remaining checks
	I1017 19:26:58.595538   96745 status.go:176] ha-011961-m02 status: &{Name:ha-011961-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:26:58.595563   96745 status.go:174] checking status of ha-011961-m04 ...
	I1017 19:26:58.595885   96745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:26:58.595919   96745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:26:58.609338   96745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34229
	I1017 19:26:58.609844   96745 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:26:58.610371   96745 main.go:141] libmachine: Using API Version  1
	I1017 19:26:58.610395   96745 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:26:58.610746   96745 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:26:58.610927   96745 main.go:141] libmachine: (ha-011961-m04) Calling .GetState
	I1017 19:26:58.612644   96745 status.go:371] ha-011961-m04 host status = "Stopped" (err=<nil>)
	I1017 19:26:58.612663   96745 status.go:384] host is not running, skipping remaining checks
	I1017 19:26:58.612668   96745 status.go:176] ha-011961-m04 status: &{Name:ha-011961-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (264.08s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (117.01s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1017 19:28:35.021180   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-011961 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m56.201730794s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (117.01s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (84.28s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 node add --control-plane --alsologtostderr -v 5
E1017 19:29:58.089248   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-011961 node add --control-plane --alsologtostderr -v 5: (1m23.326848104s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-011961 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.28s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

                                                
                                    
TestJSONOutput/start/Command (87.64s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-771909 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1017 19:30:57.561815   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-771909 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m27.641433965s)
--- PASS: TestJSONOutput/start/Command (87.64s)
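A sketch that replays the JSON-mode start and checks every emitted line parses as JSON. The python3 validation step is an assumption (any JSON parser would do); the Audit and parallel sub-tests that follow appear to only inspect output captured during this step, which is why they report 0.00s:

    PROFILE=json-output-771909
    MINIKUBE=out/minikube-linux-amd64

    "$MINIKUBE" start -p "$PROFILE" --output=json --user=testUser --memory=3072 --wait=true \
        --driver=kvm2 --container-runtime=containerd --auto-update-drivers=false |
    while IFS= read -r line; do
        printf '%s\n' "$line" | python3 -m json.tool >/dev/null || echo "not JSON: $line"
    done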

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-771909 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-771909 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.71s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (1.9s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-771909 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-771909 --output=json --user=testUser: (1.904534787s)
--- PASS: TestJSONOutput/stop/Command (1.90s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-534801 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-534801 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (63.362132ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ee69ef31-b474-4692-8251-9e4b137ea5c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-534801] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"697dda7b-eab3-48d8-898c-7ad8f753fa3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21753"}}
	{"specversion":"1.0","id":"e4481e18-fc7a-4e05-b675-067e44d256b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"74bbd769-63c3-4609-b726-69149fbc4d9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21753-74819/kubeconfig"}}
	{"specversion":"1.0","id":"a8f17eb0-7371-4022-a559-834dc57f68b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-74819/.minikube"}}
	{"specversion":"1.0","id":"0c29b358-bad9-45c8-9e90-c982a3c31e5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"37c19843-cbce-4086-ae9e-50106506b771","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1ac08fbe-66a0-4179-a1cc-fa5c2e03867d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-534801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-534801
--- PASS: TestErrorJSONOutput (0.21s)
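The stdout above is a stream of line-delimited CloudEvents-style JSON, one event per line, ending in an io.k8s.sigs.minikube.error event with exit code 56. As a rough illustration only (not code from the minikube repo), the Go sketch below decodes such lines into a small struct whose field names mirror the keys visible above; anything beyond those keys is an assumption.

// sketch_events.go — a minimal sketch that decodes the line-delimited
// CloudEvents emitted by `minikube start --output=json`, as seen in the
// TestErrorJSONOutput stdout above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the keys visible in the report: specversion, id, source,
// type, datacontenttype and a free-form data object of string values.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Pipe minikube's JSON output into this program, e.g.:
	//   minikube start -p demo --output=json | go run sketch_events.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// The error event in the log carries name/exitcode/message fields.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			continue
		}
		fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
	}
}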

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (87.65s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-097587 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-097587 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (42.989101593s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-099790 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-099790 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (41.79834147s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-097587
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-099790
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-099790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-099790
helpers_test.go:175: Cleaning up "first-097587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-097587
--- PASS: TestMinikubeProfile (87.65s)
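TestMinikubeProfile drives `profile list -ojson` after starting two profiles. The sketch below is a speculative consumer of that JSON: the `valid`/`invalid` keys and the `Name` field are assumptions about the output shape, not something this report asserts, so adjust the struct if the real schema differs.

// sketch_profiles.go — a rough sketch (schema assumed, see note above) of
// reading `minikube profile list -ojson`, the command exercised here.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileList assumes the output groups profiles under "valid"/"invalid"
// arrays whose entries carry at least a "Name" field.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
	Invalid []struct {
		Name string `json:"Name"`
	} `json:"invalid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatalf("profile list: %v", err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
	for _, p := range pl.Invalid {
		fmt.Println("invalid profile:", p.Name)
	}
}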

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (24.95s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-184883 --memory=3072 --mount-string /tmp/TestMountStartserial2725802880/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1017 19:33:35.027100   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-184883 --memory=3072 --mount-string /tmp/TestMountStartserial2725802880/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (23.951546925s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.95s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-184883 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-184883 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
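The VerifyMount* steps check the mount by running `findmnt --json /minikube-host` inside the guest over `minikube ssh`. The sketch below reproduces that check from the host side; the findmnt JSON keys (`filesystems`, `target`, `fstype`) are assumptions about util-linux output rather than anything shown in this report.

// sketch_mountcheck.go — a small sketch reproducing the VerifyMount* checks:
// list the mount over `minikube ssh` and confirm findmnt reports the target.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// findmntOutput assumes the usual util-linux JSON layout.
type findmntOutput struct {
	Filesystems []struct {
		Target string `json:"target"`
		Source string `json:"source"`
		FSType string `json:"fstype"`
	} `json:"filesystems"`
}

func main() {
	const profile = "mount-start-1-184883" // profile name taken from the log
	out, err := exec.Command("minikube", "-p", profile,
		"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		log.Fatalf("findmnt over ssh failed: %v", err)
	}
	var fm findmntOutput
	if err := json.Unmarshal(out, &fm); err != nil {
		log.Fatalf("decode findmnt output: %v", err)
	}
	for _, fs := range fm.Filesystems {
		if fs.Target == "/minikube-host" {
			fmt.Printf("mounted: %s (%s) from %s\n", fs.Target, fs.FSType, fs.Source)
			return
		}
	}
	log.Fatal("/minikube-host not mounted")
}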

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (22.21s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-199661 --memory=3072 --mount-string /tmp/TestMountStartserial2725802880/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-199661 --memory=3072 --mount-string /tmp/TestMountStartserial2725802880/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (21.208776976s)
--- PASS: TestMountStart/serial/StartWithMountSecond (22.21s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-199661 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-199661 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-184883 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.72s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-199661 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-199661 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-199661
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-199661: (1.349546352s)
--- PASS: TestMountStart/serial/Stop (1.35s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (21.52s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-199661
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-199661: (20.523557785s)
--- PASS: TestMountStart/serial/RestartStopped (21.52s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-199661 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-199661 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (106.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-318791 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1017 19:35:57.561014   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-318791 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m46.311214007s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.77s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (7.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-318791 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-318791 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-318791 -- rollout status deployment/busybox: (5.464323202s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-318791 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-318791 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-318791 -- exec busybox-7b57f96db7-46crg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-318791 -- exec busybox-7b57f96db7-4xl5q -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-318791 -- exec busybox-7b57f96db7-46crg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-318791 -- exec busybox-7b57f96db7-4xl5q -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-318791 -- exec busybox-7b57f96db7-46crg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-318791 -- exec busybox-7b57f96db7-4xl5q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.15s)
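The DeployApp2Nodes step resolves kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local from each busybox pod via `kubectl exec ... nslookup`. Below is a minimal stand-alone sketch of the same loop, with pod names discovered at runtime since the hashed names above (busybox-7b57f96db7-...) change between runs.

// sketch_dnscheck.go — a sketch of the DNS checks above: for each busybox pod,
// run nslookup against the three names the test uses.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile = "multinode-318791" // profile name taken from the log
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

	// List the pod names, mirroring the jsonpath query in the test.
	out, err := exec.Command("minikube", "kubectl", "-p", profile, "--",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatalf("list pods: %v", err)
	}
	for _, pod := range strings.Fields(string(out)) {
		for _, name := range names {
			cmd := exec.Command("minikube", "kubectl", "-p", profile, "--",
				"exec", pod, "--", "nslookup", name)
			if err := cmd.Run(); err != nil {
				log.Fatalf("%s could not resolve %s: %v", pod, name, err)
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}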

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-318791 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-318791 -- exec busybox-7b57f96db7-46crg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-318791 -- exec busybox-7b57f96db7-46crg -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-318791 -- exec busybox-7b57f96db7-4xl5q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-318791 -- exec busybox-7b57f96db7-4xl5q -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (45.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-318791 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-318791 -v=5 --alsologtostderr: (45.080891755s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.67s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-318791 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 cp testdata/cp-test.txt multinode-318791:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 cp multinode-318791:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1143731490/001/cp-test_multinode-318791.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 cp multinode-318791:/home/docker/cp-test.txt multinode-318791-m02:/home/docker/cp-test_multinode-318791_multinode-318791-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791-m02 "sudo cat /home/docker/cp-test_multinode-318791_multinode-318791-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 cp multinode-318791:/home/docker/cp-test.txt multinode-318791-m03:/home/docker/cp-test_multinode-318791_multinode-318791-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791-m03 "sudo cat /home/docker/cp-test_multinode-318791_multinode-318791-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 cp testdata/cp-test.txt multinode-318791-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 cp multinode-318791-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1143731490/001/cp-test_multinode-318791-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 cp multinode-318791-m02:/home/docker/cp-test.txt multinode-318791:/home/docker/cp-test_multinode-318791-m02_multinode-318791.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791 "sudo cat /home/docker/cp-test_multinode-318791-m02_multinode-318791.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 cp multinode-318791-m02:/home/docker/cp-test.txt multinode-318791-m03:/home/docker/cp-test_multinode-318791-m02_multinode-318791-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791-m03 "sudo cat /home/docker/cp-test_multinode-318791-m02_multinode-318791-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 cp testdata/cp-test.txt multinode-318791-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 cp multinode-318791-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1143731490/001/cp-test_multinode-318791-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 cp multinode-318791-m03:/home/docker/cp-test.txt multinode-318791:/home/docker/cp-test_multinode-318791-m03_multinode-318791.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791 "sudo cat /home/docker/cp-test_multinode-318791-m03_multinode-318791.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 cp multinode-318791-m03:/home/docker/cp-test.txt multinode-318791-m02:/home/docker/cp-test_multinode-318791-m03_multinode-318791-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 ssh -n multinode-318791-m02 "sudo cat /home/docker/cp-test_multinode-318791-m03_multinode-318791-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.39s)
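CopyFile repeats one pattern for every node pair: `minikube cp` a file in, then `minikube ssh -n <node> "sudo cat ..."` to read it back. A condensed sketch of that round-trip for a single node, with profile and paths taken from the log:

// sketch_cpcheck.go — copy a local file into a node with `minikube cp`,
// read it back over `minikube ssh`, and compare the contents.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const (
		profile = "multinode-318791" // from the log
		node    = "multinode-318791" // primary node; -m02/-m03 work the same way
		local   = "testdata/cp-test.txt"
		remote  = "/home/docker/cp-test.txt"
	)

	want, err := os.ReadFile(local)
	if err != nil {
		log.Fatalf("read %s: %v", local, err)
	}
	// minikube cp <src> <node>:<dst>, exactly as the helpers run it.
	if out, err := exec.Command("minikube", "-p", profile, "cp", local, node+":"+remote).CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}
	// Read the file back on the node and compare.
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("copied file does not match the original")
	}
	log.Println("round-trip OK")
}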

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-318791 node stop m03: (1.517760724s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-318791 status: exit status 7 (449.315059ms)

                                                
                                                
-- stdout --
	multinode-318791
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-318791-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-318791-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-318791 status --alsologtostderr: exit status 7 (440.179798ms)

                                                
                                                
-- stdout --
	multinode-318791
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-318791-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-318791-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:37:27.727117  104447 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:37:27.727348  104447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:37:27.727357  104447 out.go:374] Setting ErrFile to fd 2...
	I1017 19:37:27.727361  104447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:37:27.727567  104447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-74819/.minikube/bin
	I1017 19:37:27.727741  104447 out.go:368] Setting JSON to false
	I1017 19:37:27.727769  104447 mustload.go:65] Loading cluster: multinode-318791
	I1017 19:37:27.727885  104447 notify.go:220] Checking for updates...
	I1017 19:37:27.728127  104447 config.go:182] Loaded profile config "multinode-318791": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1017 19:37:27.728141  104447 status.go:174] checking status of multinode-318791 ...
	I1017 19:37:27.728553  104447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:37:27.728597  104447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:37:27.743224  104447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I1017 19:37:27.743815  104447 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:37:27.744400  104447 main.go:141] libmachine: Using API Version  1
	I1017 19:37:27.744444  104447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:37:27.744941  104447 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:37:27.745166  104447 main.go:141] libmachine: (multinode-318791) Calling .GetState
	I1017 19:37:27.747252  104447 status.go:371] multinode-318791 host status = "Running" (err=<nil>)
	I1017 19:37:27.747269  104447 host.go:66] Checking if "multinode-318791" exists ...
	I1017 19:37:27.747563  104447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:37:27.747619  104447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:37:27.762689  104447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33077
	I1017 19:37:27.763111  104447 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:37:27.763643  104447 main.go:141] libmachine: Using API Version  1
	I1017 19:37:27.763686  104447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:37:27.764074  104447 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:37:27.764268  104447 main.go:141] libmachine: (multinode-318791) Calling .GetIP
	I1017 19:37:27.767516  104447 main.go:141] libmachine: (multinode-318791) DBG | domain multinode-318791 has defined MAC address 52:54:00:aa:f4:b6 in network mk-multinode-318791
	I1017 19:37:27.768044  104447 main.go:141] libmachine: (multinode-318791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:f4:b6", ip: ""} in network mk-multinode-318791: {Iface:virbr1 ExpiryTime:2025-10-17 20:34:53 +0000 UTC Type:0 Mac:52:54:00:aa:f4:b6 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-318791 Clientid:01:52:54:00:aa:f4:b6}
	I1017 19:37:27.768075  104447 main.go:141] libmachine: (multinode-318791) DBG | domain multinode-318791 has defined IP address 192.168.39.64 and MAC address 52:54:00:aa:f4:b6 in network mk-multinode-318791
	I1017 19:37:27.768241  104447 host.go:66] Checking if "multinode-318791" exists ...
	I1017 19:37:27.768533  104447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:37:27.768591  104447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:37:27.782509  104447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35345
	I1017 19:37:27.783052  104447 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:37:27.783547  104447 main.go:141] libmachine: Using API Version  1
	I1017 19:37:27.783576  104447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:37:27.783907  104447 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:37:27.784160  104447 main.go:141] libmachine: (multinode-318791) Calling .DriverName
	I1017 19:37:27.784382  104447 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:37:27.784408  104447 main.go:141] libmachine: (multinode-318791) Calling .GetSSHHostname
	I1017 19:37:27.787695  104447 main.go:141] libmachine: (multinode-318791) DBG | domain multinode-318791 has defined MAC address 52:54:00:aa:f4:b6 in network mk-multinode-318791
	I1017 19:37:27.788212  104447 main.go:141] libmachine: (multinode-318791) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:f4:b6", ip: ""} in network mk-multinode-318791: {Iface:virbr1 ExpiryTime:2025-10-17 20:34:53 +0000 UTC Type:0 Mac:52:54:00:aa:f4:b6 Iaid: IPaddr:192.168.39.64 Prefix:24 Hostname:multinode-318791 Clientid:01:52:54:00:aa:f4:b6}
	I1017 19:37:27.788237  104447 main.go:141] libmachine: (multinode-318791) DBG | domain multinode-318791 has defined IP address 192.168.39.64 and MAC address 52:54:00:aa:f4:b6 in network mk-multinode-318791
	I1017 19:37:27.788450  104447 main.go:141] libmachine: (multinode-318791) Calling .GetSSHPort
	I1017 19:37:27.788650  104447 main.go:141] libmachine: (multinode-318791) Calling .GetSSHKeyPath
	I1017 19:37:27.788820  104447 main.go:141] libmachine: (multinode-318791) Calling .GetSSHUsername
	I1017 19:37:27.788969  104447 sshutil.go:53] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-74819/.minikube/machines/multinode-318791/id_rsa Username:docker}
	I1017 19:37:27.877851  104447 ssh_runner.go:195] Run: systemctl --version
	I1017 19:37:27.884311  104447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:37:27.900539  104447 kubeconfig.go:125] found "multinode-318791" server: "https://192.168.39.64:8443"
	I1017 19:37:27.900586  104447 api_server.go:166] Checking apiserver status ...
	I1017 19:37:27.900623  104447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:37:27.920991  104447 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	W1017 19:37:27.933128  104447 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1017 19:37:27.933181  104447 ssh_runner.go:195] Run: ls
	I1017 19:37:27.938430  104447 api_server.go:253] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
	I1017 19:37:27.946703  104447 api_server.go:279] https://192.168.39.64:8443/healthz returned 200:
	ok
	I1017 19:37:27.946728  104447 status.go:463] multinode-318791 apiserver status = Running (err=<nil>)
	I1017 19:37:27.946742  104447 status.go:176] multinode-318791 status: &{Name:multinode-318791 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:37:27.946774  104447 status.go:174] checking status of multinode-318791-m02 ...
	I1017 19:37:27.947095  104447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:37:27.947141  104447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:37:27.961362  104447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I1017 19:37:27.961888  104447 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:37:27.962402  104447 main.go:141] libmachine: Using API Version  1
	I1017 19:37:27.962428  104447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:37:27.962834  104447 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:37:27.963035  104447 main.go:141] libmachine: (multinode-318791-m02) Calling .GetState
	I1017 19:37:27.964902  104447 status.go:371] multinode-318791-m02 host status = "Running" (err=<nil>)
	I1017 19:37:27.964921  104447 host.go:66] Checking if "multinode-318791-m02" exists ...
	I1017 19:37:27.965268  104447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:37:27.965314  104447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:37:27.978692  104447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35129
	I1017 19:37:27.979344  104447 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:37:27.979843  104447 main.go:141] libmachine: Using API Version  1
	I1017 19:37:27.979859  104447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:37:27.980229  104447 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:37:27.980416  104447 main.go:141] libmachine: (multinode-318791-m02) Calling .GetIP
	I1017 19:37:27.983210  104447 main.go:141] libmachine: (multinode-318791-m02) DBG | domain multinode-318791-m02 has defined MAC address 52:54:00:52:d0:9f in network mk-multinode-318791
	I1017 19:37:27.983746  104447 main.go:141] libmachine: (multinode-318791-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:d0:9f", ip: ""} in network mk-multinode-318791: {Iface:virbr1 ExpiryTime:2025-10-17 20:35:52 +0000 UTC Type:0 Mac:52:54:00:52:d0:9f Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-318791-m02 Clientid:01:52:54:00:52:d0:9f}
	I1017 19:37:27.983777  104447 main.go:141] libmachine: (multinode-318791-m02) DBG | domain multinode-318791-m02 has defined IP address 192.168.39.221 and MAC address 52:54:00:52:d0:9f in network mk-multinode-318791
	I1017 19:37:27.983968  104447 host.go:66] Checking if "multinode-318791-m02" exists ...
	I1017 19:37:27.984404  104447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:37:27.984458  104447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:37:27.998729  104447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I1017 19:37:27.999347  104447 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:37:27.999926  104447 main.go:141] libmachine: Using API Version  1
	I1017 19:37:27.999958  104447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:37:28.000374  104447 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:37:28.000579  104447 main.go:141] libmachine: (multinode-318791-m02) Calling .DriverName
	I1017 19:37:28.000772  104447 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:37:28.000801  104447 main.go:141] libmachine: (multinode-318791-m02) Calling .GetSSHHostname
	I1017 19:37:28.003894  104447 main.go:141] libmachine: (multinode-318791-m02) DBG | domain multinode-318791-m02 has defined MAC address 52:54:00:52:d0:9f in network mk-multinode-318791
	I1017 19:37:28.004513  104447 main.go:141] libmachine: (multinode-318791-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:d0:9f", ip: ""} in network mk-multinode-318791: {Iface:virbr1 ExpiryTime:2025-10-17 20:35:52 +0000 UTC Type:0 Mac:52:54:00:52:d0:9f Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-318791-m02 Clientid:01:52:54:00:52:d0:9f}
	I1017 19:37:28.004543  104447 main.go:141] libmachine: (multinode-318791-m02) DBG | domain multinode-318791-m02 has defined IP address 192.168.39.221 and MAC address 52:54:00:52:d0:9f in network mk-multinode-318791
	I1017 19:37:28.004700  104447 main.go:141] libmachine: (multinode-318791-m02) Calling .GetSSHPort
	I1017 19:37:28.004868  104447 main.go:141] libmachine: (multinode-318791-m02) Calling .GetSSHKeyPath
	I1017 19:37:28.005041  104447 main.go:141] libmachine: (multinode-318791-m02) Calling .GetSSHUsername
	I1017 19:37:28.005191  104447 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21753-74819/.minikube/machines/multinode-318791-m02/id_rsa Username:docker}
	I1017 19:37:28.083335  104447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:37:28.101292  104447 status.go:176] multinode-318791-m02 status: &{Name:multinode-318791-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:37:28.101327  104447 status.go:174] checking status of multinode-318791-m03 ...
	I1017 19:37:28.101640  104447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:37:28.101689  104447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:37:28.116223  104447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43219
	I1017 19:37:28.116668  104447 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:37:28.117151  104447 main.go:141] libmachine: Using API Version  1
	I1017 19:37:28.117179  104447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:37:28.117519  104447 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:37:28.117730  104447 main.go:141] libmachine: (multinode-318791-m03) Calling .GetState
	I1017 19:37:28.119673  104447 status.go:371] multinode-318791-m03 host status = "Stopped" (err=<nil>)
	I1017 19:37:28.119688  104447 status.go:384] host is not running, skipping remaining checks
	I1017 19:37:28.119695  104447 status.go:176] multinode-318791-m03 status: &{Name:multinode-318791-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
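With m03 stopped, `minikube status` exits non-zero (exit status 7 here) while still printing per-node state, so the check has to read the output rather than rely on the exit code alone. Below is a sketch of doing the same outside the test harness, tolerating the non-zero exit and parsing the `host:` lines; the parsing rules are assumptions based only on the text layout shown above.

// sketch_status.go — run `minikube status`, keep the output even on a
// non-zero exit, and report the host state of each node.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile = "multinode-318791" // from the log
	out, err := exec.Command("minikube", "-p", profile, "status").CombinedOutput()
	if err != nil {
		if _, ok := err.(*exec.ExitError); !ok {
			log.Fatalf("could not run minikube status: %v", err)
		}
		// A non-zero exit is expected while a node is stopped; keep the output.
	}

	var name string
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		if !strings.Contains(line, ":") {
			name = line // bare lines are node names, e.g. multinode-318791-m03
			continue
		}
		if strings.HasPrefix(line, "host:") {
			state := strings.TrimSpace(strings.TrimPrefix(line, "host:"))
			fmt.Printf("%s host is %s\n", name, state)
		}
	}
}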

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (35.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-318791 node start m03 -v=5 --alsologtostderr: (34.708418348s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (35.36s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (282.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-318791
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-318791
E1017 19:38:35.027002   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:39:00.634189   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-318791: (2m45.766215503s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-318791 --wait=true -v=5 --alsologtostderr
E1017 19:40:57.561801   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-318791 --wait=true -v=5 --alsologtostderr: (1m56.828192317s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-318791
--- PASS: TestMultiNode/serial/RestartKeepsNodes (282.71s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-318791 node delete m03: (1.661085439s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.22s)
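Both DeleteNode and the later restart steps verify the cluster with the same kubectl go-template over the node `Ready` conditions. A small sketch that runs that template (copied from the log) and counts Ready nodes:

// sketch_ready.go — run kubectl with the go-template used above and count
// how many nodes report Ready=True.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// One status per node for the "Ready" condition, as in the log.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "--context", "multinode-318791",
		"get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		log.Fatalf("kubectl get nodes: %v", err)
	}
	ready, total := 0, 0
	for _, status := range strings.Fields(string(out)) {
		total++
		if status == "True" {
			ready++
		}
	}
	fmt.Printf("%d/%d nodes Ready\n", ready, total)
}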

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (152.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 stop
E1017 19:43:35.020185   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-318791 stop: (2m32.29643613s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-318791 status: exit status 7 (94.723214ms)

                                                
                                                
-- stdout --
	multinode-318791
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-318791-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-318791 status --alsologtostderr: exit status 7 (89.602823ms)

                                                
                                                
-- stdout --
	multinode-318791
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-318791-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 19:45:20.852144  107039 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:45:20.852433  107039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:45:20.852443  107039 out.go:374] Setting ErrFile to fd 2...
	I1017 19:45:20.852447  107039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:45:20.852700  107039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-74819/.minikube/bin
	I1017 19:45:20.852925  107039 out.go:368] Setting JSON to false
	I1017 19:45:20.852958  107039 mustload.go:65] Loading cluster: multinode-318791
	I1017 19:45:20.853024  107039 notify.go:220] Checking for updates...
	I1017 19:45:20.853437  107039 config.go:182] Loaded profile config "multinode-318791": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1017 19:45:20.853453  107039 status.go:174] checking status of multinode-318791 ...
	I1017 19:45:20.853878  107039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:45:20.853968  107039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:45:20.873313  107039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36373
	I1017 19:45:20.873793  107039 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:45:20.874337  107039 main.go:141] libmachine: Using API Version  1
	I1017 19:45:20.874366  107039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:45:20.874732  107039 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:45:20.874952  107039 main.go:141] libmachine: (multinode-318791) Calling .GetState
	I1017 19:45:20.876773  107039 status.go:371] multinode-318791 host status = "Stopped" (err=<nil>)
	I1017 19:45:20.876786  107039 status.go:384] host is not running, skipping remaining checks
	I1017 19:45:20.876801  107039 status.go:176] multinode-318791 status: &{Name:multinode-318791 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 19:45:20.876838  107039 status.go:174] checking status of multinode-318791-m02 ...
	I1017 19:45:20.877170  107039 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1017 19:45:20.877208  107039 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1017 19:45:20.890953  107039 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39117
	I1017 19:45:20.891406  107039 main.go:141] libmachine: () Calling .GetVersion
	I1017 19:45:20.891919  107039 main.go:141] libmachine: Using API Version  1
	I1017 19:45:20.891939  107039 main.go:141] libmachine: () Calling .SetConfigRaw
	I1017 19:45:20.892353  107039 main.go:141] libmachine: () Calling .GetMachineName
	I1017 19:45:20.892575  107039 main.go:141] libmachine: (multinode-318791-m02) Calling .GetState
	I1017 19:45:20.894277  107039 status.go:371] multinode-318791-m02 host status = "Stopped" (err=<nil>)
	I1017 19:45:20.894294  107039 status.go:384] host is not running, skipping remaining checks
	I1017 19:45:20.894301  107039 status.go:176] multinode-318791-m02 status: &{Name:multinode-318791-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (152.48s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (81.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-318791 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1017 19:45:57.561170   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 19:46:38.090802   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-318791 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m21.34078801s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-318791 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (81.90s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (44.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-318791
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-318791-m02 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-318791-m02 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: exit status 14 (64.953052ms)
-- stdout --
	* [multinode-318791-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-74819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-74819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	! Profile name 'multinode-318791-m02' is duplicated with machine name 'multinode-318791-m02' in profile 'multinode-318791'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-318791-m03 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-318791-m03 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (43.041974543s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-318791
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-318791: exit status 80 (240.111046ms)
-- stdout --
	* Adding node m03 to cluster multinode-318791 as [worker]
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-318791-m03 already exists in multinode-318791-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-318791-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.24s)
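
For readers reproducing this check by hand, the duplicate-name guard can be exercised with the same commands the test runs. This is a minimal sketch using only commands that appear in this log; the profile names mirror the ones above and are otherwise illustrative:

    $ minikube node list -p multinode-318791            # multinode-318791-m02 is already a machine in this profile
    $ minikube start -p multinode-318791-m02 --driver=kvm2 --container-runtime=containerd    # rejected: profile name collides with an existing machine name
    $ minikube start -p multinode-318791-m03 --driver=kvm2 --container-runtime=containerd    # a non-colliding name starts a separate single-node profile
    $ minikube delete -p multinode-318791-m03            # clean up the extra profile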

TestPreload (134.77s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-811593 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-811593 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m5.6092724s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-811593 image pull gcr.io/k8s-minikube/busybox
E1017 19:48:35.020321   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-811593 image pull gcr.io/k8s-minikube/busybox: (4.51166074s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-811593
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-811593: (6.985113592s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-811593 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-811593 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (56.54482177s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-811593 image list
helpers_test.go:175: Cleaning up "test-preload-811593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-811593
--- PASS: TestPreload (134.77s)
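
The preload workflow exercised above can be replayed manually. This is a sketch assembled only from the flags shown in this block; the profile name is illustrative:

    $ minikube start -p test-preload --memory=3072 --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.32.0
    $ minikube -p test-preload image pull gcr.io/k8s-minikube/busybox   # add an image that is not part of any preload
    $ minikube stop -p test-preload
    $ minikube start -p test-preload --memory=3072 --driver=kvm2 --container-runtime=containerd   # restart with the default (preloaded) images
    $ minikube -p test-preload image list                               # the previously pulled image should still be listed
    $ minikube delete -p test-preload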

TestScheduledStopUnix (113.21s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-135897 --memory=3072 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-135897 --memory=3072 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (41.453257549s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-135897 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-135897 -n scheduled-stop-135897
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-135897 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1017 19:50:25.172064   78783 retry.go:31] will retry after 111.964µs: open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/scheduled-stop-135897/pid: no such file or directory
I1017 19:50:25.173249   78783 retry.go:31] will retry after 128.183µs: open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/scheduled-stop-135897/pid: no such file or directory
I1017 19:50:25.174396   78783 retry.go:31] will retry after 114.073µs: open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/scheduled-stop-135897/pid: no such file or directory
I1017 19:50:25.175529   78783 retry.go:31] will retry after 458.695µs: open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/scheduled-stop-135897/pid: no such file or directory
I1017 19:50:25.176661   78783 retry.go:31] will retry after 473.895µs: open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/scheduled-stop-135897/pid: no such file or directory
I1017 19:50:25.177801   78783 retry.go:31] will retry after 748.161µs: open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/scheduled-stop-135897/pid: no such file or directory
I1017 19:50:25.178930   78783 retry.go:31] will retry after 571.633µs: open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/scheduled-stop-135897/pid: no such file or directory
I1017 19:50:25.180064   78783 retry.go:31] will retry after 1.906669ms: open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/scheduled-stop-135897/pid: no such file or directory
I1017 19:50:25.182282   78783 retry.go:31] will retry after 1.297181ms: open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/scheduled-stop-135897/pid: no such file or directory
I1017 19:50:25.184477   78783 retry.go:31] will retry after 5.68263ms: open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/scheduled-stop-135897/pid: no such file or directory
I1017 19:50:25.190664   78783 retry.go:31] will retry after 4.408473ms: open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/scheduled-stop-135897/pid: no such file or directory
I1017 19:50:25.195892   78783 retry.go:31] will retry after 10.851141ms: open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/scheduled-stop-135897/pid: no such file or directory
I1017 19:50:25.207123   78783 retry.go:31] will retry after 10.150809ms: open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/scheduled-stop-135897/pid: no such file or directory
I1017 19:50:25.218399   78783 retry.go:31] will retry after 11.664033ms: open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/scheduled-stop-135897/pid: no such file or directory
I1017 19:50:25.230697   78783 retry.go:31] will retry after 41.32957ms: open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/scheduled-stop-135897/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-135897 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-135897 -n scheduled-stop-135897
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-135897
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-135897 --schedule 15s
E1017 19:50:57.562396   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-135897
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-135897: exit status 7 (74.943456ms)
-- stdout --
	scheduled-stop-135897
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-135897 -n scheduled-stop-135897
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-135897 -n scheduled-stop-135897: exit status 7 (66.857404ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-135897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-135897
--- PASS: TestScheduledStopUnix (113.21s)
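
The scheduled-stop flow above boils down to a handful of commands (a sketch using only the flags shown in this block; the profile name is illustrative):

    $ minikube stop -p scheduled-stop --schedule 5m                     # arm a stop five minutes from now
    $ minikube status --format={{.TimeToStop}} -p scheduled-stop        # inspect the pending schedule
    $ minikube stop -p scheduled-stop --cancel-scheduled                # cancel it
    $ minikube stop -p scheduled-stop --schedule 15s                    # re-arm with a short delay; status later reports Stopped (exit status 7)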

TestRunningBinaryUpgrade (154.94s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
E1017 19:53:35.020476   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2243204240 start -p running-upgrade-880775 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2243204240 start -p running-upgrade-880775 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m48.499183043s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-880775 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1017 19:55:40.635534   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-880775 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (41.388185461s)
helpers_test.go:175: Cleaning up "running-upgrade-880775" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-880775
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-880775: (1.080038722s)
--- PASS: TestRunningBinaryUpgrade (154.94s)
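
The running-binary upgrade amounts to starting a cluster with an older minikube release and then re-running start on the same profile with the newer binary, as the two Done lines above show. A sketch; the old-release path is a placeholder for the downloaded v1.32.0 binary and the profile name is illustrative:

    $ /path/to/minikube-v1.32.0 start -p running-upgrade --memory=3072 --vm-driver=kvm2 --container-runtime=containerd
    $ out/minikube-linux-amd64 start -p running-upgrade --memory=3072 --driver=kvm2 --container-runtime=containerd   # new binary reuses and upgrades the running cluster
    $ out/minikube-linux-amd64 delete -p running-upgrade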

TestKubernetesUpgrade (144.86s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-789581 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-789581 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (42.288981008s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-789581
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-789581: (1.862815931s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-789581 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-789581 status --format={{.Host}}: exit status 7 (78.866072ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-789581 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-789581 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m4.970923088s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-789581 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-789581 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-789581 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: exit status 106 (90.883717ms)
-- stdout --
	* [kubernetes-upgrade-789581] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-74819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-74819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-789581
	    minikube start -p kubernetes-upgrade-789581 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7895812 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-789581 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-789581 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1017 19:55:57.562153   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-789581 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (34.249000955s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-789581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-789581
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-789581: (1.244753367s)
--- PASS: TestKubernetesUpgrade (144.86s)
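
The upgrade/downgrade behaviour verified above can be summarised as follows (a sketch using only flags shown in this block; the profile name is illustrative):

    $ minikube start -p k8s-upgrade --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=containerd
    $ minikube stop -p k8s-upgrade
    $ minikube start -p k8s-upgrade --memory=3072 --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=containerd   # in-place upgrade of the stopped cluster
    $ minikube start -p k8s-upgrade --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=containerd   # refused with K8S_DOWNGRADE_UNSUPPORTED; downgrading requires delete + recreate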

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-003594 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-003594 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: exit status 14 (87.070165ms)
-- stdout --
	* [NoKubernetes-003594] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-74819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-74819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
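
As the stderr above spells out, --no-kubernetes and --kubernetes-version are mutually exclusive; if a version is pinned in the global config it has to be unset first. A sketch (profile name illustrative):

    $ minikube config unset kubernetes-version
    $ minikube start -p nok8s --no-kubernetes --driver=kvm2 --container-runtime=containerd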

TestNoKubernetes/serial/StartWithK8s (85.85s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-003594 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-003594 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m25.498144868s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-003594 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (85.85s)

TestNetworkPlugins/group/false (3.2s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-680935 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-680935 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: exit status 14 (113.676357ms)
-- stdout --
	* [false-680935] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21753
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21753-74819/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-74819/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
-- /stdout --
** stderr ** 
	I1017 19:51:39.665647  111096 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:51:39.665901  111096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:51:39.665911  111096 out.go:374] Setting ErrFile to fd 2...
	I1017 19:51:39.665916  111096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:51:39.666140  111096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21753-74819/.minikube/bin
	I1017 19:51:39.666676  111096 out.go:368] Setting JSON to false
	I1017 19:51:39.667656  111096 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9240,"bootTime":1760721460,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1017 19:51:39.667759  111096 start.go:141] virtualization: kvm guest
	I1017 19:51:39.669721  111096 out.go:179] * [false-680935] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1017 19:51:39.670925  111096 notify.go:220] Checking for updates...
	I1017 19:51:39.670943  111096 out.go:179]   - MINIKUBE_LOCATION=21753
	I1017 19:51:39.672186  111096 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:51:39.673558  111096 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21753-74819/kubeconfig
	I1017 19:51:39.674801  111096 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21753-74819/.minikube
	I1017 19:51:39.676326  111096 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1017 19:51:39.677666  111096 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:51:39.679372  111096 config.go:182] Loaded profile config "NoKubernetes-003594": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1017 19:51:39.679525  111096 config.go:182] Loaded profile config "force-systemd-env-007257": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1017 19:51:39.679681  111096 config.go:182] Loaded profile config "offline-containerd-961222": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1017 19:51:39.679811  111096 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:51:39.716441  111096 out.go:179] * Using the kvm2 driver based on user configuration
	I1017 19:51:39.717769  111096 start.go:305] selected driver: kvm2
	I1017 19:51:39.717784  111096 start.go:925] validating driver "kvm2" against <nil>
	I1017 19:51:39.717795  111096 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:51:39.719793  111096 out.go:203] 
	W1017 19:51:39.720847  111096 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1017 19:51:39.721999  111096 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-680935 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-680935

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-680935

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-680935

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-680935

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-680935

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-680935

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-680935

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-680935

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-680935

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-680935

>>> host: /etc/nsswitch.conf:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: /etc/hosts:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: /etc/resolv.conf:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-680935

>>> host: crictl pods:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: crictl containers:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> k8s: describe netcat deployment:
error: context "false-680935" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-680935" does not exist

>>> k8s: netcat logs:
error: context "false-680935" does not exist

>>> k8s: describe coredns deployment:
error: context "false-680935" does not exist

>>> k8s: describe coredns pods:
error: context "false-680935" does not exist

>>> k8s: coredns logs:
error: context "false-680935" does not exist

>>> k8s: describe api server pod(s):
error: context "false-680935" does not exist

>>> k8s: api server logs:
error: context "false-680935" does not exist

>>> host: /etc/cni:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: ip a s:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: ip r s:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: iptables-save:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: iptables table nat:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> k8s: describe kube-proxy daemon set:
error: context "false-680935" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-680935" does not exist

>>> k8s: kube-proxy logs:
error: context "false-680935" does not exist

>>> host: kubelet daemon status:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: kubelet daemon config:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> k8s: kubelet logs:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-680935

>>> host: docker daemon status:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: docker daemon config:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: /etc/docker/daemon.json:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: docker system info:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: cri-docker daemon status:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: cri-docker daemon config:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: cri-dockerd version:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: containerd daemon status:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: containerd daemon config:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: /etc/containerd/config.toml:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: containerd config dump:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: crio daemon status:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: crio daemon config:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: /etc/crio:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"

>>> host: crio config:
* Profile "false-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-680935"
----------------------- debugLogs end: false-680935 [took: 2.943952531s] --------------------------------
helpers_test.go:175: Cleaning up "false-680935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-680935
--- PASS: TestNetworkPlugins/group/false (3.20s)
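
The failure mode checked here is that the containerd runtime requires a CNI, so --cni=false is rejected with MK_USAGE before any VM is created. A working invocation picks a real CNI, as the later plugin groups in this report do (a sketch; profile names are illustrative):

    $ minikube start -p cni-kindnet --memory=3072 --cni=kindnet --driver=kvm2 --container-runtime=containerd
    $ minikube start -p cni-calico --memory=3072 --cni=calico --driver=kvm2 --container-runtime=containerd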

TestNoKubernetes/serial/StartWithStopK8s (67.43s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-003594 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-003594 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m6.278603336s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-003594 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-003594 status -o json: exit status 2 (254.314622ms)
-- stdout --
	{"Name":"NoKubernetes-003594","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-003594
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (67.43s)

TestNoKubernetes/serial/Start (58.53s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-003594 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-003594 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (58.529247393s)
--- PASS: TestNoKubernetes/serial/Start (58.53s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-003594 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-003594 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.834417ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
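
A profile started with --no-kubernetes reports the host as Running while kubelet and apiserver stay Stopped; the two checks used above are (a sketch reusing the profile name from this group):

    $ minikube -p NoKubernetes-003594 status -o json                                           # expect Host "Running", Kubelet and APIServer "Stopped"
    $ minikube ssh -p NoKubernetes-003594 "sudo systemctl is-active --quiet service kubelet"   # non-zero exit confirms kubelet is not active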

TestNoKubernetes/serial/ProfileList (8.06s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (4.176285254s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.886391046s)
--- PASS: TestNoKubernetes/serial/ProfileList (8.06s)

TestNoKubernetes/serial/Stop (1.52s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-003594
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-003594: (1.520656684s)
--- PASS: TestNoKubernetes/serial/Stop (1.52s)

TestNoKubernetes/serial/StartNoArgs (24.11s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-003594 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-003594 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (24.107010738s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (24.11s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-003594 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-003594 "sudo systemctl is-active --quiet service kubelet": exit status 1 (231.569204ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestStoppedBinaryUpgrade/Setup (3.72s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.72s)

TestStoppedBinaryUpgrade/Upgrade (101.15s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3666104726 start -p stopped-upgrade-376324 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3666104726 start -p stopped-upgrade-376324 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m2.215231963s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3666104726 -p stopped-upgrade-376324 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3666104726 -p stopped-upgrade-376324 stop: (1.354525181s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-376324 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-376324 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (37.575961912s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (101.15s)
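
The stopped-binary upgrade differs from the running-binary case only in that the old cluster is stopped before the new binary takes over. A sketch; the old-release path is a placeholder for the downloaded v1.32.0 binary and the profile name is illustrative:

    $ /path/to/minikube-v1.32.0 start -p stopped-upgrade --memory=3072 --vm-driver=kvm2 --container-runtime=containerd
    $ /path/to/minikube-v1.32.0 -p stopped-upgrade stop
    $ out/minikube-linux-amd64 start -p stopped-upgrade --memory=3072 --driver=kvm2 --container-runtime=containerd
    $ out/minikube-linux-amd64 logs -p stopped-upgrade      # exercised separately by TestStoppedBinaryUpgrade/MinikubeLogs below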

TestPause/serial/Start (55.96s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-415204 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-415204 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (55.96483254s)
--- PASS: TestPause/serial/Start (55.96s)
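
The pause group starts a cluster with add-ons disabled and full readiness waits, then layers a second start and a pause on top (a sketch using only commands that appear in this report; profile name illustrative):

    $ minikube start -p pause-demo --memory=3072 --install-addons=false --wait=all --driver=kvm2 --container-runtime=containerd
    $ minikube start -p pause-demo --driver=kvm2 --container-runtime=containerd   # second start should not reconfigure the running cluster
    $ minikube pause -p pause-demo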

TestNetworkPlugins/group/auto/Start (86.56s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-680935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-680935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m26.555450951s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.56s)

TestPause/serial/SecondStartNoReconfiguration (52.44s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-415204 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-415204 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (52.413088298s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (52.44s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.49s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-376324
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-376324: (1.485209991s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.49s)

TestNetworkPlugins/group/kindnet/Start (65.6s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-680935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-680935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m5.600448909s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.60s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (98.93s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-680935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-680935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m38.926693483s)
--- PASS: TestNetworkPlugins/group/calico/Start (98.93s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-680935 "pgrep -a kubelet"
I1017 19:57:57.389677   78783 config.go:182] Loaded profile config "auto-680935": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)
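Note: the KubeletFlags checks in this report only capture the kubelet command line over ssh. A rough Go sketch is below, assuming the binary and the auto-680935 profile from the log; the substring it asserts on ("containerd") is an illustrative guess, since the report does not show what the test actually verifies.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Capture the kubelet command line inside the VM, as the test does.
	out, err := exec.Command("out/minikube-linux-amd64",
		"ssh", "-p", "auto-680935", "pgrep -a kubelet").CombinedOutput()
	if err != nil {
		log.Fatalf("minikube ssh failed: %v\n%s", err, out)
	}
	cmdline := strings.TrimSpace(string(out))
	fmt.Println("kubelet command line:", cmdline)
	// Illustrative assertion only; the real test's expectations are not shown here.
	if !strings.Contains(cmdline, "containerd") {
		log.Fatal("kubelet command line does not mention containerd")
	}
}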

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-680935 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jj6tm" [305b730c-1bc8-4833-a545-d48222fd7de7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jj6tm" [305b730c-1bc8-4833-a545-d48222fd7de7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004653288s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.33s)
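Note: each NetCatPod step replaces the netcat deployment from testdata/netcat-deployment.yaml and then waits up to 15m for pods labelled app=netcat to come up. A simplified Go sketch of that wait follows, assuming kubectl on PATH and the auto-680935 context from the log; it watches pod phases only (the suite's helper also tracks readiness) and the 5-second poll interval is an assumption.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// allRunning reports whether every pod matching the label is in phase Running.
func allRunning(context, label string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"get", "pods", "-n", "default", "-l", label,
		"-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false, err
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range phases {
		if p != "Running" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	deadline := time.Now().Add(15 * time.Minute) // timeout taken from the log
	for time.Now().Before(deadline) {
		ok, err := allRunning("auto-680935", "app=netcat")
		if err != nil {
			log.Printf("kubectl error, retrying: %v", err)
		} else if ok {
			fmt.Println("app=netcat is Running")
			return
		}
		time.Sleep(5 * time.Second) // poll interval is an assumption
	}
	log.Fatal("timed out waiting for app=netcat")
}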

                                                
                                    
TestPause/serial/Pause (1.2s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-415204 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-415204 --alsologtostderr -v=5: (1.196603672s)
--- PASS: TestPause/serial/Pause (1.20s)

                                                
                                    
TestPause/serial/VerifyStatus (0.28s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-415204 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-415204 --output=json --layout=cluster: exit status 2 (284.063988ms)

                                                
                                                
-- stdout --
	{"Name":"pause-415204","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-415204","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
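Note: the stdout above is the cluster-layout status document; exit status 2 simply encodes the paused state. A small Go sketch that decodes it is below, assuming the binary and profile from the log; the structs are trimmed to the fields visible here and are not minikube's own types.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// component and clusterStatus mirror the fields visible in the stdout above.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// A paused cluster makes the command exit with status 2, so tolerate
	// *exec.ExitError and decode whatever landed on stdout.
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"-p", "pause-415204", "--output=json", "--layout=cluster").Output()
	if err != nil {
		if _, ok := err.(*exec.ExitError); !ok {
			log.Fatal(err)
		}
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("decode status: %v", err)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, name, c.StatusName)
		}
	}
}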

                                                
                                    
TestPause/serial/Unpause (1.18s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-415204 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-415204 --alsologtostderr -v=5: (1.18196126s)
--- PASS: TestPause/serial/Unpause (1.18s)

                                                
                                    
TestPause/serial/PauseAgain (1.04s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-415204 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-415204 --alsologtostderr -v=5: (1.042578004s)
--- PASS: TestPause/serial/PauseAgain (1.04s)

                                                
                                    
TestPause/serial/DeletePaused (1.33s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-415204 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-415204 --alsologtostderr -v=5: (1.329181822s)
--- PASS: TestPause/serial/DeletePaused (1.33s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.67s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.67s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (85.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-680935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-680935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m25.418306885s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (85.42s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-680935 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-680935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-680935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
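Note: the DNS, Localhost and HairPin steps above are three one-shot probes executed inside the netcat deployment. A compact Go sketch that reruns them is below; the kubectl arguments are copied verbatim from the log and only the wrapper code is new.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// check runs one kubectl probe and aborts if it fails.
func check(name string, args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s probe failed: %v\n%s", name, err, out)
	}
	fmt.Printf("%s ok\n", name)
}

func main() {
	base := []string{"--context", "auto-680935", "exec", "deployment/netcat", "--"}
	// DNS: resolve the in-cluster service name.
	check("dns", append(base, "nslookup", "kubernetes.default")...)
	// Localhost: the pod reaches its own port on 127.0.0.1.
	check("localhost", append(base, "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")...)
	// HairPin: the pod reaches itself through the netcat service name.
	check("hairpin", append(base, "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")...)
}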

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (100.95s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-680935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
E1017 19:58:35.020498   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-680935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m40.952943993s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (100.95s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-n6jsb" [e687d48e-9530-40a5-bc72-21bb2056609e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005663585s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-680935 "pgrep -a kubelet"
I1017 19:58:42.836093   78783 config.go:182] Loaded profile config "kindnet-680935": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-680935 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gl5r2" [bf17dd89-d1a1-4b5a-ac33-ce1ad8d46d54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gl5r2" [bf17dd89-d1a1-4b5a-ac33-ce1ad8d46d54] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00680017s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-680935 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-680935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-680935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (92.52s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-680935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-680935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m32.520326912s)
--- PASS: TestNetworkPlugins/group/flannel/Start (92.52s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-gdxxt" [f938e6ff-1200-4281-8145-e23ce4de2d44] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-gdxxt" [f938e6ff-1200-4281-8145-e23ce4de2d44] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.012305636s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-680935 "pgrep -a kubelet"
I1017 19:59:20.207078   78783 config.go:182] Loaded profile config "calico-680935": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-680935 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b9bpp" [7f2adac6-50c8-4fb2-8bb9-b6209e1607c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b9bpp" [7f2adac6-50c8-4fb2-8bb9-b6209e1607c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005963791s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-680935 "pgrep -a kubelet"
I1017 19:59:29.619950   78783 config.go:182] Loaded profile config "custom-flannel-680935": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-680935 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gbmjh" [8ae5e16b-1aac-48eb-af47-0d8a0a78271c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gbmjh" [8ae5e16b-1aac-48eb-af47-0d8a0a78271c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.079281433s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-680935 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-680935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-680935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-680935 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-680935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-680935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (96.37s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-680935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-680935 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false: (1m36.372813351s)
--- PASS: TestNetworkPlugins/group/bridge/Start (96.37s)
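Note: the network-plugin Start runs above differ only in the CNI-related flag on an otherwise identical start command. A Go sketch of that matrix follows, with the flags copied from the log; running it serially would take a long time, and it is illustration only, not the suite's own driver.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	variants := []struct {
		name string
		cni  []string
	}{
		{"auto", nil},
		{"kindnet", []string{"--cni=kindnet"}},
		{"calico", []string{"--cni=calico"}},
		{"custom-flannel", []string{"--cni=testdata/kube-flannel.yaml"}},
		{"enable-default-cni", []string{"--enable-default-cni=true"}},
		{"flannel", []string{"--cni=flannel"}},
		{"bridge", []string{"--cni=bridge"}},
	}
	for _, v := range variants {
		// Shared flags copied from the Start invocations in this report.
		args := []string{"start", "-p", v.name + "-680935",
			"--memory=3072", "--alsologtostderr", "--wait=true",
			"--wait-timeout=15m", "--driver=kvm2",
			"--container-runtime=containerd", "--auto-update-drivers=false"}
		args = append(args, v.cni...)
		cmd := exec.Command("out/minikube-linux-amd64", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("start %s: %v", v.name, err)
		}
	}
}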

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (110.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-703711 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-703711 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m50.903731146s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (110.90s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-680935 "pgrep -a kubelet"
I1017 20:00:09.713693   78783 config.go:182] Loaded profile config "enable-default-cni-680935": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-680935 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2d99b" [7c87933e-ee56-4755-a07d-12fd280730eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2d99b" [7c87933e-ee56-4755-a07d-12fd280730eb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.262077141s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.49s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-680935 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-680935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-680935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (102.36s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-551369 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-551369 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m42.358481445s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (102.36s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-2dlpq" [860f9f11-345d-43ce-8cd3-04043dcfbf0d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004103077s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-680935 "pgrep -a kubelet"
I1017 20:00:52.103197   78783 config.go:182] Loaded profile config "flannel-680935": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-680935 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4fw88" [38b3e436-f45a-45ce-972a-2a832a5a0f04] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4fw88" [38b3e436-f45a-45ce-972a-2a832a5a0f04] Running
E1017 20:00:57.560426   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.005901908s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-680935 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-680935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-680935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (90.21s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-650027 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-650027 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m30.209676491s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (90.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-680935 "pgrep -a kubelet"
I1017 20:01:27.152926   78783 config.go:182] Loaded profile config "bridge-680935": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-680935 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qzd5h" [c0aff545-21ec-4efa-a7d8-aab59c491b56] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qzd5h" [c0aff545-21ec-4efa-a7d8-aab59c491b56] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005281805s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-680935 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-680935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-680935 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
E1017 20:05:09.934403   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/enable-default-cni-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:09.940842   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/enable-default-cni-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:09.952356   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/enable-default-cni-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:09.973828   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/enable-default-cni-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:10.015302   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/enable-default-cni-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:10.096802   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/enable-default-cni-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:10.258541   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/enable-default-cni-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:10.580217   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/enable-default-cni-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:10.808646   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/custom-flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:11.222224   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/enable-default-cni-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:12.504238   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/enable-default-cni-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (12.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-703711 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [166f4e94-807d-4702-8059-0b95cf21d44c] Pending
helpers_test.go:352: "busybox" [166f4e94-807d-4702-8059-0b95cf21d44c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [166f4e94-807d-4702-8059-0b95cf21d44c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.004136367s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-703711 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-201174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-201174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m28.661238051s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.66s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-703711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-703711 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.260222666s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-703711 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.35s)
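Note: EnableAddonWhileActive enables metrics-server with overridden image and registry values and then inspects the resulting deployment. A short Go sketch of the same two commands follows, assuming the binary, profile name and override strings shown above; it is an illustration, not the test's own code.

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command, streams its output, and aborts on failure.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	const profile = "old-k8s-version-703711"
	// Enable the addon with the image and registry overrides from the log.
	run("out/minikube-linux-amd64", "addons", "enable", "metrics-server",
		"-p", profile,
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain")
	// Confirm the deployment object was created in kube-system.
	run("kubectl", "--context", profile,
		"describe", "deploy/metrics-server", "-n", "kube-system")
}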

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (88.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-703711 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-703711 --alsologtostderr -v=3: (1m28.474502942s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (88.47s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (13.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-551369 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3f7b4273-bbe2-437e-8dc5-507732060c7f] Pending
helpers_test.go:352: "busybox" [3f7b4273-bbe2-437e-8dc5-507732060c7f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3f7b4273-bbe2-437e-8dc5-507732060c7f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 13.005531709s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-551369 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (13.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-551369 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-551369 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.017722029s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-551369 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (85.31s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-551369 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-551369 --alsologtostderr -v=3: (1m25.314846445s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (85.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-650027 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1f487c84-ed10-45fa-9a4c-122c38f78f52] Pending
helpers_test.go:352: "busybox" [1f487c84-ed10-45fa-9a4c-122c38f78f52] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1f487c84-ed10-45fa-9a4c-122c38f78f52] Running
E1017 20:02:57.699015   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/auto-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:02:57.705387   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/auto-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:02:57.716758   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/auto-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:02:57.738148   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/auto-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:02:57.780087   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/auto-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:02:57.862066   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/auto-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:02:58.023642   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/auto-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:02:58.345907   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/auto-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:02:58.988229   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/auto-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:00.269778   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/auto-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003903658s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-650027 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-650027 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-650027 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.02225325s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-650027 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (88.04s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-650027 --alsologtostderr -v=3
E1017 20:03:02.831455   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/auto-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:07.953767   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/auto-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:18.092473   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:18.196131   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/auto-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-650027 --alsologtostderr -v=3: (1m28.035864044s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (88.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-201174 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [db9903f5-088a-42b1-9170-42887a7c2f12] Pending
helpers_test.go:352: "busybox" [db9903f5-088a-42b1-9170-42887a7c2f12] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [db9903f5-088a-42b1-9170-42887a7c2f12] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.005983808s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-201174 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-703711 -n old-k8s-version-703711
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-703711 -n old-k8s-version-703711: exit status 7 (69.347053ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-703711 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
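Note: for a stopped host, "minikube status" exits with code 7 (and with code 2 for the paused cluster seen earlier); the test treats that as acceptable and reads the state from stdout. A Go sketch of tolerating those exit codes follows, assuming the binary and profile from the log; the exact set of codes accepted here is an assumption, not minikube policy.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// hostStatus returns the {{.Host}} field and the command's exit code.
func hostStatus(profile string) (string, int, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	state := strings.TrimSpace(string(out))
	if err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			// A non-zero exit encodes the cluster state; stdout still names it.
			return state, ee.ExitCode(), nil
		}
		return "", 0, err
	}
	return state, 0, nil
}

func main() {
	state, code, err := hostStatus("old-k8s-version-703711")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host=%q exit=%d\n", state, code)
	// Exit 7 was returned for the stopped host above and 2 for the paused
	// cluster earlier; rejecting other non-zero codes is an assumption.
	if code != 0 && code != 2 && code != 7 {
		log.Fatalf("unexpected status exit code %d", code)
	}
}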

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (40.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-703711 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.28.0
E1017 20:03:35.020305   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/functional-088611/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-703711 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.28.0: (40.31783548s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-703711 -n old-k8s-version-703711
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (40.61s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-201174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-201174 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (83.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-201174 --alsologtostderr -v=3
E1017 20:03:36.578732   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/kindnet-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:36.585125   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/kindnet-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:36.596515   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/kindnet-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:36.617938   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/kindnet-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:36.659934   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/kindnet-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:36.741577   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/kindnet-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:36.903201   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/kindnet-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:37.225291   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/kindnet-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:37.867048   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/kindnet-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:38.677653   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/auto-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:39.148406   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/kindnet-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:41.710584   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/kindnet-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:46.832076   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/kindnet-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:03:57.073451   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/kindnet-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-201174 --alsologtostderr -v=3: (1m23.01096779s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (83.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-551369 -n no-preload-551369
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-551369 -n no-preload-551369: exit status 7 (89.489785ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-551369 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (43.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-551369 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-551369 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1: (43.570517728s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-551369 -n no-preload-551369
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (43.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7lf9l" [7bd1b422-a093-4401-ba2b-c434a41b38b0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1017 20:04:13.958024   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/calico-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:13.964486   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/calico-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:13.975904   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/calico-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:13.997294   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/calico-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:14.038783   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/calico-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:14.120333   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/calico-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:14.281875   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/calico-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:14.603613   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/calico-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:15.245457   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/calico-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:16.527634   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/calico-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:17.554807   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/kindnet-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7lf9l" [7bd1b422-a093-4401-ba2b-c434a41b38b0] Running
E1017 20:04:19.089602   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/calico-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:19.639986   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/auto-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.027807026s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.03s)
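The UserAppExistsAfterStop and AddonExistsAfterStop entries all wait for pods matching a label selector to become healthy. A minimal Go sketch of such a wait is shown below, assuming a plain kubectl poll; the helper name waitForLabel, the two-second poll interval, and the jsonpath-based phase check are assumptions for the example, and the real helpers_test.go helper also verifies readiness ("healthy within"), so a phase-only check is a simplification. Context, namespace, selector, and timeout are taken from the log above.

// Sketch only: poll kubectl until a pod matching the selector reports phase Running.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func waitForLabel(kubeContext, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-n", namespace, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for pods matching %q in namespace %q", selector, namespace)
}

func main() {
	if err := waitForLabel("old-k8s-version-703711", "kubernetes-dashboard",
		"k8s-app=kubernetes-dashboard", 9*time.Minute); err != nil {
		log.Fatal(err)
	}
	log.Println("kubernetes-dashboard pod is Running")
}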

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7lf9l" [7bd1b422-a093-4401-ba2b-c434a41b38b0] Running
E1017 20:04:24.211810   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/calico-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004594422s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-703711 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-703711 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
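The VerifyKubernetesImages steps list the profile's images as JSON and report any that are not part of a stock minikube deployment. A minimal Go sketch of that check follows, assuming a plain text scan of the JSON output; the two image names treated as "non-minikube" are simply the ones this log reports, not minikube's actual detection logic.

// Sketch only: run "image list --format=json" and flag images deployed by the tests.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "old-k8s-version-703711",
		"image", "list", "--format=json").Output()
	if err != nil {
		log.Fatalf("image list failed: %v", err)
	}
	if !json.Valid(out) {
		log.Fatalf("image list did not produce valid JSON: %s", out)
	}
	// Images pulled by the test workloads themselves rather than by minikube.
	for _, extra := range []string{"gcr.io/k8s-minikube/busybox", "kindest/kindnetd"} {
		if strings.Contains(string(out), extra) {
			fmt.Printf("Found non-minikube image: %s\n", extra)
		}
	}
}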

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-703711 --alsologtostderr -v=1
E1017 20:04:29.831419   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/custom-flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:29.837859   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/custom-flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:29.849372   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/custom-flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:29.870834   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/custom-flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:29.912369   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/custom-flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:29.994094   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/custom-flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:30.155449   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/custom-flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-703711 -n old-k8s-version-703711
E1017 20:04:30.477495   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/custom-flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-703711 -n old-k8s-version-703711: exit status 2 (268.000391ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-703711 -n old-k8s-version-703711
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-703711 -n old-k8s-version-703711: exit status 2 (256.658448ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-703711 --alsologtostderr -v=1
E1017 20:04:31.119413   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/custom-flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-703711 -n old-k8s-version-703711
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-703711 -n old-k8s-version-703711
E1017 20:04:32.401431   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/custom-flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.10s)
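The Pause entries follow one more recurring pattern: pause the profile, confirm the reported component states (APIServer "Paused", Kubelet "Stopped") while "status" exits non-zero (exit status 2 in the log, again "may be ok"), then unpause. A minimal Go sketch of that cycle is below, using the binary path, profile, and flags from the log; the componentState and run helpers are names introduced for this example, and this is not the actual test code.

// Sketch only: pause, verify reported states while paused, then unpause.
package main

import (
	"log"
	"os/exec"
	"strings"
)

// componentState returns one status field, ignoring the non-zero exit code that
// minikube reports while a component is paused or stopped.
func componentState(profile, field string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func run(args ...string) {
	if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
}

func main() {
	profile := "old-k8s-version-703711"

	run("pause", "-p", profile, "--alsologtostderr", "-v=1")
	api, kubelet := componentState(profile, "APIServer"), componentState(profile, "Kubelet")
	if api != "Paused" || kubelet != "Stopped" {
		log.Fatalf("unexpected paused state: APIServer=%q Kubelet=%q", api, kubelet)
	}
	run("unpause", "-p", profile, "--alsologtostderr", "-v=1")
	log.Println("pause/unpause cycle verified")
}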

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-650027 -n embed-certs-650027
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-650027 -n embed-certs-650027: exit status 7 (73.02869ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-650027 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (44.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-650027 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-650027 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1: (43.858622177s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-650027 -n embed-certs-650027
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (64.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-080593 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1
E1017 20:04:34.454150   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/calico-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:34.962829   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/custom-flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:40.084923   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/custom-flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-080593 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m4.388396163s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (64.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xxsgx" [c8318838-d7b3-467b-ad02-596fd92298ad] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xxsgx" [c8318838-d7b3-467b-ad02-596fd92298ad] Running
E1017 20:04:50.326497   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/custom-flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.005336986s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xxsgx" [c8318838-d7b3-467b-ad02-596fd92298ad] Running
E1017 20:04:54.935902   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/calico-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:04:58.516784   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/kindnet-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004625175s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-551369 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-551369 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-551369 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-551369 --alsologtostderr -v=1: (1.053896016s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-551369 -n no-preload-551369
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-551369 -n no-preload-551369: exit status 2 (324.2231ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-551369 -n no-preload-551369
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-551369 -n no-preload-551369: exit status 2 (305.092935ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-551369 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-551369 -n no-preload-551369
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-551369 -n no-preload-551369
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-201174 -n default-k8s-diff-port-201174
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-201174 -n default-k8s-diff-port-201174: exit status 7 (97.919526ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-201174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-201174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-201174 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1: (52.466253216s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-201174 -n default-k8s-diff-port-201174
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.79s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8dt62" [f8ba59da-7e63-413b-a918-cda9bd7a525a] Running
E1017 20:05:15.066530   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/enable-default-cni-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013768076s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8dt62" [f8ba59da-7e63-413b-a918-cda9bd7a525a] Running
E1017 20:05:20.188414   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/enable-default-cni-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005614015s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-650027 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-650027 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-650027 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-650027 -n embed-certs-650027
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-650027 -n embed-certs-650027: exit status 2 (282.607486ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-650027 -n embed-certs-650027
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-650027 -n embed-certs-650027: exit status 2 (293.633154ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-650027 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-650027 -n embed-certs-650027
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-650027 -n embed-certs-650027
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-080593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-080593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.18664934s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-080593 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-080593 --alsologtostderr -v=3: (2.075270002s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-080593 -n newest-cni-080593
E1017 20:05:41.561553   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/auto-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-080593 -n newest-cni-080593: exit status 7 (78.061684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-080593 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (33.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-080593 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1
E1017 20:05:45.864757   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:45.871139   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:45.882537   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:45.904001   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:45.945512   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:46.027028   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:46.188615   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:46.510073   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:47.152356   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:48.433831   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:50.912844   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/enable-default-cni-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:50.995402   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:51.770902   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/custom-flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-080593 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --auto-update-drivers=false --kubernetes-version=v1.34.1: (33.000142244s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-080593 -n newest-cni-080593
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.39s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gl9xw" [babb0881-33fe-4d89-8818-dac0e0e5e766] Running
E1017 20:05:56.117683   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:57.561036   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/addons-574638/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005966704s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gl9xw" [babb0881-33fe-4d89-8818-dac0e0e5e766] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005464131s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-201174 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-201174 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-201174 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-201174 -n default-k8s-diff-port-201174
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-201174 -n default-k8s-diff-port-201174: exit status 2 (303.675191ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-201174 -n default-k8s-diff-port-201174
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-201174 -n default-k8s-diff-port-201174: exit status 2 (272.538706ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-201174 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-201174 -n default-k8s-diff-port-201174
E1017 20:06:06.360025   78783 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21753-74819/.minikube/profiles/flannel-680935/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-201174 -n default-k8s-diff-port-201174
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-080593 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-080593 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-080593 -n newest-cni-080593
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-080593 -n newest-cni-080593: exit status 2 (248.983957ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-080593 -n newest-cni-080593
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-080593 -n newest-cni-080593: exit status 2 (246.996838ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-080593 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-080593 -n newest-cni-080593
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-080593 -n newest-cni-080593
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.80s)

                                                
                                    

Test skip (39/330)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
121 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
258 TestNetworkPlugins/group/kubenet 3.06
267 TestNetworkPlugins/group/cilium 3.84
282 TestStartStop/group/disable-driver-mounts 0.17
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-680935 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-680935

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-680935

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-680935

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-680935

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-680935

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-680935

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-680935

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-680935

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-680935

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-680935

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-680935

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-680935" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-680935" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-680935

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-680935"

                                                
                                                
----------------------- debugLogs end: kubenet-680935 [took: 2.904582216s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-680935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-680935
--- SKIP: TestNetworkPlugins/group/kubenet (3.06s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-680935 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-680935

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-680935

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-680935

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-680935

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-680935

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-680935

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-680935

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-680935

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-680935

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-680935

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-680935

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-680935" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-680935

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-680935

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-680935

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-680935

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-680935" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-680935" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-680935

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-680935" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-680935"

                                                
                                                
----------------------- debugLogs end: cilium-680935 [took: 3.684462708s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-680935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-680935
--- SKIP: TestNetworkPlugins/group/cilium (3.84s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-810045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-810045
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    