Test Report: Docker_Linux_containerd 20109

a80036b9799ef97ff87d49db0998430356d1f02a:2025-01-20:37996

Failed tests (4/330)

Order  Failed test  Duration (s)
183 TestJSONOutput/start/Command 2400.01
189 TestJSONOutput/pause/Command 0
195 TestJSONOutput/unpause/Command 0
201 TestJSONOutput/stop/Command 0
TestJSONOutput/start/Command (2400.01s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-324440 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0120 15:20:26.230990  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:21:05.156183  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:21:32.860866  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:25:26.230847  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:26:05.152581  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:26:49.300342  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:30:26.231417  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:31:05.159367  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:32:28.222690  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:35:26.230735  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:36:05.156206  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:40:26.230475  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:41:05.156126  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:43:29.304561  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:45:26.230942  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:46:05.156265  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:49:08.224170  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:50:26.230679  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:51:05.156206  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:55:26.231495  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:56:05.156082  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-324440 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: signal: killed (40m0.007328464s)

-- stdout --
	{"specversion":"1.0","id":"c77f41f0-2e3e-480f-a060-e32f6e9adcd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-324440] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9623dae-f010-4478-b5ff-50a03b73f0d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20109"}}
	{"specversion":"1.0","id":"1f8a8286-7399-40ea-810a-610b6ea94b51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"676e9a58-0d3b-4dde-87cb-ce4d6bdadba6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20109-341858/kubeconfig"}}
	{"specversion":"1.0","id":"ea51c37e-a276-44fe-9a96-ecae9e943618","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-341858/.minikube"}}
	{"specversion":"1.0","id":"5a6a696c-040c-46a9-8666-0cc6847f8de3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8a79471a-3f16-4335-bb93-62eeac09ba79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d649b267-cca4-4546-a08f-972f81aed9cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4978989-953c-45ee-bca0-118671ecfb5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"8251e453-c753-49f1-adfb-907520bfc9c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-324440\" primary control-plane node in \"json-output-324440\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"31e589a5-36ba-42cf-8d06-1e5ac66b6c25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fba16152-55f4-4880-b4df-36c6a6e7cb24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2200MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"87895d4c-bc8b-48ab-ac03-d546af727787","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"11","message":"Preparing Kubernetes v1.32.0 on containerd 1.7.24 ...","name":"Preparing Kubernetes","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d64e29f-6bed-4e31-ba6f-a3a698c1eaf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"51149ed6-58b7-4ba3-9296-8322f18ef97a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"b20d0168-4224-43cb-8a4f-ee1348b89809","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"14","message":"Configuring RBAC rules ...","name":"Configuring RBAC rules","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8713240-58e9-4ba5-8c6c-ee50502abfcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"15","message":"Configuring CNI (Container Networking Interface) ...","name":"Configuring CNI","totalsteps":"19"}}
	{"specversion":"1.0","id":"96158249-2f5a-4011-bc46-f0afac6cacbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"17","message":"Verifying Kubernetes components...","name":"Verifying Kubernetes","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a62ef83-3da0-40d9-869f-4ada5566a70b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using image gcr.io/k8s-minikube/storage-provisioner:v5"}}
	{"specversion":"1.0","id":"d6ab5efa-5fad-4860-ad01-c1a8e9f1fc67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"18","message":"Enabled addons: storage-provisioner, default-storageclass","name":"Enabling Addons","totalsteps":"19"}}

-- /stdout --
** stderr ** 
	E0120 15:20:03.930324  451059 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again

** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 start -p json-output-324440 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd": signal: killed
--- FAIL: TestJSONOutput/start/Command (2400.01s)

TestJSONOutput/pause/Command (0s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-324440 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-324440 --output=json --user=testUser: context deadline exceeded (1.975µs)
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-324440 --output=json --user=testUser": context deadline exceeded
--- FAIL: TestJSONOutput/pause/Command (0.00s)

TestJSONOutput/unpause/Command (0s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-324440 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-324440 --output=json --user=testUser: context deadline exceeded (837ns)
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-324440 --output=json --user=testUser": context deadline exceeded
--- FAIL: TestJSONOutput/unpause/Command (0.00s)

TestJSONOutput/stop/Command (0s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-324440 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p json-output-324440 --output=json --user=testUser: context deadline exceeded (602ns)
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 stop -p json-output-324440 --output=json --user=testUser": context deadline exceeded
--- FAIL: TestJSONOutput/stop/Command (0.00s)


Passed tests (302/330)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 7.24
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.0/json-events 3.98
13 TestDownloadOnly/v1.32.0/preload-exists 0
17 TestDownloadOnly/v1.32.0/LogsDuration 0.76
18 TestDownloadOnly/v1.32.0/DeleteAll 0.22
19 TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.13
21 TestBinaryMirror 0.78
22 TestOffline 58.93
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 219.72
29 TestAddons/serial/Volcano 39.37
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.46
35 TestAddons/parallel/Registry 17.97
36 TestAddons/parallel/Ingress 21.15
37 TestAddons/parallel/InspektorGadget 12.04
38 TestAddons/parallel/MetricsServer 5.72
40 TestAddons/parallel/CSI 49.9
41 TestAddons/parallel/Headlamp 18.59
42 TestAddons/parallel/CloudSpanner 5.68
43 TestAddons/parallel/LocalPath 53.63
44 TestAddons/parallel/NvidiaDevicePlugin 5.63
45 TestAddons/parallel/Yakd 10.8
46 TestAddons/parallel/AmdGpuDevicePlugin 5.64
47 TestAddons/StoppedEnableDisable 12.2
48 TestCertOptions 26.16
49 TestCertExpiration 216.64
51 TestForceSystemdFlag 28.1
52 TestForceSystemdEnv 37.42
53 TestDockerEnvContainerd 37.64
54 TestKVMDriverInstallOrUpdate 4.38
58 TestErrorSpam/setup 21.16
59 TestErrorSpam/start 0.6
60 TestErrorSpam/status 0.88
61 TestErrorSpam/pause 1.52
62 TestErrorSpam/unpause 1.56
63 TestErrorSpam/stop 1.38
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 74.85
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.5
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.8
75 TestFunctional/serial/CacheCmd/cache/add_local 1.73
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.53
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 41.39
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.35
86 TestFunctional/serial/LogsFileCmd 1.39
87 TestFunctional/serial/InvalidService 4.3
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 11.65
91 TestFunctional/parallel/DryRun 0.34
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 0.91
97 TestFunctional/parallel/ServiceCmdConnect 7.71
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 32.88
101 TestFunctional/parallel/SSHCmd 0.53
102 TestFunctional/parallel/CpCmd 1.8
103 TestFunctional/parallel/MySQL 22.54
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 1.82
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
113 TestFunctional/parallel/License 0.21
114 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
115 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
116 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
117 TestFunctional/parallel/Version/short 0.06
118 TestFunctional/parallel/Version/components 0.68
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
123 TestFunctional/parallel/ImageCommands/ImageBuild 3.69
124 TestFunctional/parallel/ImageCommands/Setup 1.56
125 TestFunctional/parallel/ServiceCmd/DeployApp 17.18
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.85
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 18.3
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.22
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.99
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.72
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.49
138 TestFunctional/parallel/ServiceCmd/List 0.52
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
141 TestFunctional/parallel/ServiceCmd/Format 0.34
142 TestFunctional/parallel/ServiceCmd/URL 0.33
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
150 TestFunctional/parallel/ProfileCmd/profile_list 0.39
151 TestFunctional/parallel/MountCmd/any-port 7.88
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
153 TestFunctional/parallel/MountCmd/specific-port 2.08
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.99
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 94.41
162 TestMultiControlPlane/serial/DeployApp 5.28
163 TestMultiControlPlane/serial/PingHostFromPods 1.08
164 TestMultiControlPlane/serial/AddWorkerNode 21.79
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
167 TestMultiControlPlane/serial/CopyFile 16.24
168 TestMultiControlPlane/serial/StopSecondaryNode 12.54
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
170 TestMultiControlPlane/serial/RestartSecondaryNode 15.37
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 122.82
173 TestMultiControlPlane/serial/DeleteSecondaryNode 9.2
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
175 TestMultiControlPlane/serial/StopCluster 35.77
176 TestMultiControlPlane/serial/RestartCluster 80.91
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
178 TestMultiControlPlane/serial/AddSecondaryNode 39.74
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
208 TestKicCustomNetwork/create_custom_network 27.51
209 TestKicCustomNetwork/use_default_bridge_network 24.98
210 TestKicExistingNetwork 25.12
211 TestKicCustomSubnet 25.59
212 TestKicStaticIP 25.82
213 TestMainNoArgs 0.05
214 TestMinikubeProfile 53.21
217 TestMountStart/serial/StartWithMountFirst 5.38
218 TestMountStart/serial/VerifyMountFirst 0.24
219 TestMountStart/serial/StartWithMountSecond 8.19
220 TestMountStart/serial/VerifyMountSecond 0.24
221 TestMountStart/serial/DeleteFirst 1.58
222 TestMountStart/serial/VerifyMountPostDelete 0.24
223 TestMountStart/serial/Stop 1.17
224 TestMountStart/serial/RestartStopped 6.92
225 TestMountStart/serial/VerifyMountPostStop 0.25
228 TestMultiNode/serial/FreshStart2Nodes 58.69
229 TestMultiNode/serial/DeployApp2Nodes 15.63
230 TestMultiNode/serial/PingHostFrom2Pods 0.72
231 TestMultiNode/serial/AddNode 17.26
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.63
234 TestMultiNode/serial/CopyFile 9.22
235 TestMultiNode/serial/StopNode 2.13
236 TestMultiNode/serial/StartAfterStop 8.52
237 TestMultiNode/serial/RestartKeepsNodes 81.44
238 TestMultiNode/serial/DeleteNode 4.99
239 TestMultiNode/serial/StopMultiNode 23.83
240 TestMultiNode/serial/RestartMultiNode 51.69
241 TestMultiNode/serial/ValidateNameConflict 22.84
246 TestPreload 107.67
248 TestScheduledStopUnix 98.67
251 TestInsufficientStorage 9.92
252 TestRunningBinaryUpgrade 74.48
254 TestKubernetesUpgrade 322.45
255 TestMissingContainerUpgrade 160.75
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
258 TestNoKubernetes/serial/StartWithK8s 35.63
259 TestNoKubernetes/serial/StartWithStopK8s 17.23
260 TestNoKubernetes/serial/Start 6.68
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
262 TestNoKubernetes/serial/ProfileList 2.13
263 TestNoKubernetes/serial/Stop 2.79
264 TestNoKubernetes/serial/StartNoArgs 6.1
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
266 TestStoppedBinaryUpgrade/Setup 0.36
267 TestStoppedBinaryUpgrade/Upgrade 84.02
275 TestNetworkPlugins/group/false 3.16
287 TestPause/serial/Start 43.7
288 TestPause/serial/SecondStartNoReconfiguration 5.84
289 TestPause/serial/Pause 0.76
290 TestPause/serial/VerifyStatus 0.32
291 TestPause/serial/Unpause 0.7
292 TestPause/serial/PauseAgain 0.78
293 TestPause/serial/DeletePaused 4.84
294 TestStoppedBinaryUpgrade/MinikubeLogs 0.9
295 TestPause/serial/VerifyDeletedResources 13.88
296 TestNetworkPlugins/group/auto/Start 43.68
297 TestNetworkPlugins/group/kindnet/Start 43.06
298 TestNetworkPlugins/group/calico/Start 53.66
299 TestNetworkPlugins/group/auto/KubeletFlags 0.31
300 TestNetworkPlugins/group/auto/NetCatPod 8.24
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/auto/DNS 0.16
303 TestNetworkPlugins/group/auto/Localhost 0.12
304 TestNetworkPlugins/group/auto/HairPin 0.12
305 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
306 TestNetworkPlugins/group/kindnet/NetCatPod 9.27
307 TestNetworkPlugins/group/kindnet/DNS 0.14
308 TestNetworkPlugins/group/kindnet/Localhost 0.12
309 TestNetworkPlugins/group/kindnet/HairPin 0.12
310 TestNetworkPlugins/group/calico/ControllerPod 6.01
311 TestNetworkPlugins/group/custom-flannel/Start 41.7
312 TestNetworkPlugins/group/calico/KubeletFlags 0.34
313 TestNetworkPlugins/group/calico/NetCatPod 10.8
314 TestNetworkPlugins/group/calico/DNS 0.17
315 TestNetworkPlugins/group/calico/Localhost 0.12
316 TestNetworkPlugins/group/calico/HairPin 0.16
317 TestNetworkPlugins/group/enable-default-cni/Start 62.9
318 TestNetworkPlugins/group/flannel/Start 43.93
319 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
320 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.24
321 TestNetworkPlugins/group/custom-flannel/DNS 0.15
322 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
323 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
324 TestNetworkPlugins/group/bridge/Start 62.51
325 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
326 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.23
327 TestNetworkPlugins/group/flannel/ControllerPod 6.01
328 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
329 TestNetworkPlugins/group/flannel/NetCatPod 9.25
330 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
331 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
332 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
334 TestStartStop/group/old-k8s-version/serial/FirstStart 134.17
335 TestNetworkPlugins/group/flannel/DNS 0.18
336 TestNetworkPlugins/group/flannel/Localhost 0.18
337 TestNetworkPlugins/group/flannel/HairPin 0.15
339 TestStartStop/group/no-preload/serial/FirstStart 65.64
341 TestStartStop/group/embed-certs/serial/FirstStart 46.36
342 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
343 TestNetworkPlugins/group/bridge/NetCatPod 9.21
344 TestNetworkPlugins/group/bridge/DNS 0.16
345 TestNetworkPlugins/group/bridge/Localhost 0.15
346 TestNetworkPlugins/group/bridge/HairPin 0.11
347 TestStartStop/group/embed-certs/serial/DeployApp 8.25
349 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.56
350 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.94
351 TestStartStop/group/embed-certs/serial/Stop 11.98
352 TestStartStop/group/no-preload/serial/DeployApp 8.25
353 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
354 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
355 TestStartStop/group/embed-certs/serial/SecondStart 263.09
356 TestStartStop/group/no-preload/serial/Stop 11.93
357 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
358 TestStartStop/group/no-preload/serial/SecondStart 263.24
359 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.31
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
361 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.04
362 TestStartStop/group/old-k8s-version/serial/DeployApp 9.43
363 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
364 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.39
365 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.02
366 TestStartStop/group/old-k8s-version/serial/Stop 12.43
367 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
368 TestStartStop/group/old-k8s-version/serial/SecondStart 125.59
369 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
370 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
371 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
372 TestStartStop/group/old-k8s-version/serial/Pause 2.62
374 TestStartStop/group/newest-cni/serial/FirstStart 26.72
375 TestStartStop/group/newest-cni/serial/DeployApp 0
376 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.86
377 TestStartStop/group/newest-cni/serial/Stop 1.76
378 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
379 TestStartStop/group/newest-cni/serial/SecondStart 12.9
380 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
383 TestStartStop/group/newest-cni/serial/Pause 2.89
384 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
386 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
387 TestStartStop/group/embed-certs/serial/Pause 2.68
388 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
389 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
390 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
391 TestStartStop/group/no-preload/serial/Pause 2.63
392 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
393 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
394 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
395 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.57
TestDownloadOnly/v1.20.0/json-events (7.24s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-516652 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-516652 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.241941886s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.24s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0120 15:01:38.762130  348924 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0120 15:01:38.762264  348924 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-341858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
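The preload tarball path found above follows a fixed naming pattern that also shows up later in this report for v1.32.0. As an illustrative sketch only (not minikube's actual Go code — the function name and the assumption that the `v18` schema revision is constant for this release are ours), the scheme can be reproduced as:

```python
# Preload schema revision seen in this run's cache paths and download URLs
# (assumption: fixed for the minikube v1.35.0 binary under test).
PRELOAD_SCHEMA = "v18"

def preload_tarball_name(k8s_version: str, runtime: str, arch: str = "amd64",
                         storage_driver: str = "overlay2") -> str:
    """Build the preload cache filename matching the pattern visible in this log."""
    return (f"preloaded-images-k8s-{PRELOAD_SCHEMA}-{k8s_version}"
            f"-{runtime}-{storage_driver}-{arch}.tar.lz4")

print(preload_tarball_name("v1.20.0", "containerd"))
# preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
```

The same pattern reproduces the v1.32.0 filename logged by the later `TestDownloadOnly/v1.32.0/preload-exists` run.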

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-516652
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-516652: exit status 85 (66.123758ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-516652 | jenkins | v1.35.0 | 20 Jan 25 15:01 UTC |          |
	|         | -p download-only-516652        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 15:01:31
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 15:01:31.566127  348936 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:01:31.566275  348936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:01:31.566282  348936 out.go:358] Setting ErrFile to fd 2...
	I0120 15:01:31.566289  348936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:01:31.566693  348936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-341858/.minikube/bin
	W0120 15:01:31.567193  348936 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20109-341858/.minikube/config/config.json: open /home/jenkins/minikube-integration/20109-341858/.minikube/config/config.json: no such file or directory
	I0120 15:01:31.567894  348936 out.go:352] Setting JSON to true
	I0120 15:01:31.568915  348936 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":17038,"bootTime":1737368254,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 15:01:31.569023  348936 start.go:139] virtualization: kvm guest
	I0120 15:01:31.571754  348936 out.go:97] [download-only-516652] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0120 15:01:31.571889  348936 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20109-341858/.minikube/cache/preloaded-tarball: no such file or directory
	I0120 15:01:31.571966  348936 notify.go:220] Checking for updates...
	I0120 15:01:31.573321  348936 out.go:169] MINIKUBE_LOCATION=20109
	I0120 15:01:31.574949  348936 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 15:01:31.576320  348936 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20109-341858/kubeconfig
	I0120 15:01:31.577551  348936 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-341858/.minikube
	I0120 15:01:31.578779  348936 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0120 15:01:31.580983  348936 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 15:01:31.581264  348936 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 15:01:31.604905  348936 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 15:01:31.604991  348936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 15:01:31.972534  348936 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-20 15:01:31.962123686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0120 15:01:31.972665  348936 docker.go:318] overlay module found
	I0120 15:01:31.974219  348936 out.go:97] Using the docker driver based on user configuration
	I0120 15:01:31.974252  348936 start.go:297] selected driver: docker
	I0120 15:01:31.974260  348936 start.go:901] validating driver "docker" against <nil>
	I0120 15:01:31.974400  348936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 15:01:32.023918  348936 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-20 15:01:32.015377951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0120 15:01:32.024193  348936 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 15:01:32.024948  348936 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0120 15:01:32.025181  348936 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 15:01:32.026997  348936 out.go:169] Using Docker driver with root privileges
	I0120 15:01:32.028122  348936 cni.go:84] Creating CNI manager for ""
	I0120 15:01:32.028202  348936 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0120 15:01:32.028214  348936 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0120 15:01:32.028293  348936 start.go:340] cluster config:
	{Name:download-only-516652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-516652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 15:01:32.029554  348936 out.go:97] Starting "download-only-516652" primary control-plane node in "download-only-516652" cluster
	I0120 15:01:32.029580  348936 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0120 15:01:32.030807  348936 out.go:97] Pulling base image v0.0.46 ...
	I0120 15:01:32.030834  348936 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 15:01:32.030940  348936 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0120 15:01:32.047339  348936 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0120 15:01:32.047584  348936 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0120 15:01:32.047685  348936 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0120 15:01:32.065586  348936 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0120 15:01:32.065617  348936 cache.go:56] Caching tarball of preloaded images
	I0120 15:01:32.065764  348936 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0120 15:01:32.067665  348936 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0120 15:01:32.067686  348936 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0120 15:01:32.094814  348936 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20109-341858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0120 15:01:36.116007  348936 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	
	
	* The control-plane node download-only-516652 host does not exist
	  To start a cluster, run: "minikube start -p download-only-516652"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
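The download URL recorded in the log above carries its own integrity check as a `checksum=md5:<hex>` query parameter. A minimal Python sketch of splitting that tag back out — an assumption about the general pattern, generalized from the single URL in this log; `split_checksum` is our name, not a minikube API:

```python
from urllib.parse import urlparse, parse_qs

# URL copied verbatim from the download.go:108 line in the log above.
url = ("https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/"
       "v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4"
       "?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd")

def split_checksum(u: str) -> tuple[str, str, str]:
    """Return (bare URL, checksum algorithm, hex digest) from a checksum-tagged URL."""
    parsed = urlparse(u)
    algo, digest = parse_qs(parsed.query)["checksum"][0].split(":", 1)
    bare = parsed._replace(query="").geturl()  # URL without the checksum tag
    return bare, algo, digest

bare, algo, digest = split_checksum(url)
print(algo, digest)  # md5 c28dc5b6f01e4b826afa7afc8a0fd1fd
```

The digest could then be compared against a local `hashlib.md5` of the downloaded tarball.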

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-516652
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.32.0/json-events (3.98s)

=== RUN   TestDownloadOnly/v1.32.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-898991 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-898991 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (3.974817938s)
--- PASS: TestDownloadOnly/v1.32.0/json-events (3.98s)

TestDownloadOnly/v1.32.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.0/preload-exists
I0120 15:01:43.158122  348924 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime containerd
I0120 15:01:43.158191  348924 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-341858/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.0/preload-exists (0.00s)

TestDownloadOnly/v1.32.0/LogsDuration (0.76s)

=== RUN   TestDownloadOnly/v1.32.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-898991
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-898991: exit status 85 (755.885388ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-516652 | jenkins | v1.35.0 | 20 Jan 25 15:01 UTC |                     |
	|         | -p download-only-516652        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 20 Jan 25 15:01 UTC | 20 Jan 25 15:01 UTC |
	| delete  | -p download-only-516652        | download-only-516652 | jenkins | v1.35.0 | 20 Jan 25 15:01 UTC | 20 Jan 25 15:01 UTC |
	| start   | -o=json --download-only        | download-only-898991 | jenkins | v1.35.0 | 20 Jan 25 15:01 UTC |                     |
	|         | -p download-only-898991        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/20 15:01:39
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0120 15:01:39.225999  349298 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:01:39.226236  349298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:01:39.226245  349298 out.go:358] Setting ErrFile to fd 2...
	I0120 15:01:39.226249  349298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:01:39.226435  349298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-341858/.minikube/bin
	I0120 15:01:39.227002  349298 out.go:352] Setting JSON to true
	I0120 15:01:39.227861  349298 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":17045,"bootTime":1737368254,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 15:01:39.228002  349298 start.go:139] virtualization: kvm guest
	I0120 15:01:39.230070  349298 out.go:97] [download-only-898991] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 15:01:39.230174  349298 notify.go:220] Checking for updates...
	I0120 15:01:39.231517  349298 out.go:169] MINIKUBE_LOCATION=20109
	I0120 15:01:39.232758  349298 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 15:01:39.233970  349298 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20109-341858/kubeconfig
	I0120 15:01:39.235160  349298 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-341858/.minikube
	I0120 15:01:39.236338  349298 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0120 15:01:39.238555  349298 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0120 15:01:39.238773  349298 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 15:01:39.263631  349298 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 15:01:39.263733  349298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 15:01:39.312422  349298 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-20 15:01:39.303703169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0120 15:01:39.312534  349298 docker.go:318] overlay module found
	I0120 15:01:39.314243  349298 out.go:97] Using the docker driver based on user configuration
	I0120 15:01:39.314272  349298 start.go:297] selected driver: docker
	I0120 15:01:39.314281  349298 start.go:901] validating driver "docker" against <nil>
	I0120 15:01:39.314372  349298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 15:01:39.365943  349298 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:47 SystemTime:2025-01-20 15:01:39.356935877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0120 15:01:39.366126  349298 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0120 15:01:39.366615  349298 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0120 15:01:39.366772  349298 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0120 15:01:39.368477  349298 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-898991 host does not exist
	  To start a cluster, run: "minikube start -p download-only-898991"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.0/LogsDuration (0.76s)
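Both log dumps above state their own header format: `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg` (the klog convention). A small sketch for parsing such lines, e.g. when post-processing reports like this one — the regex and function name are ours, written against the lines actually shown above:

```python
import re

# Regex for the klog header format documented in the log itself:
# [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_RE = re.compile(
    r"^(?P<level>[IWEF])(?P<mmdd>\d{4}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<threadid>\d+) (?P<file>[^:]+):(?P<line>\d+)\] (?P<msg>.*)$"
)

def parse_klog(line: str) -> dict:
    """Split one klog-formatted log line into its named header fields."""
    m = KLOG_RE.match(line)
    if m is None:
        raise ValueError(f"not a klog line: {line!r}")
    return m.groupdict()

rec = parse_klog("I0120 15:01:39.230070  349298 out.go:97] "
                 "[download-only-898991] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)")
print(rec["level"], rec["file"], rec["line"])  # I out.go 97
```

The `E0120 ... cert_rotation.go:171` lines in the failure summary at the top of this report match the same pattern, with level `E` for error.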

TestDownloadOnly/v1.32.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.32.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.0/DeleteAll (0.22s)

TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-898991
--- PASS: TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.13s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-654339 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-654339" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-654339
--- PASS: TestDownloadOnlyKic (1.13s)

TestBinaryMirror (0.78s)

=== RUN   TestBinaryMirror
I0120 15:01:45.668612  348924 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-094781 --alsologtostderr --binary-mirror http://127.0.0.1:43157 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-094781" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-094781
--- PASS: TestBinaryMirror (0.78s)

TestOffline (58.93s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-356850 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-356850 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (54.432236722s)
helpers_test.go:175: Cleaning up "offline-containerd-356850" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-356850
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-356850: (4.500520606s)
--- PASS: TestOffline (58.93s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-766086
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-766086: exit status 85 (57.787576ms)
-- stdout --
	* Profile "addons-766086" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-766086"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-766086
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-766086: exit status 85 (57.303589ms)
-- stdout --
	* Profile "addons-766086" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-766086"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (219.72s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-766086 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-766086 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m39.719665228s)
--- PASS: TestAddons/Setup (219.72s)

TestAddons/serial/Volcano (39.37s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:815: volcano-admission stabilized in 15.88069ms
addons_test.go:807: volcano-scheduler stabilized in 15.937036ms
addons_test.go:823: volcano-controller stabilized in 16.018199ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-8bbtc" [738f5bb0-786d-4de6-97f2-2ca8444a007d] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003710775s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-w9254" [5115ce0b-2c4e-48f0-a2f0-6f8fc6aa4713] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003623259s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-v8hmc" [a7ed711d-db19-4057-8d34-201b71e5536e] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002875898s
addons_test.go:842: (dbg) Run:  kubectl --context addons-766086 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-766086 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-766086 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [bf457993-6688-4a62-9a00-35a1e056312c] Pending
helpers_test.go:344: "test-job-nginx-0" [bf457993-6688-4a62-9a00-35a1e056312c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [bf457993-6688-4a62-9a00-35a1e056312c] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.00388118s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-766086 addons disable volcano --alsologtostderr -v=1: (11.017682937s)
--- PASS: TestAddons/serial/Volcano (39.37s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-766086 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-766086 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (8.46s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-766086 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-766086 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [48ee313d-5dd1-4aa5-880f-60883718ed8e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [48ee313d-5dd1-4aa5-880f-60883718ed8e] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003901175s
addons_test.go:633: (dbg) Run:  kubectl --context addons-766086 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-766086 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-766086 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.46s)

TestAddons/parallel/Registry (17.97s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.111452ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c86875c6f-lc5tm" [5183d335-0536-4d88-9919-684e3a53a427] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.012920718s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-l6fn2" [ed978c30-8633-4e03-9004-4d837a82576a] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00348459s
addons_test.go:331: (dbg) Run:  kubectl --context addons-766086 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-766086 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-766086 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.136090503s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 ip
2025/01/20 15:06:40 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.97s)

TestAddons/parallel/Ingress (21.15s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-766086 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-766086 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-766086 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a04d9585-21e9-4931-872b-5d4142bd09c1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a04d9585-21e9-4931-872b-5d4142bd09c1] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003748832s
I0120 15:07:08.648505  348924 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-766086 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-766086 addons disable ingress-dns --alsologtostderr -v=1: (1.959634961s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-766086 addons disable ingress --alsologtostderr -v=1: (7.980899727s)
--- PASS: TestAddons/parallel/Ingress (21.15s)

TestAddons/parallel/InspektorGadget (12.04s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-95rkz" [fd7f8e95-ad4f-457c-8670-0e2cc0a0a721] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004499681s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-766086 addons disable inspektor-gadget --alsologtostderr -v=1: (6.035651007s)
--- PASS: TestAddons/parallel/InspektorGadget (12.04s)

TestAddons/parallel/MetricsServer (5.72s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 45.880949ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-hsxqt" [83e823c9-92df-4df3-83a7-e97167e9d951] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004314381s
addons_test.go:402: (dbg) Run:  kubectl --context addons-766086 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.72s)

TestAddons/parallel/CSI (49.9s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0120 15:06:28.751346  348924 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 11.749903ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-766086 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-766086 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0626089d-47fe-4790-a43b-052fdd82cdf0] Pending
helpers_test.go:344: "task-pv-pod" [0626089d-47fe-4790-a43b-052fdd82cdf0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0626089d-47fe-4790-a43b-052fdd82cdf0] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.009955213s
addons_test.go:511: (dbg) Run:  kubectl --context addons-766086 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-766086 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-766086 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-766086 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-766086 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-766086 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-766086 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a8897d2e-6b16-4f98-980e-239820658e01] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a8897d2e-6b16-4f98-980e-239820658e01] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004284693s
addons_test.go:553: (dbg) Run:  kubectl --context addons-766086 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-766086 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-766086 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-766086 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.779830966s)
--- PASS: TestAddons/parallel/CSI (49.90s)

TestAddons/parallel/Headlamp (18.59s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-766086 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-789jp" [723ae60d-f2d5-42a1-81a0-6b121cb56034] Pending
helpers_test.go:344: "headlamp-69d78d796f-789jp" [723ae60d-f2d5-42a1-81a0-6b121cb56034] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-789jp" [723ae60d-f2d5-42a1-81a0-6b121cb56034] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003469848s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-766086 addons disable headlamp --alsologtostderr -v=1: (5.672826645s)
--- PASS: TestAddons/parallel/Headlamp (18.59s)

TestAddons/parallel/CloudSpanner (5.68s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-7qxjh" [ff7afa80-93fd-4e8f-a7f9-289a4ccdf829] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004262474s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.68s)

TestAddons/parallel/LocalPath (53.63s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-766086 apply -f testdata/storage-provisioner-rancher/pvc.yaml
I0120 15:06:28.763000  348924 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0120 15:06:28.763045  348924 kapi.go:107] duration metric: took 11.732197ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:894: (dbg) Run:  kubectl --context addons-766086 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-766086 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b7334a98-c88b-40ce-93b1-5cc60f901ce7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b7334a98-c88b-40ce-93b1-5cc60f901ce7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b7334a98-c88b-40ce-93b1-5cc60f901ce7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003383086s
addons_test.go:906: (dbg) Run:  kubectl --context addons-766086 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 ssh "cat /opt/local-path-provisioner/pvc-68e6755a-e8da-4604-8a18-46074952ecc9_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-766086 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-766086 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-766086 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.589844335s)
--- PASS: TestAddons/parallel/LocalPath (53.63s)

TestAddons/parallel/NvidiaDevicePlugin (5.63s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xg257" [8620242b-3279-446e-94d2-3eb6062b5fec] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003321376s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.63s)

                                                
                                    
TestAddons/parallel/Yakd (10.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-ds6w9" [f0ab5648-7b08-4470-9682-9ad9ef1c071c] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003436856s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-766086 addons disable yakd --alsologtostderr -v=1: (5.796001417s)
--- PASS: TestAddons/parallel/Yakd (10.80s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.64s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-9zh8c" [a6ea4ff2-b3a3-41c4-b328-9d89ae1df8e1] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.004433694s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-766086 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.64s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.2s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-766086
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-766086: (11.924633848s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-766086
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-766086
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-766086
--- PASS: TestAddons/StoppedEnableDisable (12.20s)

                                                
                                    
TestCertOptions (26.16s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-444785 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-444785 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (21.975702482s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-444785 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-444785 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-444785 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-444785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-444785
E0120 16:16:05.152170  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-444785: (3.532598323s)
--- PASS: TestCertOptions (26.16s)

                                                
                                    
TestCertExpiration (216.64s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-157019 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-157019 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (27.967343456s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-157019 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-157019 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.998812968s)
helpers_test.go:175: Cleaning up "cert-expiration-157019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-157019
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-157019: (2.675685882s)
--- PASS: TestCertExpiration (216.64s)

                                                
                                    
TestForceSystemdFlag (28.1s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-391242 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-391242 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (25.59463749s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-391242 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-391242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-391242
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-391242: (2.245592437s)
--- PASS: TestForceSystemdFlag (28.10s)

                                                
                                    
TestForceSystemdEnv (37.42s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-379594 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-379594 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.322715876s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-379594 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-379594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-379594
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-379594: (5.740507215s)
--- PASS: TestForceSystemdEnv (37.42s)

                                                
                                    
TestDockerEnvContainerd (37.64s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-776873 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-776873 --driver=docker  --container-runtime=containerd: (22.201996783s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-776873"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-o8dNI26y05Lh/agent.376004" SSH_AGENT_PID="376005" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-o8dNI26y05Lh/agent.376004" SSH_AGENT_PID="376005" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-o8dNI26y05Lh/agent.376004" SSH_AGENT_PID="376005" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.650611879s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-o8dNI26y05Lh/agent.376004" SSH_AGENT_PID="376005" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-776873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-776873
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-776873: (1.877237677s)
--- PASS: TestDockerEnvContainerd (37.64s)
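The docker-env flow above points a local `docker` client at the minikube node over SSH. A minimal sketch of the environment the test exports before running `docker version` and `docker build` (variable names taken from the log; helper names and example values are illustrative, not minikube's actual code):

```python
import shlex

def docker_env(ssh_sock: str, agent_pid: int, host: str, port: int) -> dict:
    """Environment a docker invocation needs to reach the node over SSH,
    mirroring the variables visible in the log above."""
    return {
        "SSH_AUTH_SOCK": ssh_sock,            # agent socket created by ssh-agent
        "SSH_AGENT_PID": str(agent_pid),
        "DOCKER_HOST": f"ssh://docker@{host}:{port}",  # tunnel to the node's daemon
    }

def as_shell_prefix(env: dict) -> str:
    """Render the env as a VAR=value prefix for a shell command,
    the way the test invocations above are constructed."""
    return " ".join(f"{k}={shlex.quote(v)}" for k, v in env.items())
```

With `DOCKER_HOST=ssh://...` set, the stock docker CLI transparently proxies every API call over SSH, which is why the subsequent `docker build` and `docker image ls` in the log need no further configuration.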

                                                
                                    
TestKVMDriverInstallOrUpdate (4.38s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0120 16:14:36.864632  348924 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 16:14:36.864768  348924 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0120 16:14:36.894911  348924 install.go:62] docker-machine-driver-kvm2: exit status 1
W0120 16:14:36.895269  348924 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0120 16:14:36.895330  348924 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate336550047/001/docker-machine-driver-kvm2
I0120 16:14:37.129346  348924 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate336550047/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660] Decompressors:map[bz2:0xc000619220 gz:0xc000619228 tar:0xc0006191d0 tar.bz2:0xc0006191e0 tar.gz:0xc0006191f0 tar.xz:0xc000619200 tar.zst:0xc000619210 tbz2:0xc0006191e0 tgz:0xc0006191f0 txz:0xc000619200 tzst:0xc000619210 xz:0xc000619230 zip:0xc000619240 zst:0xc000619238] Getters:map[file:0xc001e30af0 http:0xc001d90460 https:0xc001d904b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0120 16:14:37.129409  348924 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate336550047/001/docker-machine-driver-kvm2
I0120 16:14:39.451280  348924 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 16:14:39.451440  348924 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0120 16:14:39.487747  348924 install.go:137] /home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0120 16:14:39.487785  348924 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0120 16:14:39.487857  348924 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0120 16:14:39.487886  348924 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate336550047/002/docker-machine-driver-kvm2
I0120 16:14:39.542273  348924 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate336550047/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660 0x530a660] Decompressors:map[bz2:0xc000619220 gz:0xc000619228 tar:0xc0006191d0 tar.bz2:0xc0006191e0 tar.gz:0xc0006191f0 tar.xz:0xc000619200 tar.zst:0xc000619210 tbz2:0xc0006191e0 tgz:0xc0006191f0 txz:0xc000619200 tzst:0xc000619210 xz:0xc000619230 zip:0xc000619240 zst:0xc000619238] Getters:map[file:0xc000225c40 http:0xc001bfcaf0 https:0xc001bfcb40] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0120 16:14:39.542327  348924 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate336550047/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.38s)
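The log above shows the driver download trying an arch-specific release asset first and, when its checksum file 404s, falling back to the unsuffixed "common" asset. A sketch of that fallback logic under stated assumptions (function names and the injected `fetch` callback are illustrative, not minikube's actual code):

```python
# Sketch of the download fallback visible in the TestKVMDriverInstallOrUpdate
# log: try <driver>-<arch> with its .sha256 checksum, then fall back to the
# plain <driver> asset if the first attempt fails (e.g. HTTP 404).

BASE = "https://github.com/kubernetes/minikube/releases/download"

def candidate_urls(version: str, driver: str, arch: str) -> list[str]:
    """URLs in the order the log shows them being tried."""
    specific = f"{BASE}/{version}/{driver}-{arch}"
    common = f"{BASE}/{version}/{driver}"
    # Each URL carries a checksum= query pointing at its own .sha256 file,
    # matching the Src fields in the getter struct logged above.
    return [f"{u}?checksum=file:{u}.sha256" for u in (specific, common)]

def download(version: str, driver: str, arch: str, fetch) -> str:
    """fetch(url) raises OSError on HTTP errors; returns the URL that worked."""
    last_err = None
    for url in candidate_urls(version, driver, arch):
        try:
            fetch(url)
            return url          # first asset whose checksum file resolves wins
        except OSError as err:
            last_err = err      # e.g. checksum file 404; try the common asset
    raise last_err
```

This mirrors why the log prints the "failed to download arch specific driver ... trying to get the common version" warning and then immediately logs a second download of the un-suffixed URL.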

                                                
                                    
TestErrorSpam/setup (21.16s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-977962 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-977962 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-977962 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-977962 --driver=docker  --container-runtime=containerd: (21.155972637s)
--- PASS: TestErrorSpam/setup (21.16s)

                                                
                                    
TestErrorSpam/start (0.6s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-977962 --log_dir /tmp/nospam-977962 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-977962 --log_dir /tmp/nospam-977962 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-977962 --log_dir /tmp/nospam-977962 start --dry-run
--- PASS: TestErrorSpam/start (0.60s)

                                                
                                    
TestErrorSpam/status (0.88s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-977962 --log_dir /tmp/nospam-977962 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-977962 --log_dir /tmp/nospam-977962 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-977962 --log_dir /tmp/nospam-977962 status
--- PASS: TestErrorSpam/status (0.88s)

                                                
                                    
TestErrorSpam/pause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-977962 --log_dir /tmp/nospam-977962 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-977962 --log_dir /tmp/nospam-977962 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-977962 --log_dir /tmp/nospam-977962 pause
--- PASS: TestErrorSpam/pause (1.52s)

                                                
                                    
TestErrorSpam/unpause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-977962 --log_dir /tmp/nospam-977962 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-977962 --log_dir /tmp/nospam-977962 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-977962 --log_dir /tmp/nospam-977962 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

                                                
                                    
TestErrorSpam/stop (1.38s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-977962 --log_dir /tmp/nospam-977962 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-977962 --log_dir /tmp/nospam-977962 stop: (1.188470452s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-977962 --log_dir /tmp/nospam-977962 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-977962 --log_dir /tmp/nospam-977962 stop
--- PASS: TestErrorSpam/stop (1.38s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20109-341858/.minikube/files/etc/test/nested/copy/348924/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (74.85s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-961919 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-961919 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m14.852104798s)
--- PASS: TestFunctional/serial/StartWithProxy (74.85s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.5s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0120 15:10:03.517180  348924 config.go:182] Loaded profile config "functional-961919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-961919 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-961919 --alsologtostderr -v=8: (5.499262108s)
functional_test.go:663: soft start took 5.500026527s for "functional-961919" cluster.
I0120 15:10:09.016804  348924 config.go:182] Loaded profile config "functional-961919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/SoftStart (5.50s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-961919 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.8s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-961919 cache add registry.k8s.io/pause:3.1: (1.018740091s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.80s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-961919 /tmp/TestFunctionalserialCacheCmdcacheadd_local1966483768/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 cache add minikube-local-cache-test:functional-961919
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-961919 cache add minikube-local-cache-test:functional-961919: (1.411850028s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 cache delete minikube-local-cache-test:functional-961919
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-961919
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.73s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-961919 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (278.128512ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 kubectl -- --context functional-961919 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-961919 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.39s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-961919 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0120 15:10:26.236153  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:10:26.242512  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:10:26.253860  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:10:26.275278  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:10:26.316690  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:10:26.398162  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:10:26.559698  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:10:26.881191  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:10:27.523301  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:10:28.804932  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:10:31.367870  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:10:36.489524  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:10:46.731764  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-961919 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.392795368s)
functional_test.go:761: restart took 41.392923652s for "functional-961919" cluster.
I0120 15:10:57.312734  348924 config.go:182] Loaded profile config "functional-961919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/ExtraConfig (41.39s)
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-961919 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
TestFunctional/serial/LogsCmd (1.35s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-961919 logs: (1.354084517s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)
TestFunctional/serial/LogsFileCmd (1.39s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 logs --file /tmp/TestFunctionalserialLogsFileCmd2073033562/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-961919 logs --file /tmp/TestFunctionalserialLogsFileCmd2073033562/001/logs.txt: (1.389184888s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)
TestFunctional/serial/InvalidService (4.3s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-961919 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-961919
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-961919: exit status 115 (355.079606ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32173 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-961919 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.30s)
TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-961919 config get cpus: exit status 14 (76.667247ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-961919 config get cpus: exit status 14 (76.677332ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
TestFunctional/parallel/DashboardCmd (11.65s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-961919 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-961919 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 397556: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.65s)
TestFunctional/parallel/DryRun (0.34s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-961919 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-961919 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (146.732322ms)
-- stdout --
	* [functional-961919] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-341858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-341858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0120 15:11:27.947354  396750 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:11:27.947644  396750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:11:27.947653  396750 out.go:358] Setting ErrFile to fd 2...
	I0120 15:11:27.947658  396750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:11:27.947834  396750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-341858/.minikube/bin
	I0120 15:11:27.948438  396750 out.go:352] Setting JSON to false
	I0120 15:11:27.949627  396750 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":17634,"bootTime":1737368254,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 15:11:27.949733  396750 start.go:139] virtualization: kvm guest
	I0120 15:11:27.951991  396750 out.go:177] * [functional-961919] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 15:11:27.954046  396750 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 15:11:27.954067  396750 notify.go:220] Checking for updates...
	I0120 15:11:27.956640  396750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 15:11:27.957868  396750 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-341858/kubeconfig
	I0120 15:11:27.959031  396750 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-341858/.minikube
	I0120 15:11:27.960131  396750 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 15:11:27.961978  396750 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 15:11:27.963548  396750 config.go:182] Loaded profile config "functional-961919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 15:11:27.964061  396750 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 15:11:27.987937  396750 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 15:11:27.988061  396750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 15:11:28.035563  396750 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:55 SystemTime:2025-01-20 15:11:28.026940113 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0120 15:11:28.035677  396750 docker.go:318] overlay module found
	I0120 15:11:28.037609  396750 out.go:177] * Using the docker driver based on existing profile
	I0120 15:11:28.039523  396750 start.go:297] selected driver: docker
	I0120 15:11:28.039538  396750 start.go:901] validating driver "docker" against &{Name:functional-961919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-961919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 15:11:28.039649  396750 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 15:11:28.041551  396750 out.go:201] 
	W0120 15:11:28.042667  396750 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0120 15:11:28.043865  396750 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-961919 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.34s)
TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-961919 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-961919 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (160.138254ms)
-- stdout --
	* [functional-961919] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-341858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-341858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0120 15:11:28.287776  396949 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:11:28.287895  396949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:11:28.287904  396949 out.go:358] Setting ErrFile to fd 2...
	I0120 15:11:28.287908  396949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:11:28.288266  396949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-341858/.minikube/bin
	I0120 15:11:28.288854  396949 out.go:352] Setting JSON to false
	I0120 15:11:28.289959  396949 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":17634,"bootTime":1737368254,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 15:11:28.290114  396949 start.go:139] virtualization: kvm guest
	I0120 15:11:28.292247  396949 out.go:177] * [functional-961919] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0120 15:11:28.293640  396949 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 15:11:28.293643  396949 notify.go:220] Checking for updates...
	I0120 15:11:28.296174  396949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 15:11:28.297584  396949 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-341858/kubeconfig
	I0120 15:11:28.298772  396949 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-341858/.minikube
	I0120 15:11:28.299859  396949 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 15:11:28.300976  396949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 15:11:28.302618  396949 config.go:182] Loaded profile config "functional-961919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 15:11:28.303096  396949 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 15:11:28.325919  396949 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 15:11:28.326054  396949 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 15:11:28.384280  396949 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:55 SystemTime:2025-01-20 15:11:28.373720219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0120 15:11:28.384391  396949 docker.go:318] overlay module found
	I0120 15:11:28.387206  396949 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0120 15:11:28.388506  396949 start.go:297] selected driver: docker
	I0120 15:11:28.388540  396949 start.go:901] validating driver "docker" against &{Name:functional-961919 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-961919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0120 15:11:28.388640  396949 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 15:11:28.390648  396949 out.go:201] 
	W0120 15:11:28.392139  396949 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0120 15:11:28.393400  396949 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
TestFunctional/parallel/StatusCmd (0.91s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)
TestFunctional/parallel/ServiceCmdConnect (7.71s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-961919 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-961919 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-stgjq" [0638ed7d-6e1a-45a4-9847-ffade4ee8b12] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-stgjq" [0638ed7d-6e1a-45a4-9847-ffade4ee8b12] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003849862s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31619
functional_test.go:1675: http://192.168.49.2:31619: success! body:

Hostname: hello-node-connect-58f9cf68d8-stgjq

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31619
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.71s)

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (32.88s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2ee9f17d-39b6-40a4-abe9-500f0df22f4c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.060107926s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-961919 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-961919 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-961919 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-961919 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6d626aa2-2bc3-49ea-b2db-ea207bd639de] Pending
helpers_test.go:344: "sp-pod" [6d626aa2-2bc3-49ea-b2db-ea207bd639de] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6d626aa2-2bc3-49ea-b2db-ea207bd639de] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.007021431s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-961919 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-961919 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-961919 delete -f testdata/storage-provisioner/pod.yaml: (2.990028032s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-961919 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0cd0a871-7997-4829-a0d0-b45b9c30a87c] Pending
helpers_test.go:344: "sp-pod" [0cd0a871-7997-4829-a0d0-b45b9c30a87c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0cd0a871-7997-4829-a0d0-b45b9c30a87c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.002993411s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-961919 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.88s)

TestFunctional/parallel/SSHCmd (0.53s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (1.8s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh -n functional-961919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 cp functional-961919:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1216323635/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh -n functional-961919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh -n functional-961919 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.80s)

TestFunctional/parallel/MySQL (22.54s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-961919 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-6x5jm" [3a252393-ee40-4856-a68b-e80cff81e816] Pending
helpers_test.go:344: "mysql-58ccfd96bb-6x5jm" [3a252393-ee40-4856-a68b-e80cff81e816] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-6x5jm" [3a252393-ee40-4856-a68b-e80cff81e816] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.003983018s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-961919 exec mysql-58ccfd96bb-6x5jm -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-961919 exec mysql-58ccfd96bb-6x5jm -- mysql -ppassword -e "show databases;": exit status 1 (109.484303ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0120 15:11:21.265160  348924 retry.go:31] will retry after 1.10655317s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-961919 exec mysql-58ccfd96bb-6x5jm -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-961919 exec mysql-58ccfd96bb-6x5jm -- mysql -ppassword -e "show databases;": exit status 1 (116.395318ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0120 15:11:22.488383  348924 retry.go:31] will retry after 1.425309994s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-961919 exec mysql-58ccfd96bb-6x5jm -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-961919 exec mysql-58ccfd96bb-6x5jm -- mysql -ppassword -e "show databases;": exit status 1 (136.677491ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0120 15:11:24.051409  348924 retry.go:31] will retry after 3.326001766s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-961919 exec mysql-58ccfd96bb-6x5jm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.54s)

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/348924/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "sudo cat /etc/test/nested/copy/348924/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.82s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/348924.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "sudo cat /etc/ssl/certs/348924.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/348924.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "sudo cat /usr/share/ca-certificates/348924.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3489242.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "sudo cat /etc/ssl/certs/3489242.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3489242.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "sudo cat /usr/share/ca-certificates/3489242.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.82s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-961919 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-961919 ssh "sudo systemctl is-active docker": exit status 1 (313.656394ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-961919 ssh "sudo systemctl is-active crio": exit status 1 (292.375369ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.68s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-961919 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.0
registry.k8s.io/kube-proxy:v1.32.0
registry.k8s.io/kube-controller-manager:v1.32.0
registry.k8s.io/kube-apiserver:v1.32.0
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-961919
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-961919
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-961919 image ls --format short --alsologtostderr:
I0120 15:11:34.524732  398598 out.go:345] Setting OutFile to fd 1 ...
I0120 15:11:34.525029  398598 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:11:34.525043  398598 out.go:358] Setting ErrFile to fd 2...
I0120 15:11:34.525050  398598 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:11:34.525346  398598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-341858/.minikube/bin
I0120 15:11:34.525984  398598 config.go:182] Loaded profile config "functional-961919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 15:11:34.526105  398598 config.go:182] Loaded profile config "functional-961919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 15:11:34.526563  398598 cli_runner.go:164] Run: docker container inspect functional-961919 --format={{.State.Status}}
I0120 15:11:34.551273  398598 ssh_runner.go:195] Run: systemctl --version
I0120 15:11:34.551330  398598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-961919
I0120 15:11:34.573057  398598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20109-341858/.minikube/machines/functional-961919/id_rsa Username:docker}
I0120 15:11:34.677478  398598 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-961919 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:50415e | 38.6MB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-apiserver              | v1.32.0            | sha256:c2e17b | 28.7MB |
| registry.k8s.io/kube-scheduler              | v1.32.0            | sha256:a389e1 | 20.7MB |
| docker.io/library/minikube-local-cache-test | functional-961919  | sha256:93521c | 991B   |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:a9e7e6 | 57.7MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| localhost/my-image                          | functional-961919  | sha256:7352e8 | 775kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kicbase/echo-server               | functional-961919  | sha256:9056ab | 2.37MB |
| docker.io/library/nginx                     | alpine             | sha256:93f9c7 | 20.5MB |
| docker.io/library/nginx                     | latest             | sha256:9bea9f | 72.1MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-controller-manager     | v1.32.0            | sha256:8cab3d | 26.3MB |
| registry.k8s.io/kube-proxy                  | v1.32.0            | sha256:040f9f | 30.9MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-961919 image ls --format table --alsologtostderr:
I0120 15:11:39.052802  400401 out.go:345] Setting OutFile to fd 1 ...
I0120 15:11:39.053094  400401 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:11:39.053109  400401 out.go:358] Setting ErrFile to fd 2...
I0120 15:11:39.053116  400401 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:11:39.053309  400401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-341858/.minikube/bin
I0120 15:11:39.053987  400401 config.go:182] Loaded profile config "functional-961919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 15:11:39.054090  400401 config.go:182] Loaded profile config "functional-961919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 15:11:39.054516  400401 cli_runner.go:164] Run: docker container inspect functional-961919 --format={{.State.Status}}
I0120 15:11:39.076526  400401 ssh_runner.go:195] Run: systemctl --version
I0120 15:11:39.076584  400401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-961919
I0120 15:11:39.096465  400401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20109-341858/.minikube/machines/functional-961919/id_rsa Username:docker}
I0120 15:11:39.185525  400401 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-961919 image ls --format json --alsologtostderr:
[{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-961919"],"size":"2372971"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.0"],"size":"26254834"},{"id":"sha256:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5","repoDigests":["registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.0"],"size":"20656471"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:93521c19194088ebd3b28f4b6797898789ee4595096110a9e14ff192f3c992d4","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-961919"],"size":"991"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"57680541"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3","repoDigests":["docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4"],"repoTags":["docker.io/library/nginx:alpine"],"size":"20534112"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.0"],"size":"28670542"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"38601118"},{"id":"sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a"],"repoTags":["docker.io/library/nginx:latest"],"size":"72080558"},{"id":"sha256:7352e827e1898086e66eb2f430f1fa35069bb1bac93445d20614fe431c201682","repoDigests":[],"repoTags":["localhost/my-image:functional-961919"],"size":"774889"},{"id":"sha256:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08","repoDigests":["registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.0"],"size":"30906462"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-961919 image ls --format json --alsologtostderr:
I0120 15:11:38.801509  400283 out.go:345] Setting OutFile to fd 1 ...
I0120 15:11:38.801683  400283 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:11:38.801699  400283 out.go:358] Setting ErrFile to fd 2...
I0120 15:11:38.801707  400283 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:11:38.802081  400283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-341858/.minikube/bin
I0120 15:11:38.803184  400283 config.go:182] Loaded profile config "functional-961919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 15:11:38.803522  400283 config.go:182] Loaded profile config "functional-961919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 15:11:38.804135  400283 cli_runner.go:164] Run: docker container inspect functional-961919 --format={{.State.Status}}
I0120 15:11:38.822081  400283 ssh_runner.go:195] Run: systemctl --version
I0120 15:11:38.822144  400283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-961919
I0120 15:11:38.844631  400283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20109-341858/.minikube/machines/functional-961919/id_rsa Username:docker}
I0120 15:11:38.942334  400283 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-961919 image ls --format yaml --alsologtostderr:
- id: sha256:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.0
size: "28670542"
- id: sha256:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.0
size: "20656471"
- id: sha256:93521c19194088ebd3b28f4b6797898789ee4595096110a9e14ff192f3c992d4
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-961919
size: "991"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.0
size: "26254834"
- id: sha256:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4
repoTags:
- registry.k8s.io/kube-proxy:v1.32.0
size: "30906462"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "57680541"
- id: sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "38601118"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
repoTags:
- docker.io/library/nginx:latest
size: "72080558"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-961919
size: "2372971"
- id: sha256:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3
repoDigests:
- docker.io/library/nginx@sha256:814a8e88df978ade80e584cc5b333144b9372a8e3c98872d07137dbf3b44d0e4
repoTags:
- docker.io/library/nginx:alpine
size: "20534112"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-961919 image ls --format yaml --alsologtostderr:
I0120 15:11:34.820554  398714 out.go:345] Setting OutFile to fd 1 ...
I0120 15:11:34.820764  398714 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:11:34.820776  398714 out.go:358] Setting ErrFile to fd 2...
I0120 15:11:34.820783  398714 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:11:34.820969  398714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-341858/.minikube/bin
I0120 15:11:34.821676  398714 config.go:182] Loaded profile config "functional-961919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 15:11:34.821822  398714 config.go:182] Loaded profile config "functional-961919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 15:11:34.822426  398714 cli_runner.go:164] Run: docker container inspect functional-961919 --format={{.State.Status}}
I0120 15:11:34.847394  398714 ssh_runner.go:195] Run: systemctl --version
I0120 15:11:34.847445  398714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-961919
I0120 15:11:34.868347  398714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20109-341858/.minikube/machines/functional-961919/id_rsa Username:docker}
I0120 15:11:34.977283  398714 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-961919 ssh pgrep buildkitd: exit status 1 (293.835127ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image build -t localhost/my-image:functional-961919 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-961919 image build -t localhost/my-image:functional-961919 testdata/build --alsologtostderr: (3.151058906s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-961919 image build -t localhost/my-image:functional-961919 testdata/build --alsologtostderr:
I0120 15:11:35.404200  398992 out.go:345] Setting OutFile to fd 1 ...
I0120 15:11:35.405220  398992 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:11:35.405243  398992 out.go:358] Setting ErrFile to fd 2...
I0120 15:11:35.405251  398992 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:11:35.405604  398992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-341858/.minikube/bin
I0120 15:11:35.406606  398992 config.go:182] Loaded profile config "functional-961919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 15:11:35.407551  398992 config.go:182] Loaded profile config "functional-961919": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
I0120 15:11:35.407996  398992 cli_runner.go:164] Run: docker container inspect functional-961919 --format={{.State.Status}}
I0120 15:11:35.429306  398992 ssh_runner.go:195] Run: systemctl --version
I0120 15:11:35.429376  398992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-961919
I0120 15:11:35.458096  398992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20109-341858/.minikube/machines/functional-961919/id_rsa Username:docker}
I0120 15:11:35.566318  398992 build_images.go:161] Building image from path: /tmp/build.3327063776.tar
I0120 15:11:35.566405  398992 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0120 15:11:35.579885  398992 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3327063776.tar
I0120 15:11:35.584003  398992 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3327063776.tar: stat -c "%s %y" /var/lib/minikube/build/build.3327063776.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3327063776.tar': No such file or directory
I0120 15:11:35.584032  398992 ssh_runner.go:362] scp /tmp/build.3327063776.tar --> /var/lib/minikube/build/build.3327063776.tar (3072 bytes)
I0120 15:11:35.611730  398992 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3327063776
I0120 15:11:35.633561  398992 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3327063776 -xf /var/lib/minikube/build/build.3327063776.tar
I0120 15:11:35.643481  398992 containerd.go:394] Building image: /var/lib/minikube/build/build.3327063776
I0120 15:11:35.643571  398992 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3327063776 --local dockerfile=/var/lib/minikube/build/build.3327063776 --output type=image,name=localhost/my-image:functional-961919
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:a7a7c2bcd13947c1800fdad84df794dc6645ccc2c9b50639aeff80614eca6fcc done
#8 exporting config sha256:7352e827e1898086e66eb2f430f1fa35069bb1bac93445d20614fe431c201682 done
#8 naming to localhost/my-image:functional-961919 done
#8 DONE 0.1s
I0120 15:11:38.469820  398992 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3327063776 --local dockerfile=/var/lib/minikube/build/build.3327063776 --output type=image,name=localhost/my-image:functional-961919: (2.826207979s)
I0120 15:11:38.469919  398992 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3327063776
I0120 15:11:38.479040  398992 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3327063776.tar
I0120 15:11:38.487696  398992 build_images.go:217] Built localhost/my-image:functional-961919 from /tmp/build.3327063776.tar
I0120 15:11:38.487740  398992 build_images.go:133] succeeded building to: functional-961919
I0120 15:11:38.487748  398992 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.69s)

TestFunctional/parallel/ImageCommands/Setup (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.535366329s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-961919
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.56s)

TestFunctional/parallel/ServiceCmd/DeployApp (17.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-961919 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-961919 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-lsxt5" [011e6c9e-8b5e-4154-ab10-1e35f8ac04a0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-lsxt5" [011e6c9e-8b5e-4154-ab10-1e35f8ac04a0] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 17.014133607s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (17.18s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-961919 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-961919 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-961919 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-961919 tunnel --alsologtostderr] ...
E0120 15:11:07.213781  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:508: unable to kill pid 393266: os: process already finished
helpers_test.go:508: unable to kill pid 393090: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image load --daemon kicbase/echo-server:functional-961919 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-961919 image load --daemon kicbase/echo-server:functional-961919 --alsologtostderr: (1.606219231s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.85s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-961919 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.3s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-961919 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [bc0275fc-2e02-4c71-ae9e-db2dc2b7d2cb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [bc0275fc-2e02-4c71-ae9e-db2dc2b7d2cb] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 18.005483723s
I0120 15:11:25.635478  348924 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image load --daemon kicbase/echo-server:functional-961919 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.22s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-961919
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image load --daemon kicbase/echo-server:functional-961919 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-961919 image load --daemon kicbase/echo-server:functional-961919 --alsologtostderr: (1.074423969s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.99s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image save kicbase/echo-server:functional-961919 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image rm kicbase/echo-server:functional-961919 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-961919
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 image save --daemon kicbase/echo-server:functional-961919 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-961919
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 service list -o json
functional_test.go:1494: Took "510.928778ms" to run "out/minikube-linux-amd64 -p functional-961919 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30658
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/ServiceCmd/URL (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30658
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-961919 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.95.91 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-961919 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "322.015841ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "67.496012ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/MountCmd/any-port (7.88s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-961919 /tmp/TestFunctionalparallelMountCmdany-port4038017215/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737385887491929147" to /tmp/TestFunctionalparallelMountCmdany-port4038017215/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737385887491929147" to /tmp/TestFunctionalparallelMountCmdany-port4038017215/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737385887491929147" to /tmp/TestFunctionalparallelMountCmdany-port4038017215/001/test-1737385887491929147
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-961919 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (289.638356ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0120 15:11:27.781910  348924 retry.go:31] will retry after 543.791404ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 20 15:11 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 20 15:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 20 15:11 test-1737385887491929147
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh cat /mount-9p/test-1737385887491929147
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-961919 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5e774f5e-4bd4-4584-8720-e310a526f76b] Pending
helpers_test.go:344: "busybox-mount" [5e774f5e-4bd4-4584-8720-e310a526f76b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5e774f5e-4bd4-4584-8720-e310a526f76b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5e774f5e-4bd4-4584-8720-e310a526f76b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004476844s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-961919 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-961919 /tmp/TestFunctionalparallelMountCmdany-port4038017215/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.88s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "350.276186ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "51.291395ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/specific-port (2.08s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-961919 /tmp/TestFunctionalparallelMountCmdspecific-port547312353/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-961919 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (367.568443ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0120 15:11:35.741467  348924 retry.go:31] will retry after 523.696651ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-961919 /tmp/TestFunctionalparallelMountCmdspecific-port547312353/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-961919 ssh "sudo umount -f /mount-9p": exit status 1 (292.281997ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-961919 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-961919 /tmp/TestFunctionalparallelMountCmdspecific-port547312353/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.08s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.99s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-961919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1116464626/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-961919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1116464626/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-961919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1116464626/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-961919 ssh "findmnt -T" /mount1: exit status 1 (372.474092ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
I0120 15:11:37.830806  348924 retry.go:31] will retry after 711.344436ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-961919 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-961919 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-961919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1116464626/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-961919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1116464626/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-961919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1116464626/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2025/01/20 15:11:39 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.99s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-961919
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-961919
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-961919
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (94.41s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-941476 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0120 15:13:10.097449  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-941476 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m33.712451296s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (94.41s)

TestMultiControlPlane/serial/DeployApp (5.28s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-941476 -- rollout status deployment/busybox: (3.352824425s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- exec busybox-58667487b6-grrxk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- exec busybox-58667487b6-mppqj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- exec busybox-58667487b6-wvkgz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- exec busybox-58667487b6-grrxk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- exec busybox-58667487b6-mppqj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- exec busybox-58667487b6-wvkgz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- exec busybox-58667487b6-grrxk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- exec busybox-58667487b6-mppqj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- exec busybox-58667487b6-wvkgz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.28s)

TestMultiControlPlane/serial/PingHostFromPods (1.08s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- exec busybox-58667487b6-grrxk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- exec busybox-58667487b6-grrxk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- exec busybox-58667487b6-mppqj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- exec busybox-58667487b6-mppqj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- exec busybox-58667487b6-wvkgz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-941476 -- exec busybox-58667487b6-wvkgz -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.08s)

TestMultiControlPlane/serial/AddWorkerNode (21.79s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-941476 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-941476 -v=7 --alsologtostderr: (20.932003146s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.79s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-941476 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

TestMultiControlPlane/serial/CopyFile (16.24s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp testdata/cp-test.txt ha-941476:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp ha-941476:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2837934033/001/cp-test_ha-941476.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp ha-941476:/home/docker/cp-test.txt ha-941476-m02:/home/docker/cp-test_ha-941476_ha-941476-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m02 "sudo cat /home/docker/cp-test_ha-941476_ha-941476-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp ha-941476:/home/docker/cp-test.txt ha-941476-m03:/home/docker/cp-test_ha-941476_ha-941476-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m03 "sudo cat /home/docker/cp-test_ha-941476_ha-941476-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp ha-941476:/home/docker/cp-test.txt ha-941476-m04:/home/docker/cp-test_ha-941476_ha-941476-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m04 "sudo cat /home/docker/cp-test_ha-941476_ha-941476-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp testdata/cp-test.txt ha-941476-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp ha-941476-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2837934033/001/cp-test_ha-941476-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp ha-941476-m02:/home/docker/cp-test.txt ha-941476:/home/docker/cp-test_ha-941476-m02_ha-941476.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476 "sudo cat /home/docker/cp-test_ha-941476-m02_ha-941476.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp ha-941476-m02:/home/docker/cp-test.txt ha-941476-m03:/home/docker/cp-test_ha-941476-m02_ha-941476-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m03 "sudo cat /home/docker/cp-test_ha-941476-m02_ha-941476-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp ha-941476-m02:/home/docker/cp-test.txt ha-941476-m04:/home/docker/cp-test_ha-941476-m02_ha-941476-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m04 "sudo cat /home/docker/cp-test_ha-941476-m02_ha-941476-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp testdata/cp-test.txt ha-941476-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp ha-941476-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2837934033/001/cp-test_ha-941476-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp ha-941476-m03:/home/docker/cp-test.txt ha-941476:/home/docker/cp-test_ha-941476-m03_ha-941476.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476 "sudo cat /home/docker/cp-test_ha-941476-m03_ha-941476.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp ha-941476-m03:/home/docker/cp-test.txt ha-941476-m02:/home/docker/cp-test_ha-941476-m03_ha-941476-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m02 "sudo cat /home/docker/cp-test_ha-941476-m03_ha-941476-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp ha-941476-m03:/home/docker/cp-test.txt ha-941476-m04:/home/docker/cp-test_ha-941476-m03_ha-941476-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m04 "sudo cat /home/docker/cp-test_ha-941476-m03_ha-941476-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp testdata/cp-test.txt ha-941476-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp ha-941476-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2837934033/001/cp-test_ha-941476-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp ha-941476-m04:/home/docker/cp-test.txt ha-941476:/home/docker/cp-test_ha-941476-m04_ha-941476.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476 "sudo cat /home/docker/cp-test_ha-941476-m04_ha-941476.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp ha-941476-m04:/home/docker/cp-test.txt ha-941476-m02:/home/docker/cp-test_ha-941476-m04_ha-941476-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m02 "sudo cat /home/docker/cp-test_ha-941476-m04_ha-941476-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 cp ha-941476-m04:/home/docker/cp-test.txt ha-941476-m03:/home/docker/cp-test_ha-941476-m04_ha-941476-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 ssh -n ha-941476-m03 "sudo cat /home/docker/cp-test_ha-941476-m04_ha-941476-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.24s)

TestMultiControlPlane/serial/StopSecondaryNode (12.54s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-941476 node stop m02 -v=7 --alsologtostderr: (11.866900109s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-941476 status -v=7 --alsologtostderr: exit status 7 (670.790236ms)

-- stdout --
	ha-941476
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-941476-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-941476-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-941476-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0120 15:14:21.874220  421323 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:14:21.874333  421323 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:14:21.874341  421323 out.go:358] Setting ErrFile to fd 2...
	I0120 15:14:21.874345  421323 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:14:21.874553  421323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-341858/.minikube/bin
	I0120 15:14:21.874756  421323 out.go:352] Setting JSON to false
	I0120 15:14:21.874790  421323 mustload.go:65] Loading cluster: ha-941476
	I0120 15:14:21.874907  421323 notify.go:220] Checking for updates...
	I0120 15:14:21.875217  421323 config.go:182] Loaded profile config "ha-941476": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 15:14:21.875238  421323 status.go:174] checking status of ha-941476 ...
	I0120 15:14:21.875684  421323 cli_runner.go:164] Run: docker container inspect ha-941476 --format={{.State.Status}}
	I0120 15:14:21.897682  421323 status.go:371] ha-941476 host status = "Running" (err=<nil>)
	I0120 15:14:21.897720  421323 host.go:66] Checking if "ha-941476" exists ...
	I0120 15:14:21.898016  421323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-941476
	I0120 15:14:21.916696  421323 host.go:66] Checking if "ha-941476" exists ...
	I0120 15:14:21.916986  421323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 15:14:21.917024  421323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-941476
	I0120 15:14:21.935431  421323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/20109-341858/.minikube/machines/ha-941476/id_rsa Username:docker}
	I0120 15:14:22.029186  421323 ssh_runner.go:195] Run: systemctl --version
	I0120 15:14:22.033138  421323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 15:14:22.043324  421323 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 15:14:22.095067  421323 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:74 SystemTime:2025-01-20 15:14:22.084005086 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0120 15:14:22.095826  421323 kubeconfig.go:125] found "ha-941476" server: "https://192.168.49.254:8443"
	I0120 15:14:22.095866  421323 api_server.go:166] Checking apiserver status ...
	I0120 15:14:22.095911  421323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 15:14:22.107686  421323 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1525/cgroup
	I0120 15:14:22.118080  421323 api_server.go:182] apiserver freezer: "3:freezer:/docker/2114555afd225dcc5124a1ce7c4d59b94b177fe4d5d2bf22673caecb404b532c/kubepods/burstable/pod93a443d2b2683f46e6d70d10cbcbcef6/55ff929341e118c48fc822d102b9ea003b80cd86efc5ae8cb01310aeba74c7a5"
	I0120 15:14:22.118176  421323 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2114555afd225dcc5124a1ce7c4d59b94b177fe4d5d2bf22673caecb404b532c/kubepods/burstable/pod93a443d2b2683f46e6d70d10cbcbcef6/55ff929341e118c48fc822d102b9ea003b80cd86efc5ae8cb01310aeba74c7a5/freezer.state
	I0120 15:14:22.126712  421323 api_server.go:204] freezer state: "THAWED"
	I0120 15:14:22.126741  421323 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0120 15:14:22.130820  421323 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0120 15:14:22.130843  421323 status.go:463] ha-941476 apiserver status = Running (err=<nil>)
	I0120 15:14:22.130853  421323 status.go:176] ha-941476 status: &{Name:ha-941476 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 15:14:22.130872  421323 status.go:174] checking status of ha-941476-m02 ...
	I0120 15:14:22.131136  421323 cli_runner.go:164] Run: docker container inspect ha-941476-m02 --format={{.State.Status}}
	I0120 15:14:22.150046  421323 status.go:371] ha-941476-m02 host status = "Stopped" (err=<nil>)
	I0120 15:14:22.150065  421323 status.go:384] host is not running, skipping remaining checks
	I0120 15:14:22.150072  421323 status.go:176] ha-941476-m02 status: &{Name:ha-941476-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 15:14:22.150090  421323 status.go:174] checking status of ha-941476-m03 ...
	I0120 15:14:22.150348  421323 cli_runner.go:164] Run: docker container inspect ha-941476-m03 --format={{.State.Status}}
	I0120 15:14:22.167282  421323 status.go:371] ha-941476-m03 host status = "Running" (err=<nil>)
	I0120 15:14:22.167305  421323 host.go:66] Checking if "ha-941476-m03" exists ...
	I0120 15:14:22.167542  421323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-941476-m03
	I0120 15:14:22.185898  421323 host.go:66] Checking if "ha-941476-m03" exists ...
	I0120 15:14:22.186182  421323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 15:14:22.186229  421323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-941476-m03
	I0120 15:14:22.204036  421323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/20109-341858/.minikube/machines/ha-941476-m03/id_rsa Username:docker}
	I0120 15:14:22.293511  421323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 15:14:22.304724  421323 kubeconfig.go:125] found "ha-941476" server: "https://192.168.49.254:8443"
	I0120 15:14:22.304750  421323 api_server.go:166] Checking apiserver status ...
	I0120 15:14:22.304785  421323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 15:14:22.314665  421323 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1478/cgroup
	I0120 15:14:22.323198  421323 api_server.go:182] apiserver freezer: "3:freezer:/docker/d1502f3922345f5087f4a09339f5635d1d1958c5e55cfcc33be25d44a4254a50/kubepods/burstable/pod87c679e816604cf057ace6129c3438e3/42521a3d642ce257cd249786a5cf21d3929159ec865e34576649ac45c0151510"
	I0120 15:14:22.323253  421323 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d1502f3922345f5087f4a09339f5635d1d1958c5e55cfcc33be25d44a4254a50/kubepods/burstable/pod87c679e816604cf057ace6129c3438e3/42521a3d642ce257cd249786a5cf21d3929159ec865e34576649ac45c0151510/freezer.state
	I0120 15:14:22.331109  421323 api_server.go:204] freezer state: "THAWED"
	I0120 15:14:22.331150  421323 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0120 15:14:22.335259  421323 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0120 15:14:22.335286  421323 status.go:463] ha-941476-m03 apiserver status = Running (err=<nil>)
	I0120 15:14:22.335297  421323 status.go:176] ha-941476-m03 status: &{Name:ha-941476-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 15:14:22.335319  421323 status.go:174] checking status of ha-941476-m04 ...
	I0120 15:14:22.335555  421323 cli_runner.go:164] Run: docker container inspect ha-941476-m04 --format={{.State.Status}}
	I0120 15:14:22.354220  421323 status.go:371] ha-941476-m04 host status = "Running" (err=<nil>)
	I0120 15:14:22.354244  421323 host.go:66] Checking if "ha-941476-m04" exists ...
	I0120 15:14:22.354503  421323 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-941476-m04
	I0120 15:14:22.372198  421323 host.go:66] Checking if "ha-941476-m04" exists ...
	I0120 15:14:22.372473  421323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 15:14:22.372521  421323 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-941476-m04
	I0120 15:14:22.389779  421323 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/20109-341858/.minikube/machines/ha-941476-m04/id_rsa Username:docker}
	I0120 15:14:22.481325  421323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 15:14:22.492014  421323 status.go:176] ha-941476-m04 status: &{Name:ha-941476-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.54s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

TestMultiControlPlane/serial/RestartSecondaryNode (15.37s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-941476 node start m02 -v=7 --alsologtostderr: (14.450782129s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (15.37s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (122.82s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-941476 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-941476 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-941476 -v=7 --alsologtostderr: (36.751782699s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-941476 --wait=true -v=7 --alsologtostderr
E0120 15:15:26.231360  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:15:53.938816  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:16:05.152379  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:16:05.158775  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:16:05.170171  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:16:05.191556  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:16:05.233003  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:16:05.314483  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:16:05.476789  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:16:05.798240  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:16:06.440451  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:16:07.722543  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:16:10.284124  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:16:15.406304  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 15:16:25.648024  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-941476 --wait=true -v=7 --alsologtostderr: (1m25.957601724s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-941476
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (122.82s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.2s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 node delete m03 -v=7 --alsologtostderr
E0120 15:16:46.129980  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-941476 node delete m03 -v=7 --alsologtostderr: (8.430778832s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.20s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

TestMultiControlPlane/serial/StopCluster (35.77s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 stop -v=7 --alsologtostderr
E0120 15:17:27.091936  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-941476 stop -v=7 --alsologtostderr: (35.664029894s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-941476 status -v=7 --alsologtostderr: exit status 7 (109.325452ms)

-- stdout --
	ha-941476
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-941476-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-941476-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0120 15:17:27.812849  438395 out.go:345] Setting OutFile to fd 1 ...
	I0120 15:17:27.812982  438395 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:17:27.812994  438395 out.go:358] Setting ErrFile to fd 2...
	I0120 15:17:27.812999  438395 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 15:17:27.813200  438395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-341858/.minikube/bin
	I0120 15:17:27.813429  438395 out.go:352] Setting JSON to false
	I0120 15:17:27.813469  438395 mustload.go:65] Loading cluster: ha-941476
	I0120 15:17:27.813504  438395 notify.go:220] Checking for updates...
	I0120 15:17:27.813938  438395 config.go:182] Loaded profile config "ha-941476": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 15:17:27.813965  438395 status.go:174] checking status of ha-941476 ...
	I0120 15:17:27.814443  438395 cli_runner.go:164] Run: docker container inspect ha-941476 --format={{.State.Status}}
	I0120 15:17:27.833361  438395 status.go:371] ha-941476 host status = "Stopped" (err=<nil>)
	I0120 15:17:27.833384  438395 status.go:384] host is not running, skipping remaining checks
	I0120 15:17:27.833391  438395 status.go:176] ha-941476 status: &{Name:ha-941476 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 15:17:27.833433  438395 status.go:174] checking status of ha-941476-m02 ...
	I0120 15:17:27.833779  438395 cli_runner.go:164] Run: docker container inspect ha-941476-m02 --format={{.State.Status}}
	I0120 15:17:27.851392  438395 status.go:371] ha-941476-m02 host status = "Stopped" (err=<nil>)
	I0120 15:17:27.851414  438395 status.go:384] host is not running, skipping remaining checks
	I0120 15:17:27.851422  438395 status.go:176] ha-941476-m02 status: &{Name:ha-941476-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 15:17:27.851446  438395 status.go:174] checking status of ha-941476-m04 ...
	I0120 15:17:27.851681  438395 cli_runner.go:164] Run: docker container inspect ha-941476-m04 --format={{.State.Status}}
	I0120 15:17:27.868758  438395 status.go:371] ha-941476-m04 host status = "Stopped" (err=<nil>)
	I0120 15:17:27.868806  438395 status.go:384] host is not running, skipping remaining checks
	I0120 15:17:27.868818  438395 status.go:176] ha-941476-m04 status: &{Name:ha-941476-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.77s)

TestMultiControlPlane/serial/RestartCluster (80.91s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-941476 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-941476 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m20.135319826s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (80.91s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0120 15:18:49.013846  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

TestMultiControlPlane/serial/AddSecondaryNode (39.74s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-941476 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-941476 --control-plane -v=7 --alsologtostderr: (38.898320277s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-941476 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.74s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-410327 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-410327 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (67.584898ms)

-- stdout --
	{"specversion":"1.0","id":"f372a439-d788-43d7-b3f2-0902232efe5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-410327] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8144af1c-8bae-435e-93e4-be747a06f9ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20109"}}
	{"specversion":"1.0","id":"4f0946c3-6731-4f0d-a210-7f4a8bd22688","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8b3b0a54-6343-4f65-bfc2-4d61a62f7d11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20109-341858/kubeconfig"}}
	{"specversion":"1.0","id":"6b2c9fdd-75ac-48fa-a1ef-b7059fbba12e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-341858/.minikube"}}
	{"specversion":"1.0","id":"1d45b719-d58b-43a4-88e8-2c8cf7759e34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"074c310e-dfae-4644-a50c-5cc4c4faf2fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"72cc7a16-be79-4efe-a4ae-b9be21f8d70a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-410327" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-410327
--- PASS: TestErrorJSONOutput (0.21s)
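Each line in the stdout capture above is a CloudEvents envelope (`specversion`, `id`, `source`, `type`, `data`). As a minimal sketch of consuming that stream, the snippet below decodes the error event from the capture; the `cloudEvent` struct and `parseEvent` helper are illustrative names, not minikube's own types.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent mirrors the envelope fields visible in the stdout capture above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

// parseEvent decodes one JSON line emitted by `minikube start --output=json`.
func parseEvent(line string) (cloudEvent, error) {
	var ev cloudEvent
	err := json.Unmarshal([]byte(line), &ev)
	return ev, err
}

func main() {
	// The error event from the capture above, verbatim.
	line := `{"specversion":"1.0","id":"72cc7a16-be79-4efe-a4ae-b9be21f8d70a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	ev, err := parseEvent(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(ev.Type)             // io.k8s.sigs.minikube.error
	fmt.Println(ev.Data["name"])     // DRV_UNSUPPORTED_OS
	fmt.Println(ev.Data["exitcode"]) // 56
}
```

The exit code travels both in the event payload and as the process exit status (56), which is what the test asserts on.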

TestKicCustomNetwork/create_custom_network (27.51s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-362130 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-362130 --network=: (25.394928919s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-362130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-362130
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-362130: (2.096179548s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.51s)

TestKicCustomNetwork/use_default_bridge_network (24.98s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-437704 --network=bridge
E0120 16:00:09.306394  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:00:26.230870  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-437704 --network=bridge: (23.052586938s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-437704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-437704
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-437704: (1.905666972s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.98s)

TestKicExistingNetwork (25.12s)

=== RUN   TestKicExistingNetwork
I0120 16:00:32.626729  348924 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0120 16:00:32.644177  348924 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0120 16:00:32.644275  348924 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0120 16:00:32.644305  348924 cli_runner.go:164] Run: docker network inspect existing-network
W0120 16:00:32.660916  348924 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0120 16:00:32.660955  348924 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0120 16:00:32.660968  348924 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0120 16:00:32.661087  348924 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0120 16:00:32.679186  348924 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2f80cc0228cb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:8e:74:6b:19} reservation:<nil>}
I0120 16:00:32.679769  348924 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001529890}
I0120 16:00:32.679805  348924 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0120 16:00:32.679858  348924 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0120 16:00:32.742350  348924 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-377131 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-377131 --network=existing-network: (23.036266955s)
helpers_test.go:175: Cleaning up "existing-network-377131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-377131
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-377131: (1.932682967s)
I0120 16:00:57.728537  348924 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.12s)
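The log above shows minikube skipping 192.168.49.0/24 (taken by the default kic bridge) and settling on 192.168.58.0/24. A rough sketch of that free-subnet scan is below; the step of 9 between candidates is inferred from the two subnets the log shows, and `freeSubnet` is an illustrative helper, not minikube's actual `network.go` implementation.

```go
package main

import "fmt"

// freeSubnet returns the first candidate private /24 that is not already
// taken, mimicking the "skipping subnet ... that is taken" / "using free
// private subnet ..." lines in the log above. The step of 9 in the third
// octet is an assumption read off the log (49 -> 58), not verified source.
func freeSubnet(taken map[string]bool) string {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return "" // no free candidate
}

func main() {
	// The default kic network already occupies 192.168.49.0/24.
	taken := map[string]bool{"192.168.49.0/24": true}
	fmt.Println(freeSubnet(taken)) // 192.168.58.0/24
}
```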

TestKicCustomSubnet (25.59s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-062555 --subnet=192.168.60.0/24
E0120 16:01:05.158421  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-062555 --subnet=192.168.60.0/24: (23.558984697s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-062555 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-062555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-062555
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-062555: (2.014675631s)
--- PASS: TestKicCustomSubnet (25.59s)

TestKicStaticIP (25.82s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-339704 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-339704 --static-ip=192.168.200.200: (23.58662083s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-339704 ip
helpers_test.go:175: Cleaning up "static-ip-339704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-339704
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-339704: (2.099891415s)
--- PASS: TestKicStaticIP (25.82s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (53.21s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-314642 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-314642 --driver=docker  --container-runtime=containerd: (24.006999929s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-338470 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-338470 --driver=docker  --container-runtime=containerd: (23.996134935s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-314642
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-338470
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-338470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-338470
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-338470: (1.878915262s)
helpers_test.go:175: Cleaning up "first-314642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-314642
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-314642: (2.166223153s)
--- PASS: TestMinikubeProfile (53.21s)

TestMountStart/serial/StartWithMountFirst (5.38s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-573084 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-573084 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.382597659s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.38s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-573084 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (8.19s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-590730 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-590730 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.19383098s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.19s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-590730 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.58s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-573084 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-573084 --alsologtostderr -v=5: (1.580500668s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-590730 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-590730
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-590730: (1.171962755s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (6.92s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-590730
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-590730: (5.918298006s)
--- PASS: TestMountStart/serial/RestartStopped (6.92s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-590730 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (58.69s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-554574 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-554574 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (58.238145446s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (58.69s)

TestMultiNode/serial/DeployApp2Nodes (15.63s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-554574 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-554574 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-554574 -- rollout status deployment/busybox: (14.248635412s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-554574 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-554574 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-554574 -- exec busybox-58667487b6-8rshw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-554574 -- exec busybox-58667487b6-mfdgz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-554574 -- exec busybox-58667487b6-8rshw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-554574 -- exec busybox-58667487b6-mfdgz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-554574 -- exec busybox-58667487b6-8rshw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-554574 -- exec busybox-58667487b6-mfdgz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (15.63s)

TestMultiNode/serial/PingHostFrom2Pods (0.72s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-554574 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-554574 -- exec busybox-58667487b6-8rshw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-554574 -- exec busybox-58667487b6-8rshw -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-554574 -- exec busybox-58667487b6-mfdgz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-554574 -- exec busybox-58667487b6-mfdgz -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)
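The shell pipeline the test runs inside each pod, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, extracts the host gateway IP: the 5th line of busybox nslookup output, 3rd space-separated field. A small sketch of that extraction in Go, with the canned nslookup output being an assumption for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// hostIP reproduces the pipeline from the test above (awk 'NR==5' plus
// cut -d' ' -f3): take the 5th line of the nslookup output and return its
// 3rd field, splitting on single spaces exactly as cut does.
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Canned busybox-style nslookup output (illustrative, not captured
	// from the run above); the gateway IP matches the ping target in the log.
	out := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.67.1\n"
	fmt.Println(hostIP(out)) // 192.168.67.1
}
```

The extracted address is then the target of the `ping -c 1 192.168.67.1` step that follows in the log.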

TestMultiNode/serial/AddNode (17.26s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-554574 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-554574 -v 3 --alsologtostderr: (16.643975917s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.26s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-554574 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.63s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

TestMultiNode/serial/CopyFile (9.22s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 cp testdata/cp-test.txt multinode-554574:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 cp multinode-554574:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1106427915/001/cp-test_multinode-554574.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 cp multinode-554574:/home/docker/cp-test.txt multinode-554574-m02:/home/docker/cp-test_multinode-554574_multinode-554574-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574-m02 "sudo cat /home/docker/cp-test_multinode-554574_multinode-554574-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 cp multinode-554574:/home/docker/cp-test.txt multinode-554574-m03:/home/docker/cp-test_multinode-554574_multinode-554574-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574-m03 "sudo cat /home/docker/cp-test_multinode-554574_multinode-554574-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 cp testdata/cp-test.txt multinode-554574-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 cp multinode-554574-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1106427915/001/cp-test_multinode-554574-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 cp multinode-554574-m02:/home/docker/cp-test.txt multinode-554574:/home/docker/cp-test_multinode-554574-m02_multinode-554574.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574 "sudo cat /home/docker/cp-test_multinode-554574-m02_multinode-554574.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 cp multinode-554574-m02:/home/docker/cp-test.txt multinode-554574-m03:/home/docker/cp-test_multinode-554574-m02_multinode-554574-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574-m03 "sudo cat /home/docker/cp-test_multinode-554574-m02_multinode-554574-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 cp testdata/cp-test.txt multinode-554574-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 cp multinode-554574-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1106427915/001/cp-test_multinode-554574-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 cp multinode-554574-m03:/home/docker/cp-test.txt multinode-554574:/home/docker/cp-test_multinode-554574-m03_multinode-554574.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574 "sudo cat /home/docker/cp-test_multinode-554574-m03_multinode-554574.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 cp multinode-554574-m03:/home/docker/cp-test.txt multinode-554574-m02:/home/docker/cp-test_multinode-554574-m03_multinode-554574-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 ssh -n multinode-554574-m02 "sudo cat /home/docker/cp-test_multinode-554574-m03_multinode-554574-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.22s)

TestMultiNode/serial/StopNode (2.13s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-554574 node stop m03: (1.184186964s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-554574 status: exit status 7 (477.427723ms)
-- stdout --
	multinode-554574
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-554574-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-554574-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-554574 status --alsologtostderr: exit status 7 (471.21727ms)
-- stdout --
	multinode-554574
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-554574-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-554574-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0120 16:04:52.345217  511337 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:04:52.345349  511337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:04:52.345358  511337 out.go:358] Setting ErrFile to fd 2...
	I0120 16:04:52.345362  511337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:04:52.345560  511337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-341858/.minikube/bin
	I0120 16:04:52.345732  511337 out.go:352] Setting JSON to false
	I0120 16:04:52.345767  511337 mustload.go:65] Loading cluster: multinode-554574
	I0120 16:04:52.345853  511337 notify.go:220] Checking for updates...
	I0120 16:04:52.346288  511337 config.go:182] Loaded profile config "multinode-554574": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 16:04:52.346320  511337 status.go:174] checking status of multinode-554574 ...
	I0120 16:04:52.346797  511337 cli_runner.go:164] Run: docker container inspect multinode-554574 --format={{.State.Status}}
	I0120 16:04:52.365258  511337 status.go:371] multinode-554574 host status = "Running" (err=<nil>)
	I0120 16:04:52.365290  511337 host.go:66] Checking if "multinode-554574" exists ...
	I0120 16:04:52.365548  511337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-554574
	I0120 16:04:52.382966  511337 host.go:66] Checking if "multinode-554574" exists ...
	I0120 16:04:52.383278  511337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 16:04:52.383318  511337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-554574
	I0120 16:04:52.403346  511337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/20109-341858/.minikube/machines/multinode-554574/id_rsa Username:docker}
	I0120 16:04:52.497347  511337 ssh_runner.go:195] Run: systemctl --version
	I0120 16:04:52.501426  511337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 16:04:52.512352  511337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 16:04:52.560101  511337 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:64 SystemTime:2025-01-20 16:04:52.550541336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0120 16:04:52.560708  511337 kubeconfig.go:125] found "multinode-554574" server: "https://192.168.67.2:8443"
	I0120 16:04:52.560738  511337 api_server.go:166] Checking apiserver status ...
	I0120 16:04:52.560769  511337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0120 16:04:52.571890  511337 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup
	I0120 16:04:52.580806  511337 api_server.go:182] apiserver freezer: "3:freezer:/docker/30e74d77b98534ca04e75f6169ee9d810c087a3fb6909bc3c479a3e4304e912b/kubepods/burstable/podb6224693060553c19cf2c6f8601877b9/ef60ff64aab3173fa82de1cb0a17c5a5c9dd6a4c70eafbd3dae744f635ddaa4e"
	I0120 16:04:52.580883  511337 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/30e74d77b98534ca04e75f6169ee9d810c087a3fb6909bc3c479a3e4304e912b/kubepods/burstable/podb6224693060553c19cf2c6f8601877b9/ef60ff64aab3173fa82de1cb0a17c5a5c9dd6a4c70eafbd3dae744f635ddaa4e/freezer.state
	I0120 16:04:52.588912  511337 api_server.go:204] freezer state: "THAWED"
	I0120 16:04:52.588937  511337 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0120 16:04:52.592942  511337 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0120 16:04:52.592967  511337 status.go:463] multinode-554574 apiserver status = Running (err=<nil>)
	I0120 16:04:52.592977  511337 status.go:176] multinode-554574 status: &{Name:multinode-554574 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 16:04:52.592994  511337 status.go:174] checking status of multinode-554574-m02 ...
	I0120 16:04:52.593238  511337 cli_runner.go:164] Run: docker container inspect multinode-554574-m02 --format={{.State.Status}}
	I0120 16:04:52.611449  511337 status.go:371] multinode-554574-m02 host status = "Running" (err=<nil>)
	I0120 16:04:52.611478  511337 host.go:66] Checking if "multinode-554574-m02" exists ...
	I0120 16:04:52.611732  511337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-554574-m02
	I0120 16:04:52.628982  511337 host.go:66] Checking if "multinode-554574-m02" exists ...
	I0120 16:04:52.629312  511337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0120 16:04:52.629363  511337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-554574-m02
	I0120 16:04:52.646573  511337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/20109-341858/.minikube/machines/multinode-554574-m02/id_rsa Username:docker}
	I0120 16:04:52.736924  511337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0120 16:04:52.747142  511337 status.go:176] multinode-554574-m02 status: &{Name:multinode-554574-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0120 16:04:52.747182  511337 status.go:174] checking status of multinode-554574-m03 ...
	I0120 16:04:52.747460  511337 cli_runner.go:164] Run: docker container inspect multinode-554574-m03 --format={{.State.Status}}
	I0120 16:04:52.764596  511337 status.go:371] multinode-554574-m03 host status = "Stopped" (err=<nil>)
	I0120 16:04:52.764621  511337 status.go:384] host is not running, skipping remaining checks
	I0120 16:04:52.764637  511337 status.go:176] multinode-554574-m03 status: &{Name:multinode-554574-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)

TestMultiNode/serial/StartAfterStop (8.52s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-554574 node start m03 -v=7 --alsologtostderr: (7.874003046s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.52s)

TestMultiNode/serial/RestartKeepsNodes (81.44s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-554574
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-554574
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-554574: (24.705932371s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-554574 --wait=true -v=8 --alsologtostderr
E0120 16:05:26.231185  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:05:48.226399  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:06:05.151828  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-554574 --wait=true -v=8 --alsologtostderr: (56.634220753s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-554574
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.44s)

TestMultiNode/serial/DeleteNode (4.99s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-554574 node delete m03: (4.411797657s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.99s)

TestMultiNode/serial/StopMultiNode (23.83s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-554574 stop: (23.644611785s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-554574 status: exit status 7 (91.994959ms)
-- stdout --
	multinode-554574
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-554574-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-554574 status --alsologtostderr: exit status 7 (88.417964ms)
-- stdout --
	multinode-554574
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-554574-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0120 16:06:51.509209  521076 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:06:51.509459  521076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:06:51.509468  521076 out.go:358] Setting ErrFile to fd 2...
	I0120 16:06:51.509472  521076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:06:51.509690  521076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-341858/.minikube/bin
	I0120 16:06:51.509870  521076 out.go:352] Setting JSON to false
	I0120 16:06:51.509905  521076 mustload.go:65] Loading cluster: multinode-554574
	I0120 16:06:51.509978  521076 notify.go:220] Checking for updates...
	I0120 16:06:51.510344  521076 config.go:182] Loaded profile config "multinode-554574": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 16:06:51.510365  521076 status.go:174] checking status of multinode-554574 ...
	I0120 16:06:51.510816  521076 cli_runner.go:164] Run: docker container inspect multinode-554574 --format={{.State.Status}}
	I0120 16:06:51.528967  521076 status.go:371] multinode-554574 host status = "Stopped" (err=<nil>)
	I0120 16:06:51.528995  521076 status.go:384] host is not running, skipping remaining checks
	I0120 16:06:51.529003  521076 status.go:176] multinode-554574 status: &{Name:multinode-554574 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0120 16:06:51.529034  521076 status.go:174] checking status of multinode-554574-m02 ...
	I0120 16:06:51.529371  521076 cli_runner.go:164] Run: docker container inspect multinode-554574-m02 --format={{.State.Status}}
	I0120 16:06:51.546338  521076 status.go:371] multinode-554574-m02 host status = "Stopped" (err=<nil>)
	I0120 16:06:51.546384  521076 status.go:384] host is not running, skipping remaining checks
	I0120 16:06:51.546397  521076 status.go:176] multinode-554574-m02 status: &{Name:multinode-554574-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.83s)

TestMultiNode/serial/RestartMultiNode (51.69s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-554574 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-554574 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.117549793s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-554574 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.69s)

TestMultiNode/serial/ValidateNameConflict (22.84s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-554574
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-554574-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-554574-m02 --driver=docker  --container-runtime=containerd: exit status 14 (68.97598ms)
-- stdout --
	* [multinode-554574-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-341858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-341858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-554574-m02' is duplicated with machine name 'multinode-554574-m02' in profile 'multinode-554574'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-554574-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-554574-m03 --driver=docker  --container-runtime=containerd: (20.573653747s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-554574
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-554574: exit status 80 (278.881039ms)
-- stdout --
	* Adding node m03 to cluster multinode-554574 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-554574-m03 already exists in multinode-554574-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-554574-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-554574-m03: (1.861021075s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.84s)

TestPreload (107.67s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-076824 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-076824 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m14.312761199s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-076824 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-076824 image pull gcr.io/k8s-minikube/busybox: (1.71315327s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-076824
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-076824: (11.90983708s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-076824 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-076824 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (17.042287257s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-076824 image list
helpers_test.go:175: Cleaning up "test-preload-076824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-076824
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-076824: (2.453392763s)
--- PASS: TestPreload (107.67s)

TestScheduledStopUnix (98.67s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-221894 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-221894 --memory=2048 --driver=docker  --container-runtime=containerd: (23.016174542s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-221894 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-221894 -n scheduled-stop-221894
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-221894 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0120 16:10:21.127380  348924 retry.go:31] will retry after 81.564µs: open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/scheduled-stop-221894/pid: no such file or directory
I0120 16:10:21.128543  348924 retry.go:31] will retry after 149.367µs: open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/scheduled-stop-221894/pid: no such file or directory
I0120 16:10:21.129689  348924 retry.go:31] will retry after 148.548µs: open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/scheduled-stop-221894/pid: no such file or directory
I0120 16:10:21.130835  348924 retry.go:31] will retry after 447.417µs: open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/scheduled-stop-221894/pid: no such file or directory
I0120 16:10:21.131987  348924 retry.go:31] will retry after 271.99µs: open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/scheduled-stop-221894/pid: no such file or directory
I0120 16:10:21.133153  348924 retry.go:31] will retry after 729.154µs: open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/scheduled-stop-221894/pid: no such file or directory
I0120 16:10:21.134291  348924 retry.go:31] will retry after 1.658976ms: open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/scheduled-stop-221894/pid: no such file or directory
I0120 16:10:21.136502  348924 retry.go:31] will retry after 1.63039ms: open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/scheduled-stop-221894/pid: no such file or directory
I0120 16:10:21.138732  348924 retry.go:31] will retry after 3.203321ms: open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/scheduled-stop-221894/pid: no such file or directory
I0120 16:10:21.142938  348924 retry.go:31] will retry after 3.01279ms: open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/scheduled-stop-221894/pid: no such file or directory
I0120 16:10:21.146062  348924 retry.go:31] will retry after 8.056276ms: open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/scheduled-stop-221894/pid: no such file or directory
I0120 16:10:21.154225  348924 retry.go:31] will retry after 6.478998ms: open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/scheduled-stop-221894/pid: no such file or directory
I0120 16:10:21.161514  348924 retry.go:31] will retry after 11.460591ms: open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/scheduled-stop-221894/pid: no such file or directory
I0120 16:10:21.173743  348924 retry.go:31] will retry after 15.142593ms: open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/scheduled-stop-221894/pid: no such file or directory
I0120 16:10:21.190012  348924 retry.go:31] will retry after 19.41033ms: open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/scheduled-stop-221894/pid: no such file or directory
I0120 16:10:21.210318  348924 retry.go:31] will retry after 54.093238ms: open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/scheduled-stop-221894/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-221894 --cancel-scheduled
E0120 16:10:26.231412  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-221894 -n scheduled-stop-221894
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-221894
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-221894 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0120 16:11:05.151525  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-221894
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-221894: exit status 7 (72.382855ms)
-- stdout --
	scheduled-stop-221894
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-221894 -n scheduled-stop-221894
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-221894 -n scheduled-stop-221894: exit status 7 (69.307457ms)
-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-221894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-221894
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-221894: (4.259068415s)
--- PASS: TestScheduledStopUnix (98.67s)
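The trace above hints at the mechanism: `stop --schedule 15s` backgrounds a stop process and records its pid under the profile directory (hence the pid-file retry at the top, and the `os: process already finished` note once that process has exited), while `stop --cancel-scheduled` apparently looks up that pid to signal the process. A toy sketch of such a pid-file handshake, not minikube's actual implementation, with a `sleep` child standing in for the scheduled-stop process:

```python
import os
import signal
import subprocess
import tempfile

# Toy stand-in for the scheduled-stop daemon: a sleeping child process.
child = subprocess.Popen(["sleep", "60"])

# Record its pid, as minikube does under <profile dir>/pid.
pid_file = os.path.join(tempfile.mkdtemp(), "pid")
with open(pid_file, "w") as f:
    f.write(str(child.pid))

# "Cancel": read the recorded pid back and signal the process.
with open(pid_file) as f:
    pid = int(f.read())
os.kill(pid, signal.SIGTERM)
child.wait()
print("cancelled scheduled stop, pid", pid)
```

If the process already exited on its own, the `os.kill` would instead raise `ProcessLookupError`, which matches the "process already finished" message in the log.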

TestInsufficientStorage (9.92s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-544703 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-544703 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.548437774s)

-- stdout --
	{"specversion":"1.0","id":"064f076a-02e5-4082-848a-1d6039ff43f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-544703] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0eafee60-9c89-4e44-be04-2ef29c34fe00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20109"}}
	{"specversion":"1.0","id":"747ba6db-3cdc-4fdf-9b98-3a2dddf08734","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cbf9e148-030d-4408-8489-2ceba040917c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20109-341858/kubeconfig"}}
	{"specversion":"1.0","id":"472792f7-b4f0-4820-81a0-96ae21730743","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-341858/.minikube"}}
	{"specversion":"1.0","id":"1f437a1e-8140-4bec-9b15-51212ef0e6d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0772f23f-9c90-4268-8efc-a11747e84894","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4518c8d2-6c66-4c26-af57-dc9e3c757e2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f24d7c39-6887-4779-bfd7-664690420452","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"33901ebe-2b2b-4cdf-b9cf-43b11ecbae38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0ad4d115-b05f-4658-b7ac-50b0319c2836","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"109cb8e2-337e-4621-9f16-e6d90826d550","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-544703\" primary control-plane node in \"insufficient-storage-544703\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c6ba5e56-6f30-4de0-b65a-13939fd68d39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"19e35d22-2ed5-4823-a1c8-d2844f5abc0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8754ad36-88ec-4d1d-b9e7-347576ae35a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
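With `--output=json`, every stdout line above is a CloudEvents 1.0 envelope whose `data` payload varies by event type (`io.k8s.sigs.minikube.step`, `.info`, `.error`). A minimal sketch of decoding the final error event; the fields are taken verbatim from the log above except the `id`, which is abridged:

```python
import json

# One line of `minikube start --output=json` output, abridged from the log.
line = (
    '{"specversion":"1.0","id":"8754ad36","source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json",'
    '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE",'
    '"message":"Docker is out of disk space! (/var is at 100% of capacity). '
    'You can pass \'--force\' to skip this check."}}'
)

event = json.loads(line)
# Error events carry the process exit code as a string inside "data".
if event["type"] == "io.k8s.sigs.minikube.error":
    exit_code = int(event["data"]["exitcode"])
    print(event["data"]["name"], exit_code)
```

The `exitcode` of 26 in the error event is the same value the test sees as the process exit status (`exit status 26`).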
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-544703 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-544703 --output=json --layout=cluster: exit status 7 (264.930651ms)

-- stdout --
	{"Name":"insufficient-storage-544703","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-544703","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0120 16:11:44.167541  544131 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-544703" does not appear in /home/jenkins/minikube-integration/20109-341858/kubeconfig

** /stderr **
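The `--layout=cluster` payload nests per-node component states under numeric status codes (507 InsufficientStorage, 405 Stopped, 500 Error in the output above). A small sketch, using only fields visible in this log, of flattening that structure into a health summary:

```python
import json

# Abridged from the `status --output=json --layout=cluster` stdout above.
status = json.loads(
    '{"Name":"insufficient-storage-544703","StatusCode":507,'
    '"StatusName":"InsufficientStorage",'
    '"Nodes":[{"Name":"insufficient-storage-544703","StatusCode":507,'
    '"StatusName":"InsufficientStorage",'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

# Collect (node, component, state) triples from the nested layout.
summary = [
    (node["Name"], comp, c["StatusName"])
    for node in status["Nodes"]
    for comp, c in node["Components"].items()
]
print(status["StatusName"], summary)
```

Note the second `status` call above produces the same shape minus the `Step`/`StepDetail` fields, so a consumer should treat those as optional.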
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-544703 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-544703 --output=json --layout=cluster: exit status 7 (278.310104ms)

-- stdout --
	{"Name":"insufficient-storage-544703","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-544703","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0120 16:11:44.446637  544230 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-544703" does not appear in /home/jenkins/minikube-integration/20109-341858/kubeconfig
	E0120 16:11:44.456775  544230 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/insufficient-storage-544703/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-544703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-544703
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-544703: (1.831845075s)
--- PASS: TestInsufficientStorage (9.92s)

TestRunningBinaryUpgrade (74.48s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade


=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2105801602 start -p running-upgrade-759123 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2105801602 start -p running-upgrade-759123 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (30.429826332s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-759123 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-759123 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.518053703s)
helpers_test.go:175: Cleaning up "running-upgrade-759123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-759123
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-759123: (7.108173174s)
--- PASS: TestRunningBinaryUpgrade (74.48s)

TestKubernetesUpgrade (322.45s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-183786 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-183786 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.934965359s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-183786
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-183786: (4.602742038s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-183786 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-183786 status --format={{.Host}}: exit status 7 (96.464657ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-183786 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-183786 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m29.79332326s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-183786 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-183786 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-183786 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (96.466184ms)

-- stdout --
	* [kubernetes-upgrade-183786] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-341858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-341858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-183786
	    minikube start -p kubernetes-upgrade-183786 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1837862 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.0, by running:
	    
	    minikube start -p kubernetes-upgrade-183786 --kubernetes-version=v1.32.0
	    

** /stderr **
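The exit-106 refusal above comes from comparing the requested version against the cluster's existing one before anything is started. As an illustration only, not minikube's actual implementation, the guard can be sketched as a semantic-version comparison:

```python
def parse(v: str) -> tuple:
    # "v1.32.0" -> (1, 32, 0); tuple comparison then orders versions.
    return tuple(int(p) for p in v.lstrip("v").split("."))

def check_downgrade(current: str, requested: str):
    # Hypothetical guard mirroring the K8S_DOWNGRADE_UNSUPPORTED
    # message in the stderr above; returns an error string or None.
    if parse(requested) < parse(current):
        return (f"Unable to safely downgrade existing Kubernetes "
                f"{current} cluster to {requested}")
    return None

print(check_downgrade("v1.32.0", "v1.20.0"))  # refused, as in the log
```

Re-requesting the current version, as the next step does, passes the guard, which is why the follow-up start at v1.32.0 completes in ~5s.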
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-183786 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-183786 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.285501249s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-183786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-183786
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-183786: (2.566603651s)
--- PASS: TestKubernetesUpgrade (322.45s)

TestMissingContainerUpgrade (160.75s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade


=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2274430400 start -p missing-upgrade-363307 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2274430400 start -p missing-upgrade-363307 --memory=2200 --driver=docker  --container-runtime=containerd: (1m29.24077096s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-363307
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-363307: (13.047907402s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-363307
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-363307 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-363307 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (55.798308848s)
helpers_test.go:175: Cleaning up "missing-upgrade-363307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-363307
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-363307: (2.192012443s)
--- PASS: TestMissingContainerUpgrade (160.75s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-521728 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-521728 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (80.137588ms)

-- stdout --
	* [NoKubernetes-521728] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-341858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-341858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
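Exit status 14 (MK_USAGE) is a pure flag-validation failure, which is why the command returns in ~80ms without ever touching the driver. A hedged sketch of the mutual-exclusion rule, not minikube's actual code:

```python
def validate_flags(no_kubernetes: bool, kubernetes_version):
    # Illustrative only: --kubernetes-version is meaningless when
    # Kubernetes itself is disabled, so the combination is rejected,
    # echoing the MK_USAGE message in the stderr above.
    if no_kubernetes and kubernetes_version:
        return "cannot specify --kubernetes-version with --no-kubernetes"
    return None

print(validate_flags(True, "1.20"))  # rejected, as in this subtest
print(validate_flags(True, None))    # plain --no-kubernetes is fine
```

The stderr also notes the version may come from global config rather than a flag, hence the `minikube config unset kubernetes-version` suggestion.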
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (35.63s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-521728 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-521728 --driver=docker  --container-runtime=containerd: (35.319636474s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-521728 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.63s)

TestNoKubernetes/serial/StartWithStopK8s (17.23s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-521728 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-521728 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.01167667s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-521728 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-521728 status -o json: exit status 2 (296.87873ms)

-- stdout --
	{"Name":"NoKubernetes-521728","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
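Without `--layout`, `status -o json` emits the flat single-object shape above; the non-zero exit here accompanies the stopped kubelet/apiserver while the JSON itself shows the host kept running. A small sketch of asserting that `--no-kubernetes` state, using only fields from the log:

```python
import json

# Verbatim stdout from `minikube status -o json` above.
st = json.loads(
    '{"Name":"NoKubernetes-521728","Host":"Running","Kubelet":"Stopped",'
    '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
)

# --no-kubernetes should leave the host up with no kubelet/apiserver.
k8s_disabled = (
    st["Host"] == "Running"
    and st["Kubelet"] == "Stopped"
    and st["APIServer"] == "Stopped"
)
print(st["Name"], "kubernetes disabled:", k8s_disabled)
```

A consumer checking only the process exit status would wrongly conclude the profile is unhealthy; the JSON fields are the reliable signal here.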
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-521728
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-521728: (1.923980914s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.23s)

TestNoKubernetes/serial/Start (6.68s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-521728 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-521728 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.676520267s)
--- PASS: TestNoKubernetes/serial/Start (6.68s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-521728 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-521728 "sudo systemctl is-active --quiet service kubelet": exit status 1 (380.946592ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

TestNoKubernetes/serial/ProfileList (2.13s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.388482703s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.13s)

TestNoKubernetes/serial/Stop (2.79s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-521728
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-521728: (2.792483891s)
--- PASS: TestNoKubernetes/serial/Stop (2.79s)

TestNoKubernetes/serial/StartNoArgs (6.1s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-521728 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-521728 --driver=docker  --container-runtime=containerd: (6.097387666s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.10s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-521728 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-521728 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.593479ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestStoppedBinaryUpgrade/Setup (0.36s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.36s)

TestStoppedBinaryUpgrade/Upgrade (84.02s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3809200072 start -p stopped-upgrade-220453 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3809200072 start -p stopped-upgrade-220453 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (35.09601516s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3809200072 -p stopped-upgrade-220453 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3809200072 -p stopped-upgrade-220453 stop: (19.815318827s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-220453 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-220453 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (29.107883488s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (84.02s)

TestNetworkPlugins/group/false (3.16s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-920295 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-920295 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (150.831657ms)

-- stdout --
	* [false-920295] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20109
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20109-341858/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-341858/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0120 16:14:30.167445  586144 out.go:345] Setting OutFile to fd 1 ...
	I0120 16:14:30.167555  586144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:14:30.167563  586144 out.go:358] Setting ErrFile to fd 2...
	I0120 16:14:30.167567  586144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0120 16:14:30.167756  586144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-341858/.minikube/bin
	I0120 16:14:30.168406  586144 out.go:352] Setting JSON to false
	I0120 16:14:30.170525  586144 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":21416,"bootTime":1737368254,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0120 16:14:30.170605  586144 start.go:139] virtualization: kvm guest
	I0120 16:14:30.173058  586144 out.go:177] * [false-920295] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0120 16:14:30.174367  586144 out.go:177]   - MINIKUBE_LOCATION=20109
	I0120 16:14:30.174406  586144 notify.go:220] Checking for updates...
	I0120 16:14:30.177267  586144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0120 16:14:30.178683  586144 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20109-341858/kubeconfig
	I0120 16:14:30.179923  586144 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-341858/.minikube
	I0120 16:14:30.181152  586144 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0120 16:14:30.182355  586144 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0120 16:14:30.184132  586144 config.go:182] Loaded profile config "cert-expiration-157019": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 16:14:30.184254  586144 config.go:182] Loaded profile config "kubernetes-upgrade-183786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I0120 16:14:30.184355  586144 config.go:182] Loaded profile config "stopped-upgrade-220453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0120 16:14:30.184481  586144 driver.go:394] Setting default libvirt URI to qemu:///system
	I0120 16:14:30.208007  586144 docker.go:123] docker version: linux-27.5.0:Docker Engine - Community
	I0120 16:14:30.208125  586144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0120 16:14:30.258483  586144 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:74 SystemTime:2025-01-20 16:14:30.248857355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1074-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-7 Labels:[] ExperimentalBuild:false ServerVersion:27.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.3] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0120 16:14:30.258638  586144 docker.go:318] overlay module found
	I0120 16:14:30.261119  586144 out.go:177] * Using the docker driver based on user configuration
	I0120 16:14:30.262207  586144 start.go:297] selected driver: docker
	I0120 16:14:30.262223  586144 start.go:901] validating driver "docker" against <nil>
	I0120 16:14:30.262238  586144 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0120 16:14:30.264433  586144 out.go:201] 
	W0120 16:14:30.265584  586144 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0120 16:14:30.266589  586144 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-920295 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-920295

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-920295

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-920295

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-920295

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-920295

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-920295

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-920295

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-920295

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-920295

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-920295

>>> host: /etc/nsswitch.conf:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: /etc/hosts:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: /etc/resolv.conf:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-920295

>>> host: crictl pods:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: crictl containers:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> k8s: describe netcat deployment:
error: context "false-920295" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-920295" does not exist

>>> k8s: netcat logs:
error: context "false-920295" does not exist

>>> k8s: describe coredns deployment:
error: context "false-920295" does not exist

>>> k8s: describe coredns pods:
error: context "false-920295" does not exist

>>> k8s: coredns logs:
error: context "false-920295" does not exist

>>> k8s: describe api server pod(s):
error: context "false-920295" does not exist

>>> k8s: api server logs:
error: context "false-920295" does not exist

>>> host: /etc/cni:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: ip a s:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: ip r s:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: iptables-save:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: iptables table nat:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> k8s: describe kube-proxy daemon set:
error: context "false-920295" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-920295" does not exist

>>> k8s: kube-proxy logs:
error: context "false-920295" does not exist

>>> host: kubelet daemon status:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: kubelet daemon config:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> k8s: kubelet logs:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20109-341858/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:12:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-157019
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20109-341858/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:14:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-183786
contexts:
- context:
    cluster: cert-expiration-157019
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:12:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-157019
  name: cert-expiration-157019
- context:
    cluster: kubernetes-upgrade-183786
    user: kubernetes-upgrade-183786
  name: kubernetes-upgrade-183786
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-157019
  user:
    client-certificate: /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/cert-expiration-157019/client.crt
    client-key: /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/cert-expiration-157019/client.key
- name: kubernetes-upgrade-183786
  user:
    client-certificate: /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kubernetes-upgrade-183786/client.crt
    client-key: /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kubernetes-upgrade-183786/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-920295

>>> host: docker daemon status:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: docker daemon config:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: /etc/docker/daemon.json:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: docker system info:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: cri-docker daemon status:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: cri-docker daemon config:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: cri-dockerd version:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: containerd daemon status:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: containerd daemon config:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: /etc/containerd/config.toml:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: containerd config dump:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: crio daemon status:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: crio daemon config:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: /etc/crio:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

>>> host: crio config:
* Profile "false-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920295"

----------------------- debugLogs end: false-920295 [took: 2.850962626s] --------------------------------
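The kubectl config dump in the debug logs above has `current-context: ""` and defines contexts only for cert-expiration-157019 and kubernetes-upgrade-183786, which is why every probe against the already-deleted false-920295 profile reports `context was not found`. A minimal sketch of that lookup (a hypothetical `resolve_context` helper mirroring kubectl's error text, not kubectl's actual code):

```python
# Sketch of the context lookup behind the "context was not found" errors.
# resolve_context and the trimmed kubeconfig dict below are illustrative,
# reduced from the dumped config; only the context names matter here.

def resolve_context(config: dict, name: str) -> dict:
    """Return the named context entry, or raise with kubectl-style wording."""
    for ctx in config.get("contexts", []):
        if ctx.get("name") == name:
            return ctx["context"]
    raise KeyError(f"context was not found for specified context: {name}")

# Contexts present in the dumped kubeconfig (note: current-context is empty).
kubeconfig = {
    "current-context": "",
    "contexts": [
        {"name": "cert-expiration-157019",
         "context": {"cluster": "cert-expiration-157019",
                     "user": "cert-expiration-157019"}},
        {"name": "kubernetes-upgrade-183786",
         "context": {"cluster": "kubernetes-upgrade-183786",
                     "user": "kubernetes-upgrade-183786"}},
    ],
}

resolve_context(kubeconfig, "cert-expiration-157019")  # resolves fine
try:
    resolve_context(kubeconfig, "false-920295")  # the deleted test profile
except KeyError as err:
    print(err)
```

Because the debugLogs helper runs after the profile under test has been torn down, these errors are expected and the group still passes.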
helpers_test.go:175: Cleaning up "false-920295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-920295
--- PASS: TestNetworkPlugins/group/false (3.16s)

TestPause/serial/Start (43.7s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-488509 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-488509 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (43.703505487s)
--- PASS: TestPause/serial/Start (43.70s)

TestPause/serial/SecondStartNoReconfiguration (5.84s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-488509 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0120 16:15:26.230494  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-488509 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.82876179s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.84s)

TestPause/serial/Pause (0.76s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-488509 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-488509 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-488509 --output=json --layout=cluster: exit status 2 (317.781549ms)

-- stdout --
	{"Name":"pause-488509","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-488509","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
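The `--layout=cluster` status JSON above encodes the paused state as HTTP-style codes (418 Paused, 405 Stopped, 200 OK), and the non-zero exit (status 2) is the expected signal for a non-running cluster. A quick sketch of decoding that output with Python's stdlib `json` (the literal is copied verbatim from the stdout above):

```python
import json

# Status JSON emitted by `minikube status --output=json --layout=cluster`
# for the paused pause-488509 cluster, copied from the test's stdout.
status = json.loads(
    '{"Name":"pause-488509","StatusCode":418,"StatusName":"Paused",'
    '"Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, '
    'kubernetes-dashboard, storage-gluster, istio-operator",'
    '"BinaryVersion":"v1.35.0",'
    '"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},'
    '"Nodes":[{"Name":"pause-488509","StatusCode":200,"StatusName":"OK",'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

# Paused cluster: top level and apiserver report 418, kubelet is stopped (405),
# while the kubeconfig component is still healthy (200).
assert status["StatusName"] == "Paused"
node = status["Nodes"][0]
print(node["Components"]["apiserver"]["StatusName"])
```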

                                                
                                    
x
+
TestPause/serial/Unpause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-488509 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.78s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-488509 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (4.84s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-488509 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-488509 --alsologtostderr -v=5: (4.837682029s)
--- PASS: TestPause/serial/DeletePaused (4.84s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-220453
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.90s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (13.88s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (13.824231128s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-488509
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-488509: exit status 1 (19.283591ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-488509: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (13.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (43.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-920295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-920295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (43.682432911s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (43.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-920295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-920295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (43.063975144s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (43.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (53.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-920295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-920295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (53.647122839s)
--- PASS: TestNetworkPlugins/group/calico/Start (53.66s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-920295 "pgrep -a kubelet"
I0120 16:16:36.232674  348924 config.go:182] Loaded profile config "auto-920295": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (8.24s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-920295 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-sjqgf" [73a2321e-9ea9-4945-8e1f-48d6eba2e27a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-sjqgf" [73a2321e-9ea9-4945-8e1f-48d6eba2e27a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004846942s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.24s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vsrmp" [38b26687-47ad-4cad-8ab7-e0af720efa4c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003788686s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-920295 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-920295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-920295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-920295 "pgrep -a kubelet"
I0120 16:16:49.689343  348924 config.go:182] Loaded profile config "kindnet-920295": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-920295 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vvw2q" [dca978ce-92ad-4536-a362-b4ddc415c008] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vvw2q" [dca978ce-92ad-4536-a362-b4ddc415c008] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003983331s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-920295 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-920295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-920295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-s2nrm" [1b653090-4c1d-45eb-80e9-937eb2799d3a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004652507s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (41.7s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-920295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-920295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (41.699399581s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (41.70s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-920295 "pgrep -a kubelet"
I0120 16:17:07.329586  348924 config.go:182] Loaded profile config "calico-920295": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (10.8s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-920295 replace --force -f testdata/netcat-deployment.yaml
I0120 16:17:08.119297  348924 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ltfm9" [1e52dc83-256b-48c7-bacf-6c2352580453] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ltfm9" [1e52dc83-256b-48c7-bacf-6c2352580453] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004926126s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.80s)

TestNetworkPlugins/group/calico/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-920295 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-920295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-920295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/Start (62.9s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-920295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-920295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m2.898635524s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (62.90s)

TestNetworkPlugins/group/flannel/Start (43.93s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-920295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-920295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (43.931656205s)
--- PASS: TestNetworkPlugins/group/flannel/Start (43.93s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-920295 "pgrep -a kubelet"
I0120 16:17:45.296515  348924 config.go:182] Loaded profile config "custom-flannel-920295": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-920295 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wzgvn" [baec1a5f-f7e0-4242-8e7a-edcab3b65552] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-wzgvn" [baec1a5f-f7e0-4242-8e7a-edcab3b65552] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004035027s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-920295 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-920295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-920295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (62.51s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-920295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-920295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m2.51329161s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.51s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-920295 "pgrep -a kubelet"
I0120 16:18:22.371252  348924 config.go:182] Loaded profile config "enable-default-cni-920295": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-920295 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zc7h7" [08bbeea9-3814-4044-a184-0d397e9e0130] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zc7h7" [08bbeea9-3814-4044-a184-0d397e9e0130] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004077943s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-db5ff" [463c8354-6050-4711-ab1c-44b7569ad4a2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00424793s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-920295 "pgrep -a kubelet"
I0120 16:18:29.196817  348924 config.go:182] Loaded profile config "flannel-920295": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (9.25s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-920295 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-9rjsk" [11e528b4-10d0-4a7f-a572-eb7918403928] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-9rjsk" [11e528b4-10d0-4a7f-a572-eb7918403928] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004641371s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.25s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-920295 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-920295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-920295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestStartStop/group/old-k8s-version/serial/FirstStart (134.17s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-764969 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-764969 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m14.168140412s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (134.17s)

TestNetworkPlugins/group/flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-920295 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-920295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-920295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestStartStop/group/no-preload/serial/FirstStart (65.64s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-370720 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-370720 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (1m5.638714555s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.64s)

TestStartStop/group/embed-certs/serial/FirstStart (46.36s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-620031 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-620031 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (46.364550042s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.36s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-920295 "pgrep -a kubelet"
I0120 16:19:17.360716  348924 config.go:182] Loaded profile config "bridge-920295": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.21s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-920295 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vnm7b" [2328d596-dcf2-49d3-be83-ce2ac780e969] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vnm7b" [2328d596-dcf2-49d3-be83-ce2ac780e969] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004186393s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-920295 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-920295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-920295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
E0120 16:24:27.283913  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kindnet-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:24:27.813247  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/bridge-920295/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/embed-certs/serial/DeployApp (8.25s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-620031 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [689ecf1b-7b4a-453a-b591-cebcaf63f412] Pending
helpers_test.go:344: "busybox" [689ecf1b-7b4a-453a-b591-cebcaf63f412] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [689ecf1b-7b4a-453a-b591-cebcaf63f412] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003715477s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-620031 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-779461 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-779461 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (41.562456008s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.56s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-620031 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-620031 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/embed-certs/serial/Stop (11.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-620031 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-620031 --alsologtostderr -v=3: (11.975910506s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.98s)

TestStartStop/group/no-preload/serial/DeployApp (8.25s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-370720 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8a84e815-8348-461c-b70e-41a8abc44a80] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8a84e815-8348-461c-b70e-41a8abc44a80] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003959436s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-370720 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.25s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-370720 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-370720 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.056015541s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-370720 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-620031 -n embed-certs-620031
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-620031 -n embed-certs-620031: exit status 7 (76.606064ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-620031 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (263.09s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-620031 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-620031 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (4m22.785007686s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-620031 -n embed-certs-620031
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.09s)

TestStartStop/group/no-preload/serial/Stop (11.93s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-370720 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-370720 --alsologtostderr -v=3: (11.932263503s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.93s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-370720 -n no-preload-370720
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-370720 -n no-preload-370720: exit status 7 (89.705462ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-370720 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (263.24s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-370720 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 16:20:26.230432  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-370720 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (4m22.91747566s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-370720 -n no-preload-370720
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.24s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-779461 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ecc355fc-62ce-4176-b65c-0d100a189f11] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ecc355fc-62ce-4176-b65c-0d100a189f11] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003972457s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-779461 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-779461 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-779461 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-779461 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-779461 --alsologtostderr -v=3: (12.044836019s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-764969 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8f58b82d-b468-4ab0-bdac-4c61a717534f] Pending
helpers_test.go:344: "busybox" [8f58b82d-b468-4ab0-bdac-4c61a717534f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8f58b82d-b468-4ab0-bdac-4c61a717534f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004296406s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-764969 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-779461 -n default-k8s-diff-port-779461
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-779461 -n default-k8s-diff-port-779461: exit status 7 (92.267996ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-779461 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-779461 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-779461 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (4m23.095765043s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-779461 -n default-k8s-diff-port-779461
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-764969 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-764969 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/old-k8s-version/serial/Stop (12.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-764969 --alsologtostderr -v=3
E0120 16:21:05.152541  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-764969 --alsologtostderr -v=3: (12.43127171s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.43s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-764969 -n old-k8s-version-764969
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-764969 -n old-k8s-version-764969: exit status 7 (91.076142ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-764969 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (125.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-764969 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0120 16:21:36.456907  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/auto-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:36.463388  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/auto-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:36.474795  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/auto-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:36.496303  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/auto-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:36.537796  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/auto-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:36.619381  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/auto-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:36.781374  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/auto-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:37.102893  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/auto-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:37.744961  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/auto-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:39.026607  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/auto-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:41.588262  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/auto-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:43.423983  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kindnet-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:43.430387  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kindnet-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:43.441788  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kindnet-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:43.463182  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kindnet-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:43.504574  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kindnet-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:43.586024  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kindnet-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:43.747550  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kindnet-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:44.069616  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kindnet-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:44.711054  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kindnet-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:45.992939  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kindnet-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:46.709720  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/auto-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:48.554568  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kindnet-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:53.676258  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kindnet-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:21:56.951317  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/auto-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:00.984271  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/calico-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:00.990653  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/calico-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:01.001995  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/calico-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:01.023386  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/calico-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:01.064631  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/calico-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:01.146106  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/calico-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:01.307681  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/calico-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:01.629170  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/calico-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:02.270477  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/calico-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:03.552352  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/calico-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:03.918006  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kindnet-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:06.114612  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/calico-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:11.236113  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/calico-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:17.433461  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/auto-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:21.477816  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/calico-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:24.399459  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kindnet-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:28.228103  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/functional-961919/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:41.959242  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/calico-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:45.522205  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/custom-flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:45.528577  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/custom-flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:45.540711  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/custom-flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:45.562133  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/custom-flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:45.603856  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/custom-flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:45.686184  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/custom-flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:45.848521  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/custom-flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:46.170796  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/custom-flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:46.812157  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/custom-flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:48.093789  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/custom-flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:50.655887  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/custom-flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:55.777810  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/custom-flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:22:58.395752  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/auto-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:05.361649  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kindnet-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:06.019769  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/custom-flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-764969 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m5.277381219s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-764969 -n old-k8s-version-764969
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (125.59s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bjdrf" [1b2a773f-5674-4808-9f7e-1702e9f5e3f5] Running
E0120 16:23:22.588462  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/enable-default-cni-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:22.594913  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/enable-default-cni-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:22.606260  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/enable-default-cni-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:22.627659  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/enable-default-cni-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:22.668932  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/enable-default-cni-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:22.750487  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/enable-default-cni-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:22.885197  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:22.891636  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:22.903008  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:22.912390  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/enable-default-cni-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:22.920735  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/calico-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:22.925402  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:22.966790  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:23.048299  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:23.209577  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:23.233978  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/enable-default-cni-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:23.531738  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:23.875487  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/enable-default-cni-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:24.173109  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004214683s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bjdrf" [1b2a773f-5674-4808-9f7e-1702e9f5e3f5] Running
E0120 16:23:25.157744  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/enable-default-cni-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:25.455476  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:26.501120  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/custom-flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:27.719239  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/enable-default-cni-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:28.017667  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003797727s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-764969 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-764969 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
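The image check above behaves like an allowlist filter: the output of `minikube image list --format=json` is scanned and any repository outside the registries the test expects is logged as `Found non-minikube image`. A minimal sketch of that kind of filtering, using a hard-coded sample list and a hypothetical allowlist (the real allowlist lives in minikube's test helpers and is broader than shown here):

```shell
# Sample image names, standing in for parsed `minikube image list --format=json` output.
images='kindest/kindnetd:v20210326-1e038dc5
registry.k8s.io/kube-apiserver:v1.20.0
gcr.io/k8s-minikube/busybox:1.28.4-glibc'

# Hypothetical allowlist: treat only registry.k8s.io images as "minikube" images;
# report everything else, mirroring the test's log lines.
echo "$images" | while IFS= read -r img; do
  case "$img" in
    registry.k8s.io/*) ;;
    *) echo "Found non-minikube image: $img" ;;
  esac
done
```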

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.62s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-764969 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-764969 -n old-k8s-version-764969
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-764969 -n old-k8s-version-764969: exit status 2 (301.996528ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-764969 -n old-k8s-version-764969
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-764969 -n old-k8s-version-764969: exit status 2 (309.026308ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-764969 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-764969 -n old-k8s-version-764969
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-764969 -n old-k8s-version-764969
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.62s)
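The `status error: exit status 2 (may be ok)` lines in the Pause tests are expected: `minikube status` encodes component state in its exit code, so a paused apiserver or stopped kubelet yields a non-zero exit even while the test is on track, and the harness records the exit rather than failing. A sketch of that tolerant check pattern, with `sh -c 'exit 2'` as a hypothetical stand-in for the real `minikube status` invocation:

```shell
# Run a status command and record a non-zero exit as informational rather than
# fatal, mirroring the harness's "status error: exit status N (may be ok)" lines.
check_status() {
  if "$@"; then
    :  # command reported success; nothing to log
  else
    code=$?
    echo "status error: exit status $code (may be ok)"
  fi
}

# Stand-in for: out/minikube-linux-amd64 status --format={{.APIServer}} -p <profile>
check_status sh -c 'exit 2'
```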

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (26.72s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-604931 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 16:23:43.083142  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/enable-default-cni-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:23:43.382059  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-604931 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (26.722838189s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.72s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-604931 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.76s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-604931 --alsologtostderr -v=3
E0120 16:24:03.565484  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/enable-default-cni-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:24:03.864226  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-604931 --alsologtostderr -v=3: (1.764487477s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.76s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-604931 -n newest-cni-604931
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-604931 -n newest-cni-604931: exit status 7 (79.092353ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-604931 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (12.9s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-604931 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E0120 16:24:07.463088  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/custom-flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-604931 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (12.562418662s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-604931 -n newest-cni-604931
E0120 16:24:17.560081  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/bridge-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:24:17.566566  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/bridge-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:24:17.578083  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/bridge-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:24:17.599732  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/bridge-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:24:17.641342  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/bridge-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:24:17.722826  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/bridge-920295/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.90s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-604931 image list --format=json
E0120 16:24:17.884731  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/bridge-920295/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.89s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-604931 --alsologtostderr -v=1
E0120 16:24:18.206654  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/bridge-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:24:18.848959  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/bridge-920295/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-604931 -n newest-cni-604931
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-604931 -n newest-cni-604931: exit status 2 (301.907442ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-604931 -n newest-cni-604931
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-604931 -n newest-cni-604931: exit status 2 (298.293255ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-604931 --alsologtostderr -v=1
E0120 16:24:20.130267  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/bridge-920295/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-604931 -n newest-cni-604931
E0120 16:24:20.317214  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/auto-920295/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-604931 -n newest-cni-604931
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.89s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5nftd" [d3549e93-7495-41fe-ba89-04ed12005e33] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00464173s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5nftd" [d3549e93-7495-41fe-ba89-04ed12005e33] Running
E0120 16:24:38.055134  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/bridge-920295/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00362044s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-620031 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-620031 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.68s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-620031 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-620031 -n embed-certs-620031
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-620031 -n embed-certs-620031: exit status 2 (296.137366ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-620031 -n embed-certs-620031
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-620031 -n embed-certs-620031: exit status 2 (321.569236ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-620031 --alsologtostderr -v=1
E0120 16:24:44.527377  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/enable-default-cni-920295/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-620031 -n embed-certs-620031
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-620031 -n embed-certs-620031
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.68s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-vvqjl" [fd5146ff-e6a5-46f0-822b-2397f25c5412] Running
E0120 16:24:44.826058  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/flannel-920295/client.crt: no such file or directory" logger="UnhandledError"
E0120 16:24:44.842443  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/calico-920295/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003745197s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-vvqjl" [fd5146ff-e6a5-46f0-822b-2397f25c5412] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003582618s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-370720 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-370720 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.63s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-370720 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-370720 -n no-preload-370720
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-370720 -n no-preload-370720: exit status 2 (286.387693ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-370720 -n no-preload-370720
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-370720 -n no-preload-370720: exit status 2 (286.157896ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-370720 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-370720 -n no-preload-370720
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-370720 -n no-preload-370720
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.63s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-mt7dk" [80d012e4-a7fa-4726-bec6-a317d557af75] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003218631s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-mt7dk" [80d012e4-a7fa-4726-bec6-a317d557af75] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004472586s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-779461 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-779461 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-779461 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-779461 -n default-k8s-diff-port-779461
E0120 16:25:26.231045  348924 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/addons-766086/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-779461 -n default-k8s-diff-port-779461: exit status 2 (286.788386ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-779461 -n default-k8s-diff-port-779461
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-779461 -n default-k8s-diff-port-779461: exit status 2 (287.389837ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-779461 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-779461 -n default-k8s-diff-port-779461
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-779461 -n default-k8s-diff-port-779461
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.57s)

Test skip (24/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.0/cached-images (0.00s)

TestDownloadOnly/v1.32.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.0/binaries (0.00s)

TestDownloadOnly/v1.32.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.0/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.08s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-920295 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-920295

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-920295

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-920295

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-920295

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-920295

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-920295

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-920295

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-920295

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-920295

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-920295

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: /etc/hosts:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: /etc/resolv.conf:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-920295

>>> host: crictl pods:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: crictl containers:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> k8s: describe netcat deployment:
error: context "kubenet-920295" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-920295" does not exist

>>> k8s: netcat logs:
error: context "kubenet-920295" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-920295" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-920295" does not exist

>>> k8s: coredns logs:
error: context "kubenet-920295" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-920295" does not exist

>>> k8s: api server logs:
error: context "kubenet-920295" does not exist

>>> host: /etc/cni:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: ip a s:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: ip r s:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: iptables-save:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: iptables table nat:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-920295" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-920295" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-920295" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: kubelet daemon config:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> k8s: kubelet logs:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20109-341858/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:12:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-157019
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20109-341858/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:14:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-183786
contexts:
- context:
    cluster: cert-expiration-157019
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:12:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-157019
  name: cert-expiration-157019
- context:
    cluster: kubernetes-upgrade-183786
    user: kubernetes-upgrade-183786
  name: kubernetes-upgrade-183786
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-157019
  user:
    client-certificate: /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/cert-expiration-157019/client.crt
    client-key: /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/cert-expiration-157019/client.key
- name: kubernetes-upgrade-183786
  user:
    client-certificate: /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kubernetes-upgrade-183786/client.crt
    client-key: /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kubernetes-upgrade-183786/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-920295

>>> host: docker daemon status:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: docker daemon config:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: docker system info:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: cri-docker daemon status:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: cri-docker daemon config:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: cri-dockerd version:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: containerd daemon status:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: containerd daemon config:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: containerd config dump:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: crio daemon status:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: crio daemon config:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: /etc/crio:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

>>> host: crio config:
* Profile "kubenet-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920295"

----------------------- debugLogs end: kubenet-920295 [took: 2.918530396s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-920295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-920295
--- SKIP: TestNetworkPlugins/group/kubenet (3.08s)

TestNetworkPlugins/group/cilium (3.58s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-920295 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-920295

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-920295

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-920295

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-920295

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-920295

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-920295

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-920295

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-920295

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-920295

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-920295

>>> host: /etc/nsswitch.conf:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> host: /etc/hosts:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> host: /etc/resolv.conf:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-920295

>>> host: crictl pods:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> host: crictl containers:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> k8s: describe netcat deployment:
error: context "cilium-920295" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-920295" does not exist

>>> k8s: netcat logs:
error: context "cilium-920295" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-920295" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-920295" does not exist

>>> k8s: coredns logs:
error: context "cilium-920295" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-920295" does not exist

>>> k8s: api server logs:
error: context "cilium-920295" does not exist

>>> host: /etc/cni:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> host: ip a s:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> host: ip r s:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> host: iptables-save:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> host: iptables table nat:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-920295

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-920295

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-920295" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-920295" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-920295

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-920295

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-920295" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-920295" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-920295" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-920295" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-920295" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> host: kubelet daemon config:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> k8s: kubelet logs:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20109-341858/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:12:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-157019
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20109-341858/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:14:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-183786
contexts:
- context:
    cluster: cert-expiration-157019
    extensions:
    - extension:
        last-update: Mon, 20 Jan 2025 16:12:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-157019
  name: cert-expiration-157019
- context:
    cluster: kubernetes-upgrade-183786
    user: kubernetes-upgrade-183786
  name: kubernetes-upgrade-183786
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-157019
  user:
    client-certificate: /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/cert-expiration-157019/client.crt
    client-key: /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/cert-expiration-157019/client.key
- name: kubernetes-upgrade-183786
  user:
    client-certificate: /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kubernetes-upgrade-183786/client.crt
    client-key: /home/jenkins/minikube-integration/20109-341858/.minikube/profiles/kubernetes-upgrade-183786/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-920295

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> host: crio daemon config:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> host: /etc/crio:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

>>> host: crio config:
* Profile "cilium-920295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920295"

----------------------- debugLogs end: cilium-920295 [took: 3.420640088s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-920295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-920295
--- SKIP: TestNetworkPlugins/group/cilium (3.58s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-814813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-814813
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)