Test Report: KVM_Linux_containerd 20321

2564366430c28bc1e44cd7de7532514f5935ec82:2025-01-27:38096

Failed tests (1/328)

| Order | Failed test                          | Duration |
|-------|--------------------------------------|----------|
| 90    | TestFunctional/parallel/DashboardCmd | 4.6s     |
TestFunctional/parallel/DashboardCmd (4.6s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-519899 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-519899 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-519899 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-519899 --alsologtostderr -v=1] stderr:
I0127 14:15:38.722922  499148 out.go:345] Setting OutFile to fd 1 ...
I0127 14:15:38.723055  499148 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:15:38.723063  499148 out.go:358] Setting ErrFile to fd 2...
I0127 14:15:38.723068  499148 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:15:38.723238  499148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
I0127 14:15:38.723484  499148 mustload.go:65] Loading cluster: functional-519899
I0127 14:15:38.723867  499148 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:15:38.724257  499148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:38.724308  499148 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:38.741345  499148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45001
I0127 14:15:38.741937  499148 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:38.742528  499148 main.go:141] libmachine: Using API Version  1
I0127 14:15:38.742555  499148 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:38.742940  499148 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:38.743210  499148 main.go:141] libmachine: (functional-519899) Calling .GetState
I0127 14:15:38.745010  499148 host.go:66] Checking if "functional-519899" exists ...
I0127 14:15:38.745462  499148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:38.745520  499148 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:38.761409  499148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44595
I0127 14:15:38.761989  499148 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:38.762527  499148 main.go:141] libmachine: Using API Version  1
I0127 14:15:38.762555  499148 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:38.762903  499148 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:38.763118  499148 main.go:141] libmachine: (functional-519899) Calling .DriverName
I0127 14:15:38.763287  499148 api_server.go:166] Checking apiserver status ...
I0127 14:15:38.763349  499148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 14:15:38.763384  499148 main.go:141] libmachine: (functional-519899) Calling .GetSSHHostname
I0127 14:15:38.766405  499148 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:38.766951  499148 main.go:141] libmachine: (functional-519899) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:be:ed", ip: ""} in network mk-functional-519899: {Iface:virbr1 ExpiryTime:2025-01-27 15:12:52 +0000 UTC Type:0 Mac:52:54:00:7e:be:ed Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:functional-519899 Clientid:01:52:54:00:7e:be:ed}
I0127 14:15:38.766985  499148 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined IP address 192.168.39.137 and MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:38.767050  499148 main.go:141] libmachine: (functional-519899) Calling .GetSSHPort
I0127 14:15:38.767289  499148 main.go:141] libmachine: (functional-519899) Calling .GetSSHKeyPath
I0127 14:15:38.767447  499148 main.go:141] libmachine: (functional-519899) Calling .GetSSHUsername
I0127 14:15:38.767628  499148 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-483699/.minikube/machines/functional-519899/id_rsa Username:docker}
I0127 14:15:38.853796  499148 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4327/cgroup
W0127 14:15:38.863397  499148 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4327/cgroup: Process exited with status 1
stdout:

stderr:
I0127 14:15:38.863492  499148 ssh_runner.go:195] Run: ls
I0127 14:15:38.867776  499148 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8441/healthz ...
I0127 14:15:38.873677  499148 api_server.go:279] https://192.168.39.137:8441/healthz returned 200:
ok
W0127 14:15:38.873745  499148 out.go:270] * Enabling dashboard ...
* Enabling dashboard ...
I0127 14:15:38.873972  499148 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:15:38.873995  499148 addons.go:69] Setting dashboard=true in profile "functional-519899"
I0127 14:15:38.874005  499148 addons.go:238] Setting addon dashboard=true in "functional-519899"
I0127 14:15:38.874038  499148 host.go:66] Checking if "functional-519899" exists ...
I0127 14:15:38.874482  499148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:38.874629  499148 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:38.892003  499148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
I0127 14:15:38.892491  499148 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:38.893141  499148 main.go:141] libmachine: Using API Version  1
I0127 14:15:38.893175  499148 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:38.893547  499148 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:38.894280  499148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:38.894332  499148 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:38.916126  499148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38513
I0127 14:15:38.916640  499148 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:38.917266  499148 main.go:141] libmachine: Using API Version  1
I0127 14:15:38.917288  499148 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:38.918847  499148 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:38.919072  499148 main.go:141] libmachine: (functional-519899) Calling .GetState
I0127 14:15:38.920920  499148 main.go:141] libmachine: (functional-519899) Calling .DriverName
I0127 14:15:38.923567  499148 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0127 14:15:38.925147  499148 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0127 14:15:38.926395  499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0127 14:15:38.926413  499148 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0127 14:15:38.926450  499148 main.go:141] libmachine: (functional-519899) Calling .GetSSHHostname
I0127 14:15:38.931230  499148 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:38.931675  499148 main.go:141] libmachine: (functional-519899) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:be:ed", ip: ""} in network mk-functional-519899: {Iface:virbr1 ExpiryTime:2025-01-27 15:12:52 +0000 UTC Type:0 Mac:52:54:00:7e:be:ed Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:functional-519899 Clientid:01:52:54:00:7e:be:ed}
I0127 14:15:38.931715  499148 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined IP address 192.168.39.137 and MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:38.931859  499148 main.go:141] libmachine: (functional-519899) Calling .GetSSHPort
I0127 14:15:38.932078  499148 main.go:141] libmachine: (functional-519899) Calling .GetSSHKeyPath
I0127 14:15:38.932222  499148 main.go:141] libmachine: (functional-519899) Calling .GetSSHUsername
I0127 14:15:38.932327  499148 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-483699/.minikube/machines/functional-519899/id_rsa Username:docker}
I0127 14:15:39.090314  499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0127 14:15:39.090374  499148 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0127 14:15:39.115522  499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0127 14:15:39.115551  499148 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0127 14:15:39.137485  499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0127 14:15:39.137517  499148 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0127 14:15:39.156191  499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0127 14:15:39.156221  499148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0127 14:15:39.173355  499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0127 14:15:39.173394  499148 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0127 14:15:39.191698  499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0127 14:15:39.191736  499148 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0127 14:15:39.209754  499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0127 14:15:39.209788  499148 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0127 14:15:39.227581  499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0127 14:15:39.227613  499148 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0127 14:15:39.244668  499148 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0127 14:15:39.244730  499148 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0127 14:15:39.261750  499148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0127 14:15:40.413853  499148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.152040692s)
I0127 14:15:40.413946  499148 main.go:141] libmachine: Making call to close driver server
I0127 14:15:40.413970  499148 main.go:141] libmachine: (functional-519899) Calling .Close
I0127 14:15:40.414344  499148 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:15:40.414404  499148 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:15:40.414416  499148 main.go:141] libmachine: Making call to close driver server
I0127 14:15:40.414425  499148 main.go:141] libmachine: (functional-519899) Calling .Close
I0127 14:15:40.414366  499148 main.go:141] libmachine: (functional-519899) DBG | Closing plugin on server side
I0127 14:15:40.414725  499148 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:15:40.414747  499148 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:15:40.414748  499148 main.go:141] libmachine: (functional-519899) DBG | Closing plugin on server side
I0127 14:15:40.416626  499148 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-519899 addons enable metrics-server

I0127 14:15:40.418482  499148 addons.go:201] Writing out "functional-519899" config to set dashboard=true...
W0127 14:15:40.418794  499148 out.go:270] * Verifying dashboard health ...
* Verifying dashboard health ...
I0127 14:15:40.419653  499148 kapi.go:59] client config for functional-519899: &rest.Config{Host:"https://192.168.39.137:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt", KeyFile:"/home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.key", CAFile:"/home/jenkins/minikube-integration/20321-483699/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c3e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0127 14:15:40.430047  499148 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  346761f1-3c5c-4b64-a594-07e84a1a22ea 812 0 2025-01-27 14:15:40 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-01-27 14:15:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.106.84.190,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.106.84.190],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0127 14:15:40.430197  499148 out.go:270] * Launching proxy ...
* Launching proxy ...
I0127 14:15:40.430282  499148 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-519899 proxy --port 36195]
I0127 14:15:40.430615  499148 dashboard.go:157] Waiting for kubectl to output host:port ...
I0127 14:15:40.480537  499148 out.go:201] 
W0127 14:15:40.481919  499148 out.go:270] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W0127 14:15:40.481937  499148 out.go:270] * 
* 
W0127 14:15:40.485316  499148 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0127 14:15:40.486748  499148 out.go:201] 
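
To triage locally, the failing step can be replayed by hand. This is a minimal sketch, assuming a minikube binary built from the same commit and a still-running functional-519899 profile; port 36195 is simply the value recorded in the log above.

	# Re-run the dashboard invocation that failed to produce a URL
	out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-519899 --alsologtostderr -v=1

	# Run the proxy step on its own: minikube waits for kubectl to print
	# "Starting to serve on <host>:<port>" and reports HOST_KUBECTL_PROXY when
	# kubectl exits before that (the "readByteWithTimeout: EOF" seen above).
	kubectl --context functional-519899 proxy --port 36195
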
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-519899 -n functional-519899
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-519899 logs -n 25: (2.005516421s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|-----------|-------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                     Args                                      |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|-------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-519899 ssh findmnt                                                 | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC |                     |
	|           | -T /mount-9p | grep 9p                                                        |                   |         |         |                     |                     |
	| mount     | -p functional-519899                                                          | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdany-port1769431418/001:/mount-9p           |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                        |                   |         |         |                     |                     |
	| image     | functional-519899 image ls                                                    | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	| ssh       | functional-519899 ssh sudo cat                                                | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	|           | /etc/ssl/certs/491036.pem                                                     |                   |         |         |                     |                     |
	| ssh       | functional-519899 ssh sudo cat                                                | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	|           | /usr/share/ca-certificates/491036.pem                                         |                   |         |         |                     |                     |
	| ssh       | functional-519899 ssh findmnt                                                 | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	|           | -T /mount-9p | grep 9p                                                        |                   |         |         |                     |                     |
	| ssh       | functional-519899 ssh sudo cat                                                | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	|           | /etc/ssl/certs/51391683.0                                                     |                   |         |         |                     |                     |
	| ssh       | functional-519899 ssh -- ls                                                   | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	|           | -la /mount-9p                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-519899 ssh sudo cat                                                | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	|           | /etc/ssl/certs/4910362.pem                                                    |                   |         |         |                     |                     |
	| ssh       | functional-519899 ssh cat                                                     | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	|           | /mount-9p/test-1737987331440322228                                            |                   |         |         |                     |                     |
	| ssh       | functional-519899 ssh sudo cat                                                | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	|           | /usr/share/ca-certificates/4910362.pem                                        |                   |         |         |                     |                     |
	| ssh       | functional-519899 ssh sudo cat                                                | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	|           | /etc/ssl/certs/3ec20f2e.0                                                     |                   |         |         |                     |                     |
	| ssh       | functional-519899 ssh sudo cat                                                | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	|           | /etc/test/nested/copy/491036/hosts                                            |                   |         |         |                     |                     |
	| start     | -p functional-519899                                                          | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC |                     |
	|           | --dry-run --memory                                                            |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                       |                   |         |         |                     |                     |
	|           | --driver=kvm2                                                                 |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                                                |                   |         |         |                     |                     |
	| start     | -p functional-519899                                                          | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC |                     |
	|           | --dry-run --alsologtostderr                                                   |                   |         |         |                     |                     |
	|           | -v=1 --driver=kvm2                                                            |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                                                |                   |         |         |                     |                     |
	| start     | -p functional-519899                                                          | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC |                     |
	|           | --dry-run --memory                                                            |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                       |                   |         |         |                     |                     |
	|           | --driver=kvm2                                                                 |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                                                |                   |         |         |                     |                     |
	| image     | functional-519899 image load --daemon                                         | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	|           | kicbase/echo-server:functional-519899                                         |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                             |                   |         |         |                     |                     |
	| image     | functional-519899 image ls                                                    | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	| image     | functional-519899 image save kicbase/echo-server:functional-519899            | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	|           | /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                             |                   |         |         |                     |                     |
	| image     | functional-519899 image rm                                                    | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	|           | kicbase/echo-server:functional-519899                                         |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                             |                   |         |         |                     |                     |
	| image     | functional-519899 image ls                                                    | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	| image     | functional-519899 image load                                                  | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	|           | /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                             |                   |         |         |                     |                     |
	| image     | functional-519899 image ls                                                    | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	| image     | functional-519899 image save --daemon                                         | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC | 27 Jan 25 14:15 UTC |
	|           | kicbase/echo-server:functional-519899                                         |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                             |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                            | functional-519899 | jenkins | v1.35.0 | 27 Jan 25 14:15 UTC |                     |
	|           | -p functional-519899                                                          |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                        |                   |         |         |                     |                     |
	|-----------|-------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:15:33
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:15:33.855779  498871 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:15:33.855890  498871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:15:33.855902  498871 out.go:358] Setting ErrFile to fd 2...
	I0127 14:15:33.855908  498871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:15:33.856175  498871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
	I0127 14:15:33.856747  498871 out.go:352] Setting JSON to false
	I0127 14:15:33.857773  498871 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":14282,"bootTime":1737973052,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:15:33.857849  498871 start.go:139] virtualization: kvm guest
	I0127 14:15:33.860051  498871 out.go:177] * [functional-519899] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:15:33.861491  498871 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 14:15:33.861513  498871 notify.go:220] Checking for updates...
	I0127 14:15:33.864184  498871 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:15:33.865390  498871 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-483699/kubeconfig
	I0127 14:15:33.866559  498871 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-483699/.minikube
	I0127 14:15:33.867781  498871 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:15:33.868909  498871 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:15:33.870723  498871 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:15:33.871108  498871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:15:33.871189  498871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:15:33.887261  498871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I0127 14:15:33.887730  498871 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:15:33.888303  498871 main.go:141] libmachine: Using API Version  1
	I0127 14:15:33.888323  498871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:15:33.888701  498871 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:15:33.888886  498871 main.go:141] libmachine: (functional-519899) Calling .DriverName
	I0127 14:15:33.889195  498871 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:15:33.889495  498871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:15:33.889535  498871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:15:33.905438  498871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46349
	I0127 14:15:33.905881  498871 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:15:33.906378  498871 main.go:141] libmachine: Using API Version  1
	I0127 14:15:33.906409  498871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:15:33.906711  498871 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:15:33.906892  498871 main.go:141] libmachine: (functional-519899) Calling .DriverName
	I0127 14:15:33.942169  498871 out.go:177] * Using the kvm2 driver based on the existing profile
	I0127 14:15:33.943335  498871 start.go:297] selected driver: kvm2
	I0127 14:15:33.943346  498871 start.go:901] validating driver "kvm2" against &{Name:functional-519899 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-519899 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:15:33.943448  498871 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:15:33.945369  498871 out.go:201] 
	W0127 14:15:33.946502  498871 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 14:15:33.947718  498871 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ca7980858410d       9bea9f2796e23       1 second ago         Running             myfrontend                0                   85df2b7f12738       sp-pod
	fff6187880a5c       82e4c8a736a4f       18 seconds ago       Running             echoserver                0                   a2c4307764681       hello-node-connect-58f9cf68d8-dzvt5
	4a8f84d776cb6       82e4c8a736a4f       18 seconds ago       Running             echoserver                0                   469f088145ca4       hello-node-fcfd88b6f-ph7x4
	741e2b57791f2       6e38f40d628db       38 seconds ago       Running             storage-provisioner       4                   d6a43a83e09ac       storage-provisioner
	a222c673336ad       6e38f40d628db       49 seconds ago       Exited              storage-provisioner       3                   d6a43a83e09ac       storage-provisioner
	f4be7e8aeca19       e29f9c7391fd9       49 seconds ago       Running             kube-proxy                2                   75ef3881a3df6       kube-proxy-vntzg
	7f06a7696c358       c69fa2e9cbf5f       49 seconds ago       Running             coredns                   2                   11dd732b41f00       coredns-668d6bf9bc-jghnv
	282819135765c       95c0bda56fc4d       53 seconds ago       Running             kube-apiserver            0                   af304093a19da       kube-apiserver-functional-519899
	f74175cfd4f74       2b0d6572d062c       53 seconds ago       Running             kube-scheduler            2                   928139d9847a6       kube-scheduler-functional-519899
	e98ee85ad6055       019ee182b58e2       53 seconds ago       Running             kube-controller-manager   2                   57fce32a8ba3e       kube-controller-manager-functional-519899
	ed421f8a47a1e       a9e7e6b294baf       53 seconds ago       Running             etcd                      2                   1bb0bb8f6a1c5       etcd-functional-519899
	d484539158587       019ee182b58e2       About a minute ago   Exited              kube-controller-manager   1                   57fce32a8ba3e       kube-controller-manager-functional-519899
	16105ddecb22b       2b0d6572d062c       About a minute ago   Exited              kube-scheduler            1                   928139d9847a6       kube-scheduler-functional-519899
	0cc5248f36c04       a9e7e6b294baf       About a minute ago   Exited              etcd                      1                   1bb0bb8f6a1c5       etcd-functional-519899
	9e28ff4b65aa1       c69fa2e9cbf5f       About a minute ago   Exited              coredns                   1                   11dd732b41f00       coredns-668d6bf9bc-jghnv
	407e9934802a4       e29f9c7391fd9       About a minute ago   Exited              kube-proxy                1                   75ef3881a3df6       kube-proxy-vntzg
	
	
	==> containerd <==
	Jan 27 14:15:35 functional-519899 containerd[3544]: time="2025-01-27T14:15:35.917440023Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-519899\""
	Jan 27 14:15:35 functional-519899 containerd[3544]: time="2025-01-27T14:15:35.920239529Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-519899\""
	Jan 27 14:15:35 functional-519899 containerd[3544]: time="2025-01-27T14:15:35.922725818Z" level=info msg="ImageDelete event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
	Jan 27 14:15:35 functional-519899 containerd[3544]: time="2025-01-27T14:15:35.932721825Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-519899\" returns successfully"
	Jan 27 14:15:36 functional-519899 containerd[3544]: time="2025-01-27T14:15:36.182497274Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-519899\""
	Jan 27 14:15:36 functional-519899 containerd[3544]: time="2025-01-27T14:15:36.190248574Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Jan 27 14:15:36 functional-519899 containerd[3544]: time="2025-01-27T14:15:36.190828370Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-519899\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Jan 27 14:15:37 functional-519899 containerd[3544]: time="2025-01-27T14:15:37.215129212Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-519899\""
	Jan 27 14:15:37 functional-519899 containerd[3544]: time="2025-01-27T14:15:37.229583775Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-519899\" returns successfully"
	Jan 27 14:15:37 functional-519899 containerd[3544]: time="2025-01-27T14:15:37.229856542Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-519899\""
	Jan 27 14:15:37 functional-519899 containerd[3544]: time="2025-01-27T14:15:37.229999057Z" level=info msg="ImageDelete event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
	Jan 27 14:15:37 functional-519899 containerd[3544]: time="2025-01-27T14:15:37.887463497Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-519899\""
	Jan 27 14:15:37 functional-519899 containerd[3544]: time="2025-01-27T14:15:37.891685622Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Jan 27 14:15:37 functional-519899 containerd[3544]: time="2025-01-27T14:15:37.892128541Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-519899\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.208862331Z" level=info msg="ImageCreate event name:\"docker.io/library/nginx:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.216268695Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=72091372"
	Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.221197069Z" level=info msg="ImageCreate event name:\"sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.229579400Z" level=info msg="ImageCreate event name:\"docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.236587174Z" level=info msg="Pulled image \"docker.io/nginx:latest\" with image id \"sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1\", repo tag \"docker.io/library/nginx:latest\", repo digest \"docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a\", size \"72080558\" in 13.2274246s"
	Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.236657461Z" level=info msg="PullImage \"docker.io/nginx:latest\" returns image reference \"sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1\""
	Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.245944401Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.247717897Z" level=info msg="CreateContainer within sandbox \"85df2b7f12738b5015b566578de43370ff595be53bae8c4b3b8de67a7394d790\" for container &ContainerMetadata{Name:myfrontend,Attempt:0,}"
	Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.293535520Z" level=info msg="CreateContainer within sandbox \"85df2b7f12738b5015b566578de43370ff595be53bae8c4b3b8de67a7394d790\" for &ContainerMetadata{Name:myfrontend,Attempt:0,} returns container id \"ca7980858410de6a0c152a4e6a4926486c4a14f1111f4521a7942b3f67e30337\""
	Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.294460557Z" level=info msg="StartContainer for \"ca7980858410de6a0c152a4e6a4926486c4a14f1111f4521a7942b3f67e30337\""
	Jan 27 14:15:40 functional-519899 containerd[3544]: time="2025-01-27T14:15:40.396853443Z" level=info msg="StartContainer for \"ca7980858410de6a0c152a4e6a4926486c4a14f1111f4521a7942b3f67e30337\" returns successfully"
	
	
	==> coredns [7f06a7696c35892350d986f6ca4c5539a80c135ac380e7b95728d45b7fa2f78e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42270 - 16175 "HINFO IN 737029073090806261.4182980186629237895. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.159662686s
	
	
	==> coredns [9e28ff4b65aa1b5fab470c6ea6f44ccc628f999e78ef7c106ae1466423c265f7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58078 - 36221 "HINFO IN 7436575724398093421.2335735018293160571. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021117483s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-519899
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-519899
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6a4bc0699f1a012c34860b426fc47f95a8e8743
	                    minikube.k8s.io/name=functional-519899
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T14_13_20_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 14:13:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-519899
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 14:15:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 14:14:51 +0000   Mon, 27 Jan 2025 14:13:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 14:14:51 +0000   Mon, 27 Jan 2025 14:13:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 14:14:51 +0000   Mon, 27 Jan 2025 14:13:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 14:14:51 +0000   Mon, 27 Jan 2025 14:13:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    functional-519899
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 a26fc175814d45c6af835d6f84a794a3
	  System UUID:                a26fc175-814d-45c6-af83-5d6f84a794a3
	  Boot ID:                    c62c33cd-55d8-42a7-b1af-368c686d6579
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox-mount                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  default                     hello-node-connect-58f9cf68d8-dzvt5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  default                     hello-node-fcfd88b6f-ph7x4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  default                     mysql-58ccfd96bb-htngb                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    7s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-668d6bf9bc-jghnv                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m17s
	  kube-system                 etcd-functional-519899                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m21s
	  kube-system                 kube-apiserver-functional-519899              250m (12%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-controller-manager-functional-519899     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-vntzg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-functional-519899              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-nz2b5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-xww99         0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m15s                kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  Starting                 2m22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m21s                kubelet          Node functional-519899 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s                kubelet          Node functional-519899 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s                kubelet          Node functional-519899 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m21s                kubelet          Node functional-519899 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  2m21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m18s                node-controller  Node functional-519899 event: Registered Node functional-519899 in Controller
	  Normal  NodeHasSufficientPID     105s (x7 over 105s)  kubelet          Node functional-519899 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)  kubelet          Node functional-519899 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet          Node functional-519899 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           99s                  node-controller  Node functional-519899 event: Registered Node functional-519899 in Controller
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node functional-519899 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node functional-519899 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x7 over 54s)    kubelet          Node functional-519899 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           47s                  node-controller  Node functional-519899 event: Registered Node functional-519899 in Controller
	
	
	==> dmesg <==
	[  +0.317655] systemd-fstab-generator[2171]: Ignoring "noauto" option for root device
	[  +0.084276] kauditd_printk_skb: 88 callbacks suppressed
	[  +1.530275] systemd-fstab-generator[2326]: Ignoring "noauto" option for root device
	[  +5.838432] kauditd_printk_skb: 40 callbacks suppressed
	[ +10.151442] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.589419] systemd-fstab-generator[2812]: Ignoring "noauto" option for root device
	[Jan27 14:14] kauditd_printk_skb: 36 callbacks suppressed
	[ +13.041908] systemd-fstab-generator[3109]: Ignoring "noauto" option for root device
	[ +11.717923] systemd-fstab-generator[3469]: Ignoring "noauto" option for root device
	[  +0.083662] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.071094] systemd-fstab-generator[3481]: Ignoring "noauto" option for root device
	[  +0.186325] systemd-fstab-generator[3495]: Ignoring "noauto" option for root device
	[  +0.140064] systemd-fstab-generator[3507]: Ignoring "noauto" option for root device
	[  +0.295330] systemd-fstab-generator[3536]: Ignoring "noauto" option for root device
	[  +1.839889] systemd-fstab-generator[3698]: Ignoring "noauto" option for root device
	[ +10.893667] kauditd_printk_skb: 125 callbacks suppressed
	[  +6.527176] systemd-fstab-generator[4114]: Ignoring "noauto" option for root device
	[  +4.254109] kauditd_printk_skb: 39 callbacks suppressed
	[Jan27 14:15] kauditd_printk_skb: 15 callbacks suppressed
	[  +4.928007] systemd-fstab-generator[4675]: Ignoring "noauto" option for root device
	[  +6.693240] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.000419] kauditd_printk_skb: 24 callbacks suppressed
	[  +6.678643] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.589000] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.918214] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [0cc5248f36c0429b2e0c6fa9eeb92986bc137c8d5b795c6bb1129aea2312e9a0] <==
	{"level":"info","ts":"2025-01-27T14:13:58.434273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-27T14:13:58.434356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgPreVoteResp from 5527995f6263874a at term 2"}
	{"level":"info","ts":"2025-01-27T14:13:58.434417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became candidate at term 3"}
	{"level":"info","ts":"2025-01-27T14:13:58.434437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgVoteResp from 5527995f6263874a at term 3"}
	{"level":"info","ts":"2025-01-27T14:13:58.434453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became leader at term 3"}
	{"level":"info","ts":"2025-01-27T14:13:58.434505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5527995f6263874a elected leader 5527995f6263874a at term 3"}
	{"level":"info","ts":"2025-01-27T14:13:58.437120Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"5527995f6263874a","local-member-attributes":"{Name:functional-519899 ClientURLs:[https://192.168.39.137:2379]}","request-path":"/0/members/5527995f6263874a/attributes","cluster-id":"8623b2a8b011233f","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T14:13:58.437394Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T14:13:58.437469Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T14:13:58.437632Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T14:13:58.437783Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T14:13:58.438552Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T14:13:58.438566Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T14:13:58.439358Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T14:13:58.439366Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.137:2379"}
	{"level":"info","ts":"2025-01-27T14:14:40.837074Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-01-27T14:14:40.837111Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-519899","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.137:2380"],"advertise-client-urls":["https://192.168.39.137:2379"]}
	{"level":"warn","ts":"2025-01-27T14:14:40.837208Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-01-27T14:14:40.837234Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-01-27T14:14:40.838853Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.137:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-01-27T14:14:40.838954Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.137:2379: use of closed network connection"}
	{"level":"info","ts":"2025-01-27T14:14:40.839017Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5527995f6263874a","current-leader-member-id":"5527995f6263874a"}
	{"level":"info","ts":"2025-01-27T14:14:40.842975Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2025-01-27T14:14:40.843189Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.137:2380"}
	{"level":"info","ts":"2025-01-27T14:14:40.843214Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-519899","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.137:2380"],"advertise-client-urls":["https://192.168.39.137:2379"]}
	
	
	==> etcd [ed421f8a47a1e10ea213aec6181fecb128764248a1f249afef93b17940bcfe5b] <==
	{"level":"info","ts":"2025-01-27T14:14:50.011938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a is starting a new election at term 3"}
	{"level":"info","ts":"2025-01-27T14:14:50.011997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became pre-candidate at term 3"}
	{"level":"info","ts":"2025-01-27T14:14:50.012030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgPreVoteResp from 5527995f6263874a at term 3"}
	{"level":"info","ts":"2025-01-27T14:14:50.012054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became candidate at term 4"}
	{"level":"info","ts":"2025-01-27T14:14:50.012062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a received MsgVoteResp from 5527995f6263874a at term 4"}
	{"level":"info","ts":"2025-01-27T14:14:50.012070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5527995f6263874a became leader at term 4"}
	{"level":"info","ts":"2025-01-27T14:14:50.012076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5527995f6263874a elected leader 5527995f6263874a at term 4"}
	{"level":"info","ts":"2025-01-27T14:14:50.014046Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"5527995f6263874a","local-member-attributes":"{Name:functional-519899 ClientURLs:[https://192.168.39.137:2379]}","request-path":"/0/members/5527995f6263874a/attributes","cluster-id":"8623b2a8b011233f","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T14:14:50.014086Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T14:14:50.014419Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T14:14:50.014491Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T14:14:50.014522Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T14:14:50.015036Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T14:14:50.015179Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T14:14:50.015869Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T14:14:50.016117Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.137:2379"}
	{"level":"info","ts":"2025-01-27T14:15:39.532896Z","caller":"traceutil/trace.go:171","msg":"trace[1772122477] linearizableReadLoop","detail":"{readStateIndex:829; appliedIndex:828; }","duration":"429.826216ms","start":"2025-01-27T14:15:39.103036Z","end":"2025-01-27T14:15:39.532862Z","steps":["trace[1772122477] 'read index received'  (duration: 429.639326ms)","trace[1772122477] 'applied index is now lower than readState.Index'  (duration: 186.477µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T14:15:39.533058Z","caller":"traceutil/trace.go:171","msg":"trace[2121414342] transaction","detail":"{read_only:false; response_revision:756; number_of_response:1; }","duration":"434.818863ms","start":"2025-01-27T14:15:39.098233Z","end":"2025-01-27T14:15:39.533052Z","steps":["trace[2121414342] 'process raft request'  (duration: 434.471527ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:15:39.533865Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"406.424653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:15:39.533935Z","caller":"traceutil/trace.go:171","msg":"trace[1200541576] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:756; }","duration":"407.202409ms","start":"2025-01-27T14:15:39.126700Z","end":"2025-01-27T14:15:39.533902Z","steps":["trace[1200541576] 'agreement among raft nodes before linearized reading'  (duration: 406.426236ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:15:39.533964Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:15:39.126685Z","time spent":"407.270004ms","remote":"127.0.0.1:57394","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-27T14:15:39.534132Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"431.092058ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T14:15:39.534153Z","caller":"traceutil/trace.go:171","msg":"trace[1226083313] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:756; }","duration":"431.130678ms","start":"2025-01-27T14:15:39.103014Z","end":"2025-01-27T14:15:39.534145Z","steps":["trace[1226083313] 'agreement among raft nodes before linearized reading'  (duration: 431.093735ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T14:15:39.534167Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:15:39.103002Z","time spent":"431.161833ms","remote":"127.0.0.1:57394","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-01-27T14:15:39.535022Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-27T14:15:39.098215Z","time spent":"434.86301ms","remote":"127.0.0.1:57368","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:755 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 14:15:42 up 3 min,  0 users,  load average: 0.72, 0.43, 0.18
	Linux functional-519899 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [282819135765c81958b2c63cc44b4e9218ba2aaa14383527f00985ad1a362269] <==
	I0127 14:14:51.274215       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 14:14:51.274219       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 14:14:51.274224       1 cache.go:39] Caches are synced for autoregister controller
	I0127 14:14:51.274682       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 14:14:51.275046       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 14:14:51.276009       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 14:14:51.292618       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 14:14:51.298740       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 14:14:51.531610       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 14:14:52.091165       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0127 14:14:52.286685       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.137]
	I0127 14:14:52.288134       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 14:14:52.292656       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 14:14:52.710318       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 14:14:52.765380       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 14:14:52.792223       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 14:14:52.798419       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 14:14:54.392192       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0127 14:15:15.161877       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.129.163"}
	I0127 14:15:19.888384       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.142.149"}
	I0127 14:15:20.471689       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.36.209"}
	I0127 14:15:34.069676       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.24.35"}
	I0127 14:15:39.959310       1 controller.go:615] quota admission added evaluator for: namespaces
	I0127 14:15:40.366653       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.84.190"}
	I0127 14:15:40.402424       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.88.162"}
	
	
	==> kube-controller-manager [d484539158587ce134ce60261c06c2d8bd4481b47698c606582458253761eb7f] <==
	I0127 14:14:02.774955       1 shared_informer.go:320] Caches are synced for GC
	I0127 14:14:02.775132       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 14:14:02.775253       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 14:14:02.775346       1 shared_informer.go:320] Caches are synced for taint
	I0127 14:14:02.775468       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0127 14:14:02.775570       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-519899"
	I0127 14:14:02.775618       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0127 14:14:02.778557       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 14:14:02.781969       1 shared_informer.go:320] Caches are synced for HPA
	I0127 14:14:02.783135       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 14:14:02.783270       1 shared_informer.go:320] Caches are synced for job
	I0127 14:14:02.787662       1 shared_informer.go:320] Caches are synced for disruption
	I0127 14:14:02.798974       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 14:14:02.800132       1 shared_informer.go:320] Caches are synced for node
	I0127 14:14:02.800208       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 14:14:02.800423       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 14:14:02.800532       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 14:14:02.800551       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 14:14:02.800730       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-519899"
	I0127 14:14:02.814321       1 shared_informer.go:320] Caches are synced for namespace
	I0127 14:14:03.136962       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.537418ms"
	I0127 14:14:03.137397       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="364.942µs"
	I0127 14:14:13.093568       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="14.626256ms"
	I0127 14:14:13.094201       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="63.722µs"
	I0127 14:14:30.431010       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-519899"
	
	
	==> kube-controller-manager [e98ee85ad6055058a657a71333fbf0be936a7aa7d1325e16411ae9aeb155e26d] <==
	I0127 14:15:34.171082       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="25.311µs"
	I0127 14:15:40.119395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="65.237677ms"
	E0127 14:15:40.119515       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0127 14:15:40.130177       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="36.294686ms"
	E0127 14:15:40.130331       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0127 14:15:40.145851       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="24.852637ms"
	E0127 14:15:40.145890       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0127 14:15:40.146100       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="13.067164ms"
	E0127 14:15:40.146220       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0127 14:15:40.156250       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="8.545707ms"
	E0127 14:15:40.156292       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0127 14:15:40.158210       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="10.491623ms"
	E0127 14:15:40.158245       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0127 14:15:40.175941       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="17.611277ms"
	E0127 14:15:40.175977       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0127 14:15:40.176236       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="15.922334ms"
	E0127 14:15:40.176270       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0127 14:15:40.232977       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="52.343098ms"
	I0127 14:15:40.283097       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="59.799247ms"
	I0127 14:15:40.309912       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="76.88538ms"
	I0127 14:15:40.310003       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="35.669µs"
	I0127 14:15:40.321089       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="89.419µs"
	I0127 14:15:40.345904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="62.750262ms"
	I0127 14:15:40.364972       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="19.03272ms"
	I0127 14:15:40.365056       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="57.422µs"
	
	
	==> kube-proxy [407e9934802a471e3288501bb19fe6ee487cd53d3efe3ab71663e83263f26dbc] <==
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 14:13:45.337535       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-519899\": dial tcp 192.168.39.137:8441: connect: connection refused"
	E0127 14:13:46.376998       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-519899\": dial tcp 192.168.39.137:8441: connect: connection refused"
	E0127 14:13:48.647165       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-519899\": dial tcp 192.168.39.137:8441: connect: connection refused"
	E0127 14:13:53.323875       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-519899\": dial tcp 192.168.39.137:8441: connect: connection refused"
	I0127 14:14:02.269377       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
	E0127 14:14:02.269453       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 14:14:02.304140       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 14:14:02.304203       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 14:14:02.304246       1 server_linux.go:170] "Using iptables Proxier"
	I0127 14:14:02.307820       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 14:14:02.308062       1 server.go:497] "Version info" version="v1.32.1"
	I0127 14:14:02.308092       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:14:02.309341       1 config.go:329] "Starting node config controller"
	I0127 14:14:02.309368       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 14:14:02.309892       1 config.go:199] "Starting service config controller"
	I0127 14:14:02.310054       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 14:14:02.310180       1 config.go:105] "Starting endpoint slice config controller"
	I0127 14:14:02.310323       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 14:14:02.410435       1 shared_informer.go:320] Caches are synced for service config
	I0127 14:14:02.410466       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 14:14:02.410535       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f4be7e8aeca194b85001740a7975077559aef807e9488d684eb457cb3621108d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 14:14:52.028359       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 14:14:52.038514       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
	E0127 14:14:52.038988       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 14:14:52.067031       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 14:14:52.067251       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 14:14:52.067390       1 server_linux.go:170] "Using iptables Proxier"
	I0127 14:14:52.069627       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 14:14:52.070005       1 server.go:497] "Version info" version="v1.32.1"
	I0127 14:14:52.070270       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:14:52.071685       1 config.go:199] "Starting service config controller"
	I0127 14:14:52.072014       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 14:14:52.072222       1 config.go:105] "Starting endpoint slice config controller"
	I0127 14:14:52.072289       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 14:14:52.073012       1 config.go:329] "Starting node config controller"
	I0127 14:14:52.073105       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 14:14:52.172976       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 14:14:52.173438       1 shared_informer.go:320] Caches are synced for node config
	I0127 14:14:52.173454       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [16105ddecb22b1e64d2fc0db6a643848d215bbc5f317d4955bbb3da76fcf4e0e] <==
	I0127 14:13:57.961295       1 serving.go:386] Generated self-signed cert in-memory
	W0127 14:13:59.547901       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 14:13:59.547937       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 14:13:59.548277       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 14:13:59.548288       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 14:13:59.614392       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 14:13:59.616810       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:13:59.620849       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 14:13:59.621096       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 14:13:59.623997       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 14:13:59.624256       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 14:13:59.722884       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0127 14:14:40.780134       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f74175cfd4f749a8d18354425fcbbb2fe74a810052cfa2dcce5d05d8e17a2e81] <==
	I0127 14:14:48.988602       1 serving.go:386] Generated self-signed cert in-memory
	W0127 14:14:51.120098       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 14:14:51.120146       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 14:14:51.120164       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 14:14:51.120362       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 14:14:51.176943       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 14:14:51.176980       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 14:14:51.192071       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 14:14:51.192633       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 14:14:51.193376       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 14:14:51.193576       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 14:14:51.294866       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 14:15:18 functional-519899 kubelet[4121]: I0127 14:15:18.912579    4121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4s9xp\" (UniqueName: \"kubernetes.io/projected/c03b27b5-ad89-4109-a948-264dc777db6b-kube-api-access-4s9xp\") on node \"functional-519899\" DevicePath \"\""
	Jan 27 14:15:19 functional-519899 kubelet[4121]: I0127 14:15:19.922727    4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhqn4\" (UniqueName: \"kubernetes.io/projected/2e838724-4178-4ca0-b457-87236081912b-kube-api-access-nhqn4\") pod \"hello-node-fcfd88b6f-ph7x4\" (UID: \"2e838724-4178-4ca0-b457-87236081912b\") " pod="default/hello-node-fcfd88b6f-ph7x4"
	Jan 27 14:15:20 functional-519899 kubelet[4121]: I0127 14:15:20.527704    4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnvqk\" (UniqueName: \"kubernetes.io/projected/2127b456-24b1-4d0c-a5ac-bd2698d2facb-kube-api-access-jnvqk\") pod \"hello-node-connect-58f9cf68d8-dzvt5\" (UID: \"2127b456-24b1-4d0c-a5ac-bd2698d2facb\") " pod="default/hello-node-connect-58f9cf68d8-dzvt5"
	Jan 27 14:15:21 functional-519899 kubelet[4121]: I0127 14:15:21.483992    4121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03b27b5-ad89-4109-a948-264dc777db6b" path="/var/lib/kubelet/pods/c03b27b5-ad89-4109-a948-264dc777db6b/volumes"
	Jan 27 14:15:23 functional-519899 kubelet[4121]: I0127 14:15:23.702112    4121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-fcfd88b6f-ph7x4" podStartSLOduration=2.134800251 podStartE2EDuration="4.702084734s" podCreationTimestamp="2025-01-27 14:15:19 +0000 UTC" firstStartedPulling="2025-01-27 14:15:20.320087291 +0000 UTC m=+32.962184600" lastFinishedPulling="2025-01-27 14:15:22.887371786 +0000 UTC m=+35.529469083" observedRunningTime="2025-01-27 14:15:23.68674138 +0000 UTC m=+36.328838698" watchObservedRunningTime="2025-01-27 14:15:23.702084734 +0000 UTC m=+36.344182050"
	Jan 27 14:15:26 functional-519899 kubelet[4121]: I0127 14:15:26.512808    4121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-connect-58f9cf68d8-dzvt5" podStartSLOduration=4.786147735 podStartE2EDuration="6.512785582s" podCreationTimestamp="2025-01-27 14:15:20 +0000 UTC" firstStartedPulling="2025-01-27 14:15:21.263985875 +0000 UTC m=+33.906083176" lastFinishedPulling="2025-01-27 14:15:22.990623726 +0000 UTC m=+35.632721023" observedRunningTime="2025-01-27 14:15:23.70268508 +0000 UTC m=+36.344782396" watchObservedRunningTime="2025-01-27 14:15:26.512785582 +0000 UTC m=+39.154882893"
	Jan 27 14:15:26 functional-519899 kubelet[4121]: I0127 14:15:26.676737    4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkq47\" (UniqueName: \"kubernetes.io/projected/9ad21048-92b8-4b43-a8f3-a7c7597f7771-kube-api-access-bkq47\") pod \"sp-pod\" (UID: \"9ad21048-92b8-4b43-a8f3-a7c7597f7771\") " pod="default/sp-pod"
	Jan 27 14:15:26 functional-519899 kubelet[4121]: I0127 14:15:26.676964    4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-14595ad7-79ad-446f-a8c9-01ca94d81850\" (UniqueName: \"kubernetes.io/host-path/9ad21048-92b8-4b43-a8f3-a7c7597f7771-pvc-14595ad7-79ad-446f-a8c9-01ca94d81850\") pod \"sp-pod\" (UID: \"9ad21048-92b8-4b43-a8f3-a7c7597f7771\") " pod="default/sp-pod"
	Jan 27 14:15:33 functional-519899 kubelet[4121]: I0127 14:15:33.227926    4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frpc9\" (UniqueName: \"kubernetes.io/projected/eeb33ffb-4745-4aff-8600-d7989436ca01-kube-api-access-frpc9\") pod \"busybox-mount\" (UID: \"eeb33ffb-4745-4aff-8600-d7989436ca01\") " pod="default/busybox-mount"
	Jan 27 14:15:33 functional-519899 kubelet[4121]: I0127 14:15:33.228006    4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/eeb33ffb-4745-4aff-8600-d7989436ca01-test-volume\") pod \"busybox-mount\" (UID: \"eeb33ffb-4745-4aff-8600-d7989436ca01\") " pod="default/busybox-mount"
	Jan 27 14:15:34 functional-519899 kubelet[4121]: I0127 14:15:34.235348    4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g48t4\" (UniqueName: \"kubernetes.io/projected/3896773c-b0be-414b-901e-7b018511c481-kube-api-access-g48t4\") pod \"mysql-58ccfd96bb-htngb\" (UID: \"3896773c-b0be-414b-901e-7b018511c481\") " pod="default/mysql-58ccfd96bb-htngb"
	Jan 27 14:15:40 functional-519899 kubelet[4121]: W0127 14:15:40.228515    4121 reflector.go:569] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-519899" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'functional-519899' and this object
	Jan 27 14:15:40 functional-519899 kubelet[4121]: E0127 14:15:40.228873    4121 reflector.go:166] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:functional-519899\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'functional-519899' and this object" logger="UnhandledError"
	Jan 27 14:15:40 functional-519899 kubelet[4121]: I0127 14:15:40.229852    4121 status_manager.go:890] "Failed to get status for pod" podUID="4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a" pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-xww99" err="pods \"kubernetes-dashboard-7779f9b69b-xww99\" is forbidden: User \"system:node:functional-519899\" cannot get resource \"pods\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'functional-519899' and this object"
	Jan 27 14:15:40 functional-519899 kubelet[4121]: I0127 14:15:40.380241    4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3d473bce-1697-4496-a82d-d858960734dd-tmp-volume\") pod \"dashboard-metrics-scraper-5d59dccf9b-nz2b5\" (UID: \"3d473bce-1697-4496-a82d-d858960734dd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-nz2b5"
	Jan 27 14:15:40 functional-519899 kubelet[4121]: I0127 14:15:40.380277    4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d57mv\" (UniqueName: \"kubernetes.io/projected/3d473bce-1697-4496-a82d-d858960734dd-kube-api-access-d57mv\") pod \"dashboard-metrics-scraper-5d59dccf9b-nz2b5\" (UID: \"3d473bce-1697-4496-a82d-d858960734dd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-nz2b5"
	Jan 27 14:15:40 functional-519899 kubelet[4121]: I0127 14:15:40.380298    4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a-tmp-volume\") pod \"kubernetes-dashboard-7779f9b69b-xww99\" (UID: \"4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a\") " pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-xww99"
	Jan 27 14:15:40 functional-519899 kubelet[4121]: I0127 14:15:40.380313    4121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ggp7\" (UniqueName: \"kubernetes.io/projected/4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a-kube-api-access-5ggp7\") pod \"kubernetes-dashboard-7779f9b69b-xww99\" (UID: \"4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a\") " pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-xww99"
	Jan 27 14:15:41 functional-519899 kubelet[4121]: E0127 14:15:41.490127    4121 projected.go:288] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jan 27 14:15:41 functional-519899 kubelet[4121]: E0127 14:15:41.490165    4121 projected.go:194] Error preparing data for projected volume kube-api-access-5ggp7 for pod kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-xww99: failed to sync configmap cache: timed out waiting for the condition
	Jan 27 14:15:41 functional-519899 kubelet[4121]: E0127 14:15:41.490239    4121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a-kube-api-access-5ggp7 podName:4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a nodeName:}" failed. No retries permitted until 2025-01-27 14:15:41.990217206 +0000 UTC m=+54.632314502 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5ggp7" (UniqueName: "kubernetes.io/projected/4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a-kube-api-access-5ggp7") pod "kubernetes-dashboard-7779f9b69b-xww99" (UID: "4448ceaf-42ef-4523-b9cf-e15b6dbb8e9a") : failed to sync configmap cache: timed out waiting for the condition
	Jan 27 14:15:41 functional-519899 kubelet[4121]: E0127 14:15:41.492597    4121 projected.go:288] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jan 27 14:15:41 functional-519899 kubelet[4121]: E0127 14:15:41.492631    4121 projected.go:194] Error preparing data for projected volume kube-api-access-d57mv for pod kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-nz2b5: failed to sync configmap cache: timed out waiting for the condition
	Jan 27 14:15:41 functional-519899 kubelet[4121]: E0127 14:15:41.492681    4121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d473bce-1697-4496-a82d-d858960734dd-kube-api-access-d57mv podName:3d473bce-1697-4496-a82d-d858960734dd nodeName:}" failed. No retries permitted until 2025-01-27 14:15:41.992665395 +0000 UTC m=+54.634762692 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d57mv" (UniqueName: "kubernetes.io/projected/3d473bce-1697-4496-a82d-d858960734dd-kube-api-access-d57mv") pod "dashboard-metrics-scraper-5d59dccf9b-nz2b5" (UID: "3d473bce-1697-4496-a82d-d858960734dd") : failed to sync configmap cache: timed out waiting for the condition
	Jan 27 14:15:42 functional-519899 kubelet[4121]: I0127 14:15:42.736798    4121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=3.498916398 podStartE2EDuration="16.736782332s" podCreationTimestamp="2025-01-27 14:15:26 +0000 UTC" firstStartedPulling="2025-01-27 14:15:27.006288643 +0000 UTC m=+39.648385951" lastFinishedPulling="2025-01-27 14:15:40.244154585 +0000 UTC m=+52.886251885" observedRunningTime="2025-01-27 14:15:40.729273605 +0000 UTC m=+53.371370923" watchObservedRunningTime="2025-01-27 14:15:42.736782332 +0000 UTC m=+55.378879669"
	
	
	==> storage-provisioner [741e2b57791f2a6b9cc82a3905c0407e120002a74d7042a0b10e24dda8771d0b] <==
	I0127 14:15:03.583036       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0127 14:15:03.592190       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0127 14:15:03.592317       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0127 14:15:20.992279       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0127 14:15:20.992547       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-519899_442633ac-3473-4659-bbca-7d24631815f3!
	I0127 14:15:20.994925       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce168104-e6dd-4f11-acf6-bb68648c0c5d", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-519899_442633ac-3473-4659-bbca-7d24631815f3 became leader
	I0127 14:15:21.092816       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-519899_442633ac-3473-4659-bbca-7d24631815f3!
	I0127 14:15:26.333138       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0127 14:15:26.333233       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    4506b111-2e39-4585-8dd1-135ea464d2bf 343 0 2025-01-27 14:13:25 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-01-27 14:13:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-14595ad7-79ad-446f-a8c9-01ca94d81850 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  14595ad7-79ad-446f-a8c9-01ca94d81850 711 0 2025-01-27 14:15:26 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-01-27 14:15:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-01-27 14:15:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0127 14:15:26.334302       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"14595ad7-79ad-446f-a8c9-01ca94d81850", APIVersion:"v1", ResourceVersion:"711", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0127 14:15:26.334520       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-14595ad7-79ad-446f-a8c9-01ca94d81850" provisioned
	I0127 14:15:26.334549       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0127 14:15:26.334559       1 volume_store.go:212] Trying to save persistentvolume "pvc-14595ad7-79ad-446f-a8c9-01ca94d81850"
	I0127 14:15:26.346518       1 volume_store.go:219] persistentvolume "pvc-14595ad7-79ad-446f-a8c9-01ca94d81850" saved
	I0127 14:15:26.348882       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"14595ad7-79ad-446f-a8c9-01ca94d81850", APIVersion:"v1", ResourceVersion:"711", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-14595ad7-79ad-446f-a8c9-01ca94d81850
	
	
	==> storage-provisioner [a222c673336ad7fe23ea996ccdf08c785794fff38802bbbf6670e06444f2312a] <==
	I0127 14:14:51.938428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0127 14:14:51.943524       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-519899 -n functional-519899
helpers_test.go:261: (dbg) Run:  kubectl --context functional-519899 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-htngb dashboard-metrics-scraper-5d59dccf9b-nz2b5 kubernetes-dashboard-7779f9b69b-xww99
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-519899 describe pod busybox-mount mysql-58ccfd96bb-htngb dashboard-metrics-scraper-5d59dccf9b-nz2b5 kubernetes-dashboard-7779f9b69b-xww99
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-519899 describe pod busybox-mount mysql-58ccfd96bb-htngb dashboard-metrics-scraper-5d59dccf9b-nz2b5 kubernetes-dashboard-7779f9b69b-xww99: exit status 1 (77.738537ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-519899/192.168.39.137
	Start Time:       Mon, 27 Jan 2025 14:15:33 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://f3c9ec90b51495d344b9a86e51cdfabdec2d93082c369fdbbeffa401399807ef
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 27 Jan 2025 14:15:41 +0000
	      Finished:     Mon, 27 Jan 2025 14:15:41 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-frpc9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-frpc9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10s   default-scheduler  Successfully assigned default/busybox-mount to functional-519899
	  Normal  Pulling    10s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.523s (8.176s including waiting). Image size: 2395207 bytes.
	  Normal  Created    2s    kubelet            Created container: mount-munger
	  Normal  Started    2s    kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-htngb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-519899/192.168.39.137
	Start Time:       Mon, 27 Jan 2025 14:15:34 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g48t4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-g48t4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  9s    default-scheduler  Successfully assigned default/mysql-58ccfd96bb-htngb to functional-519899
	  Normal  Pulling    9s    kubelet            Pulling image "docker.io/mysql:5.7"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-nz2b5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-xww99" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-519899 describe pod busybox-mount mysql-58ccfd96bb-htngb dashboard-metrics-scraper-5d59dccf9b-nz2b5 kubernetes-dashboard-7779f9b69b-xww99: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (4.60s)

                                                
                                    

Test pass (289/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.54
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.16
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.1/json-events 3.98
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.07
18 TestDownloadOnly/v1.32.1/DeleteAll 0.15
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.65
22 TestOffline 87.27
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 261.03
29 TestAddons/serial/Volcano 41.51
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.51
35 TestAddons/parallel/Registry 16.12
36 TestAddons/parallel/Ingress 19.08
37 TestAddons/parallel/InspektorGadget 11.78
38 TestAddons/parallel/MetricsServer 5.76
40 TestAddons/parallel/CSI 43.2
41 TestAddons/parallel/Headlamp 23.06
42 TestAddons/parallel/CloudSpanner 6.59
43 TestAddons/parallel/LocalPath 15.22
44 TestAddons/parallel/NvidiaDevicePlugin 6.5
45 TestAddons/parallel/Yakd 11.87
47 TestAddons/StoppedEnableDisable 91.28
48 TestCertOptions 60.62
49 TestCertExpiration 261.42
51 TestForceSystemdFlag 54.62
52 TestForceSystemdEnv 56.83
54 TestKVMDriverInstallOrUpdate 5.94
58 TestErrorSpam/setup 43.8
59 TestErrorSpam/start 0.38
60 TestErrorSpam/status 0.77
61 TestErrorSpam/pause 1.59
62 TestErrorSpam/unpause 1.64
63 TestErrorSpam/stop 3.67
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 57.04
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 43.87
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.83
75 TestFunctional/serial/CacheCmd/cache/add_local 1.77
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 46.45
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.31
86 TestFunctional/serial/LogsFileCmd 1.33
87 TestFunctional/serial/InvalidService 4.72
89 TestFunctional/parallel/ConfigCmd 0.4
91 TestFunctional/parallel/DryRun 0.31
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 0.79
97 TestFunctional/parallel/ServiceCmdConnect 10.53
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 58.9
101 TestFunctional/parallel/SSHCmd 0.46
102 TestFunctional/parallel/CpCmd 1.41
103 TestFunctional/parallel/MySQL 35.74
104 TestFunctional/parallel/FileSync 0.2
105 TestFunctional/parallel/CertSync 1.23
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
113 TestFunctional/parallel/License 0.2
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.22
124 TestFunctional/parallel/Version/short 0.06
125 TestFunctional/parallel/Version/components 0.51
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
130 TestFunctional/parallel/ImageCommands/ImageBuild 3.27
131 TestFunctional/parallel/ImageCommands/Setup 6.92
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.32
133 TestFunctional/parallel/ServiceCmd/List 0.28
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.34
137 TestFunctional/parallel/ServiceCmd/Format 0.33
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
139 TestFunctional/parallel/ServiceCmd/URL 0.33
140 TestFunctional/parallel/ProfileCmd/profile_list 0.44
141 TestFunctional/parallel/MountCmd/any-port 13.69
142 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.5
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.41
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
151 TestFunctional/parallel/MountCmd/specific-port 1.93
152 TestFunctional/parallel/MountCmd/VerifyCleanup 0.82
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 215.37
160 TestMultiControlPlane/serial/DeployApp 5.38
161 TestMultiControlPlane/serial/PingHostFromPods 1.23
162 TestMultiControlPlane/serial/AddWorkerNode 57.91
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
165 TestMultiControlPlane/serial/CopyFile 13.68
166 TestMultiControlPlane/serial/StopSecondaryNode 91.69
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
168 TestMultiControlPlane/serial/RestartSecondaryNode 40.29
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 432.97
171 TestMultiControlPlane/serial/DeleteSecondaryNode 6.93
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
173 TestMultiControlPlane/serial/StopCluster 272.98
174 TestMultiControlPlane/serial/RestartCluster 107.45
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
176 TestMultiControlPlane/serial/AddSecondaryNode 91.11
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
181 TestJSONOutput/start/Command 58.41
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.7
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.59
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 6.46
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 96.89
213 TestMountStart/serial/StartWithMountFirst 28.09
214 TestMountStart/serial/VerifyMountFirst 0.39
215 TestMountStart/serial/StartWithMountSecond 28.76
216 TestMountStart/serial/VerifyMountSecond 0.4
217 TestMountStart/serial/DeleteFirst 0.7
218 TestMountStart/serial/VerifyMountPostDelete 0.39
219 TestMountStart/serial/Stop 1.29
220 TestMountStart/serial/RestartStopped 23.14
221 TestMountStart/serial/VerifyMountPostStop 0.39
224 TestMultiNode/serial/FreshStart2Nodes 116.4
225 TestMultiNode/serial/DeployApp2Nodes 4.15
226 TestMultiNode/serial/PingHostFrom2Pods 0.82
227 TestMultiNode/serial/AddNode 53.26
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.58
230 TestMultiNode/serial/CopyFile 7.45
231 TestMultiNode/serial/StopNode 2.21
232 TestMultiNode/serial/StartAfterStop 34.47
233 TestMultiNode/serial/RestartKeepsNodes 312.12
234 TestMultiNode/serial/DeleteNode 2.28
235 TestMultiNode/serial/StopMultiNode 182.09
236 TestMultiNode/serial/RestartMultiNode 91.85
237 TestMultiNode/serial/ValidateNameConflict 43.8
242 TestPreload 226.9
244 TestScheduledStopUnix 116.68
248 TestRunningBinaryUpgrade 183.01
250 TestKubernetesUpgrade 138.11
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
262 TestStartStop/group/old-k8s-version/serial/FirstStart 164.92
263 TestNoKubernetes/serial/StartWithK8s 104.41
265 TestPause/serial/Start 78.73
266 TestNoKubernetes/serial/StartWithStopK8s 37.74
267 TestNoKubernetes/serial/Start 28.06
268 TestStartStop/group/old-k8s-version/serial/DeployApp 9.51
269 TestPause/serial/SecondStartNoReconfiguration 40.83
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
271 TestNoKubernetes/serial/ProfileList 2.04
272 TestNoKubernetes/serial/Stop 1.32
273 TestNoKubernetes/serial/StartNoArgs 21.96
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.61
275 TestStartStop/group/old-k8s-version/serial/Stop 91.82
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
284 TestNetworkPlugins/group/false 3.45
288 TestPause/serial/Pause 0.75
289 TestPause/serial/VerifyStatus 0.26
290 TestPause/serial/Unpause 0.64
291 TestPause/serial/PauseAgain 0.75
292 TestPause/serial/DeletePaused 1.05
293 TestPause/serial/VerifyDeletedResources 67.71
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
295 TestStartStop/group/old-k8s-version/serial/SecondStart 171.18
297 TestStartStop/group/no-preload/serial/FirstStart 101.52
299 TestStartStop/group/embed-certs/serial/FirstStart 70.14
300 TestStartStop/group/no-preload/serial/DeployApp 10.33
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
302 TestStartStop/group/no-preload/serial/Stop 91.28
303 TestStartStop/group/embed-certs/serial/DeployApp 8.27
304 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.03
305 TestStartStop/group/embed-certs/serial/Stop 91.52
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
309 TestStartStop/group/old-k8s-version/serial/Pause 2.6
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.84
312 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
313 TestStartStop/group/no-preload/serial/SecondStart 305.02
314 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
315 TestStartStop/group/embed-certs/serial/SecondStart 311.9
316 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.43
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
318 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.44
319 TestStoppedBinaryUpgrade/Setup 0.66
320 TestStoppedBinaryUpgrade/Upgrade 96.35
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 318.06
323 TestStoppedBinaryUpgrade/MinikubeLogs 0.89
325 TestStartStop/group/newest-cni/serial/FirstStart 49.05
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
328 TestStartStop/group/newest-cni/serial/Stop 7.32
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
330 TestStartStop/group/newest-cni/serial/SecondStart 34.05
331 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
335 TestStartStop/group/newest-cni/serial/Pause 2.39
336 TestNetworkPlugins/group/auto/Start 56.79
337 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
338 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
339 TestStartStop/group/no-preload/serial/Pause 2.93
340 TestNetworkPlugins/group/flannel/Start 91.25
341 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
342 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
343 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
344 TestStartStop/group/embed-certs/serial/Pause 2.83
345 TestNetworkPlugins/group/enable-default-cni/Start 85.64
346 TestNetworkPlugins/group/auto/KubeletFlags 0.37
347 TestNetworkPlugins/group/auto/NetCatPod 11.26
348 TestNetworkPlugins/group/auto/DNS 0.16
349 TestNetworkPlugins/group/auto/Localhost 0.13
350 TestNetworkPlugins/group/auto/HairPin 0.14
351 TestNetworkPlugins/group/bridge/Start 60.51
352 TestNetworkPlugins/group/flannel/ControllerPod 6.01
353 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
354 TestNetworkPlugins/group/flannel/NetCatPod 12.23
355 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
356 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
357 TestNetworkPlugins/group/flannel/DNS 0.15
358 TestNetworkPlugins/group/flannel/Localhost 0.13
359 TestNetworkPlugins/group/flannel/HairPin 0.16
360 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
361 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
362 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
363 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
364 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
365 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
366 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.5
367 TestNetworkPlugins/group/calico/Start 87.97
368 TestNetworkPlugins/group/kindnet/Start 93.54
369 TestNetworkPlugins/group/custom-flannel/Start 127.52
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
371 TestNetworkPlugins/group/bridge/NetCatPod 10.23
372 TestNetworkPlugins/group/bridge/DNS 0.13
373 TestNetworkPlugins/group/bridge/Localhost 0.11
374 TestNetworkPlugins/group/bridge/HairPin 0.11
375 TestNetworkPlugins/group/calico/ControllerPod 6.01
376 TestNetworkPlugins/group/calico/KubeletFlags 0.23
377 TestNetworkPlugins/group/calico/NetCatPod 9.27
378 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
379 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
380 TestNetworkPlugins/group/kindnet/NetCatPod 10.23
381 TestNetworkPlugins/group/calico/DNS 0.16
382 TestNetworkPlugins/group/calico/Localhost 0.13
383 TestNetworkPlugins/group/calico/HairPin 0.15
384 TestNetworkPlugins/group/kindnet/DNS 0.15
385 TestNetworkPlugins/group/kindnet/Localhost 0.13
386 TestNetworkPlugins/group/kindnet/HairPin 0.13
387 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
388 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.22
389 TestNetworkPlugins/group/custom-flannel/DNS 0.14
390 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
391 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (7.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-060086 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-060086 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (7.543338636s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.54s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 14:03:47.311867  491036 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0127 14:03:47.312020  491036 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-483699/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-060086
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-060086: exit status 85 (66.825711ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-060086 | jenkins | v1.35.0 | 27 Jan 25 14:03 UTC |          |
	|         | -p download-only-060086        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:03:39
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:03:39.816300  491048 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:03:39.816625  491048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:03:39.816636  491048 out.go:358] Setting ErrFile to fd 2...
	I0127 14:03:39.816640  491048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:03:39.816897  491048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
	W0127 14:03:39.817104  491048 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20321-483699/.minikube/config/config.json: open /home/jenkins/minikube-integration/20321-483699/.minikube/config/config.json: no such file or directory
	I0127 14:03:39.817907  491048 out.go:352] Setting JSON to true
	I0127 14:03:39.818913  491048 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":13568,"bootTime":1737973052,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:03:39.819048  491048 start.go:139] virtualization: kvm guest
	I0127 14:03:39.822020  491048 out.go:97] [download-only-060086] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0127 14:03:39.822214  491048 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20321-483699/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 14:03:39.822278  491048 notify.go:220] Checking for updates...
	I0127 14:03:39.824170  491048 out.go:169] MINIKUBE_LOCATION=20321
	I0127 14:03:39.826050  491048 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:03:39.827461  491048 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20321-483699/kubeconfig
	I0127 14:03:39.828940  491048 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-483699/.minikube
	I0127 14:03:39.830292  491048 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0127 14:03:39.832501  491048 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 14:03:39.832714  491048 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:03:39.873150  491048 out.go:97] Using the kvm2 driver based on user configuration
	I0127 14:03:39.873192  491048 start.go:297] selected driver: kvm2
	I0127 14:03:39.873206  491048 start.go:901] validating driver "kvm2" against <nil>
	I0127 14:03:39.873630  491048 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:03:39.873778  491048 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20321-483699/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0127 14:03:39.890708  491048 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0127 14:03:39.890800  491048 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 14:03:39.891317  491048 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0127 14:03:39.891452  491048 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 14:03:39.891483  491048 cni.go:84] Creating CNI manager for ""
	I0127 14:03:39.891539  491048 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0127 14:03:39.891549  491048 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 14:03:39.891601  491048 start.go:340] cluster config:
	{Name:download-only-060086 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-060086 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:03:39.891807  491048 iso.go:125] acquiring lock: {Name:mk495ab01e09c95181725c4c6bd9514d993529d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 14:03:39.893805  491048 out.go:97] Downloading VM boot image ...
	I0127 14:03:39.893859  491048 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20321-483699/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0127 14:03:42.923140  491048 out.go:97] Starting "download-only-060086" primary control-plane node in "download-only-060086" cluster
	I0127 14:03:42.923177  491048 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 14:03:42.949609  491048 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0127 14:03:42.949646  491048 cache.go:56] Caching tarball of preloaded images
	I0127 14:03:42.949848  491048 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0127 14:03:42.951794  491048 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 14:03:42.951812  491048 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0127 14:03:42.974487  491048 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20321-483699/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-060086 host does not exist
	  To start a cluster, run: "minikube start -p download-only-060086"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-060086
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (3.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-867555 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-867555 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (3.984137621s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (3.98s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 14:03:51.663495  491036 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0127 14:03:51.663551  491036 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20321-483699/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-867555
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-867555: exit status 85 (67.341614ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-060086 | jenkins | v1.35.0 | 27 Jan 25 14:03 UTC |                     |
	|         | -p download-only-060086        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 27 Jan 25 14:03 UTC | 27 Jan 25 14:03 UTC |
	| delete  | -p download-only-060086        | download-only-060086 | jenkins | v1.35.0 | 27 Jan 25 14:03 UTC | 27 Jan 25 14:03 UTC |
	| start   | -o=json --download-only        | download-only-867555 | jenkins | v1.35.0 | 27 Jan 25 14:03 UTC |                     |
	|         | -p download-only-867555        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 14:03:47
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 14:03:47.722155  491237 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:03:47.722689  491237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:03:47.722716  491237 out.go:358] Setting ErrFile to fd 2...
	I0127 14:03:47.722723  491237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:03:47.723157  491237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
	I0127 14:03:47.724098  491237 out.go:352] Setting JSON to true
	I0127 14:03:47.725041  491237 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":13576,"bootTime":1737973052,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:03:47.725157  491237 start.go:139] virtualization: kvm guest
	I0127 14:03:47.726917  491237 out.go:97] [download-only-867555] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:03:47.727056  491237 notify.go:220] Checking for updates...
	I0127 14:03:47.728138  491237 out.go:169] MINIKUBE_LOCATION=20321
	I0127 14:03:47.729591  491237 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:03:47.731013  491237 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20321-483699/kubeconfig
	I0127 14:03:47.732328  491237 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-483699/.minikube
	I0127 14:03:47.733578  491237 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-867555 host does not exist
	  To start a cluster, run: "minikube start -p download-only-867555"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-867555
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I0127 14:03:52.302076  491036 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-478989 --alsologtostderr --binary-mirror http://127.0.0.1:35949 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-478989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-478989
--- PASS: TestBinaryMirror (0.65s)

                                                
                                    
TestOffline (87.27s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-121396 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-121396 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m26.312280542s)
helpers_test.go:175: Cleaning up "offline-containerd-121396" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-121396
--- PASS: TestOffline (87.27s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-384779
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-384779: exit status 85 (57.2301ms)

                                                
                                                
-- stdout --
	* Profile "addons-384779" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-384779"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-384779
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-384779: exit status 85 (56.659982ms)

                                                
                                                
-- stdout --
	* Profile "addons-384779" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-384779"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (261.03s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-384779 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-384779 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (4m21.031736491s)
--- PASS: TestAddons/Setup (261.03s)

                                                
                                    
TestAddons/serial/Volcano (41.51s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:807: volcano-scheduler stabilized in 24.942443ms
addons_test.go:815: volcano-admission stabilized in 24.962734ms
addons_test.go:823: volcano-controller stabilized in 25.017811ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-7ff7cd6989-85qpp" [2f6266d8-405c-4d12-9562-bd02eefda7ad] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005045523s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-57676bd54c-79gxr" [88495fb7-9104-479a-8196-4ed3adf402ff] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003944256s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-77df547cdf-58r54" [c5add703-c03d-4e28-a5be-3cfa17a9e6df] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003963108s
addons_test.go:842: (dbg) Run:  kubectl --context addons-384779 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-384779 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-384779 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a7957688-81eb-4e76-a45b-7b0ff48bcb9c] Pending
helpers_test.go:344: "test-job-nginx-0" [a7957688-81eb-4e76-a45b-7b0ff48bcb9c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a7957688-81eb-4e76-a45b-7b0ff48bcb9c] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.00469155s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-384779 addons disable volcano --alsologtostderr -v=1: (11.099864829s)
--- PASS: TestAddons/serial/Volcano (41.51s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-384779 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-384779 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.51s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-384779 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-384779 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [97993ee9-c88c-4042-948c-31dd8b2c52df] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [97993ee9-c88c-4042-948c-31dd8b2c52df] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003478251s
addons_test.go:633: (dbg) Run:  kubectl --context addons-384779 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-384779 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-384779 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.51s)

                                                
                                    
TestAddons/parallel/Registry (16.12s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 6.791246ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-4dq4q" [4a64dd18-b6e0-4a44-9824-a37f15ed84fd] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005300134s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qrn6q" [d61a60f5-afc7-4d2c-9b94-086a002c0efd] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004785731s
addons_test.go:331: (dbg) Run:  kubectl --context addons-384779 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-384779 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-384779 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.314940377s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 ip
2025/01/27 14:09:29 [DEBUG] GET http://192.168.39.50:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.12s)

TestAddons/parallel/Ingress (19.08s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-384779 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-384779 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-384779 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [10cbd9e4-756d-4a59-b402-085995eb20d6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [10cbd9e4-756d-4a59-b402-085995eb20d6] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004079351s
I0127 14:09:46.646173  491036 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-384779 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.50
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-384779 addons disable ingress-dns --alsologtostderr -v=1: (1.003142244s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-384779 addons disable ingress --alsologtostderr -v=1: (7.875635515s)
--- PASS: TestAddons/parallel/Ingress (19.08s)

TestAddons/parallel/InspektorGadget (11.78s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jfp6n" [4e353543-436d-4900-9924-af49835eebbf] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005356622s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-384779 addons disable inspektor-gadget --alsologtostderr -v=1: (5.778306704s)
--- PASS: TestAddons/parallel/InspektorGadget (11.78s)

TestAddons/parallel/MetricsServer (5.76s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.242865ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-mrs48" [3387aa6c-8392-4889-b654-b070e9fd1dc9] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003501271s
addons_test.go:402: (dbg) Run:  kubectl --context addons-384779 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.76s)

TestAddons/parallel/CSI (43.2s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0127 14:09:30.182804  491036 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0127 14:09:30.191027  491036 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 14:09:30.191062  491036 kapi.go:107] duration metric: took 8.282804ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.292531ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-384779 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-384779 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4ff020e5-9c3e-4de5-99e7-185ae669f2de] Pending
helpers_test.go:344: "task-pv-pod" [4ff020e5-9c3e-4de5-99e7-185ae669f2de] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4ff020e5-9c3e-4de5-99e7-185ae669f2de] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004917466s
addons_test.go:511: (dbg) Run:  kubectl --context addons-384779 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-384779 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-384779 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-384779 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-384779 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-384779 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-384779 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [18850bf0-fa0b-4dab-a4f7-a5de7140428e] Pending
helpers_test.go:344: "task-pv-pod-restore" [18850bf0-fa0b-4dab-a4f7-a5de7140428e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [18850bf0-fa0b-4dab-a4f7-a5de7140428e] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004526201s
addons_test.go:553: (dbg) Run:  kubectl --context addons-384779 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-384779 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-384779 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-384779 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.806552513s)
--- PASS: TestAddons/parallel/CSI (43.20s)

TestAddons/parallel/Headlamp (23.06s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-384779 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-384779 --alsologtostderr -v=1: (1.052844085s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-4fzfv" [eba287fa-31f0-4d85-89d4-8f9b5a1e9c1f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-4fzfv" [eba287fa-31f0-4d85-89d4-8f9b5a1e9c1f] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.00441114s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-384779 addons disable headlamp --alsologtostderr -v=1: (5.996789149s)
--- PASS: TestAddons/parallel/Headlamp (23.06s)

TestAddons/parallel/CloudSpanner (6.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-x7g4c" [11bfa954-0397-4f11-b2bc-2051535f6470] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004476476s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

TestAddons/parallel/LocalPath (15.22s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-384779 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-384779 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-384779 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [6d51741f-e31d-4165-944b-fbccf3b95cfc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [6d51741f-e31d-4165-944b-fbccf3b95cfc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [6d51741f-e31d-4165-944b-fbccf3b95cfc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.003891073s
addons_test.go:906: (dbg) Run:  kubectl --context addons-384779 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 ssh "cat /opt/local-path-provisioner/pvc-71a9dada-37a2-4f25-b6b4-301f4d108abb_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-384779 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-384779 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (15.22s)

TestAddons/parallel/NvidiaDevicePlugin (6.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vv8pm" [6db7a1f5-77e6-469c-b399-789a0c747d41] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004716904s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)

TestAddons/parallel/Yakd (11.87s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-f9hlh" [71fee914-a33e-4e32-a34e-afaac7acc967] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004170463s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-384779 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-384779 addons disable yakd --alsologtostderr -v=1: (5.862571993s)
--- PASS: TestAddons/parallel/Yakd (11.87s)

TestAddons/StoppedEnableDisable (91.28s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-384779
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-384779: (1m30.968716228s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-384779
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-384779
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-384779
--- PASS: TestAddons/StoppedEnableDisable (91.28s)

TestCertOptions (60.62s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-311142 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-311142 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (59.076330253s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-311142 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-311142 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-311142 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-311142" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-311142
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-311142: (1.050599777s)
--- PASS: TestCertOptions (60.62s)

TestCertExpiration (261.42s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-133512 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-133512 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (53.125030465s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-133512 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
E0127 15:10:19.902920  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-133512 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (27.263153556s)
helpers_test.go:175: Cleaning up "cert-expiration-133512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-133512
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-133512: (1.035914829s)
--- PASS: TestCertExpiration (261.42s)

TestForceSystemdFlag (54.62s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-139474 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-139474 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (53.277351096s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-139474 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-139474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-139474
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-139474: (1.113757303s)
--- PASS: TestForceSystemdFlag (54.62s)

TestForceSystemdEnv (56.83s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-548058 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-548058 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (55.597922438s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-548058 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-548058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-548058
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-548058: (1.028437575s)
--- PASS: TestForceSystemdEnv (56.83s)

TestKVMDriverInstallOrUpdate (5.94s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0127 15:06:18.129288  491036 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 15:06:18.129462  491036 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0127 15:06:18.162061  491036 install.go:62] docker-machine-driver-kvm2: exit status 1
W0127 15:06:18.162494  491036 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 15:06:18.162576  491036 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate844371904/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.94s)

TestErrorSpam/setup (43.8s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-951691 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-951691 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-951691 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-951691 --driver=kvm2  --container-runtime=containerd: (43.800197773s)
--- PASS: TestErrorSpam/setup (43.80s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.77s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.59s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 pause
--- PASS: TestErrorSpam/pause (1.59s)

TestErrorSpam/unpause (1.64s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 unpause
--- PASS: TestErrorSpam/unpause (1.64s)

TestErrorSpam/stop (3.67s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 stop: (1.477611467s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-951691 --log_dir /tmp/nospam-951691 stop: (1.313546776s)
--- PASS: TestErrorSpam/stop (3.67s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20321-483699/.minikube/files/etc/test/nested/copy/491036/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (57.04s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-519899 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0127 14:13:14.051012  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:14.057563  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:14.069115  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:14.090697  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:14.132190  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:14.213726  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:14.375293  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:14.697114  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:15.339295  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:16.621543  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:19.184500  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:24.306637  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:13:34.548730  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-519899 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (57.037198964s)
--- PASS: TestFunctional/serial/StartWithProxy (57.04s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (43.87s)

=== RUN   TestFunctional/serial/SoftStart
I0127 14:13:34.982658  491036 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-519899 --alsologtostderr -v=8
E0127 14:13:55.030216  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-519899 --alsologtostderr -v=8: (43.869428543s)
functional_test.go:663: soft start took 43.870111397s for "functional-519899" cluster.
I0127 14:14:18.852446  491036 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (43.87s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-519899 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-519899 cache add registry.k8s.io/pause:3.3: (1.026813832s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.83s)

TestFunctional/serial/CacheCmd/cache/add_local (1.77s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-519899 /tmp/TestFunctionalserialCacheCmdcacheadd_local2298400896/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 cache add minikube-local-cache-test:functional-519899
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-519899 cache add minikube-local-cache-test:functional-519899: (1.460856142s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 cache delete minikube-local-cache-test:functional-519899
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-519899
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.77s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-519899 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (216.340162ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 kubectl -- --context functional-519899 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-519899 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (46.45s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-519899 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0127 14:14:35.992869  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-519899 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.448627957s)
functional_test.go:761: restart took 46.44875353s for "functional-519899" cluster.
I0127 14:15:12.261760  491036 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (46.45s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-519899 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.31s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-519899 logs: (1.306837921s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

TestFunctional/serial/LogsFileCmd (1.33s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 logs --file /tmp/TestFunctionalserialLogsFileCmd1838519790/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-519899 logs --file /tmp/TestFunctionalserialLogsFileCmd1838519790/001/logs.txt: (1.329625161s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)

TestFunctional/serial/InvalidService (4.72s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-519899 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-519899
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-519899: exit status 115 (281.336897ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.137:32346 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-519899 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-519899 delete -f testdata/invalidsvc.yaml: (1.230118686s)
--- PASS: TestFunctional/serial/InvalidService (4.72s)

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-519899 config get cpus: exit status 14 (71.919975ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-519899 config get cpus: exit status 14 (61.053027ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-519899 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-519899 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (160.083265ms)

-- stdout --
	* [functional-519899] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-483699/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-483699/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0127 14:15:33.558858  498816 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:15:33.559413  498816 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:15:33.559428  498816 out.go:358] Setting ErrFile to fd 2...
	I0127 14:15:33.559435  498816 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:15:33.559675  498816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
	I0127 14:15:33.560270  498816 out.go:352] Setting JSON to false
	I0127 14:15:33.561407  498816 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":14282,"bootTime":1737973052,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:15:33.561518  498816 start.go:139] virtualization: kvm guest
	I0127 14:15:33.564927  498816 out.go:177] * [functional-519899] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 14:15:33.566737  498816 notify.go:220] Checking for updates...
	I0127 14:15:33.566745  498816 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 14:15:33.568382  498816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:15:33.570059  498816 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-483699/kubeconfig
	I0127 14:15:33.571636  498816 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-483699/.minikube
	I0127 14:15:33.573024  498816 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:15:33.574399  498816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:15:33.576440  498816 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:15:33.577127  498816 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:15:33.577212  498816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:15:33.594439  498816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38285
	I0127 14:15:33.594969  498816 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:15:33.595510  498816 main.go:141] libmachine: Using API Version  1
	I0127 14:15:33.595537  498816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:15:33.595938  498816 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:15:33.596162  498816 main.go:141] libmachine: (functional-519899) Calling .DriverName
	I0127 14:15:33.596477  498816 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:15:33.596917  498816 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:15:33.597004  498816 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:15:33.613443  498816 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37329
	I0127 14:15:33.614087  498816 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:15:33.614685  498816 main.go:141] libmachine: Using API Version  1
	I0127 14:15:33.614716  498816 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:15:33.615136  498816 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:15:33.615348  498816 main.go:141] libmachine: (functional-519899) Calling .DriverName
	I0127 14:15:33.652517  498816 out.go:177] * Using the kvm2 driver based on existing profile
	I0127 14:15:33.653879  498816 start.go:297] selected driver: kvm2
	I0127 14:15:33.653902  498816 start.go:901] validating driver "kvm2" against &{Name:functional-519899 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-519899 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:15:33.654035  498816 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:15:33.656262  498816 out.go:201] 
	W0127 14:15:33.657808  498816 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 14:15:33.659158  498816 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-519899 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.31s)
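Editor's note: the dry run above fails validation because 250MiB is below the 1800MB floor, producing RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23. An illustrative sketch of that kind of memory-floor check; the constant and function names are assumptions for illustration, not minikube's actual validation code.

package main

import (
	"fmt"
	"os"
)

const minUsableMemoryMB = 1800 // floor reported in the log above

func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // the exit status the test asserts on
	}
}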

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-519899 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-519899 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (142.408302ms)

                                                
                                                
-- stdout --
	* [functional-519899] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-483699/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-483699/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:15:33.855779  498871 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:15:33.855890  498871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:15:33.855902  498871 out.go:358] Setting ErrFile to fd 2...
	I0127 14:15:33.855908  498871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:15:33.856175  498871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
	I0127 14:15:33.856747  498871 out.go:352] Setting JSON to false
	I0127 14:15:33.857773  498871 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":14282,"bootTime":1737973052,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 14:15:33.857849  498871 start.go:139] virtualization: kvm guest
	I0127 14:15:33.860051  498871 out.go:177] * [functional-519899] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0127 14:15:33.861491  498871 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 14:15:33.861513  498871 notify.go:220] Checking for updates...
	I0127 14:15:33.864184  498871 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 14:15:33.865390  498871 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-483699/kubeconfig
	I0127 14:15:33.866559  498871 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-483699/.minikube
	I0127 14:15:33.867781  498871 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 14:15:33.868909  498871 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 14:15:33.870723  498871 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:15:33.871108  498871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:15:33.871189  498871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:15:33.887261  498871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I0127 14:15:33.887730  498871 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:15:33.888303  498871 main.go:141] libmachine: Using API Version  1
	I0127 14:15:33.888323  498871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:15:33.888701  498871 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:15:33.888886  498871 main.go:141] libmachine: (functional-519899) Calling .DriverName
	I0127 14:15:33.889195  498871 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 14:15:33.889495  498871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:15:33.889535  498871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:15:33.905438  498871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46349
	I0127 14:15:33.905881  498871 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:15:33.906378  498871 main.go:141] libmachine: Using API Version  1
	I0127 14:15:33.906409  498871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:15:33.906711  498871 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:15:33.906892  498871 main.go:141] libmachine: (functional-519899) Calling .DriverName
	I0127 14:15:33.942169  498871 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0127 14:15:33.943335  498871 start.go:297] selected driver: kvm2
	I0127 14:15:33.943346  498871 start.go:901] validating driver "kvm2" against &{Name:functional-519899 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-519899 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 14:15:33.943448  498871 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 14:15:33.945369  498871 out.go:201] 
	W0127 14:15:33.946502  498871 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 14:15:33.947718  498871 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
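Editor's note: the localized (French) dry-run output above can be reproduced by giving the child process a French locale. A sketch under the assumption that minikube reads its message language from the standard locale variables (LANG/LC_ALL); the exact variable the test harness sets is not visible in this log.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"start", "-p", "functional-519899", "--dry-run", "--memory", "250MB",
		"--driver=kvm2", "--container-runtime=containerd")
	cmd.Env = append(os.Environ(), "LC_ALL=fr", "LANG=fr_FR.UTF-8") // assumed locale settings
	out, _ := cmd.CombinedOutput()
	fmt.Printf("%s", out)
}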

                                                
                                    
TestFunctional/parallel/StatusCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)
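Editor's note: the -f flag in the status command above is a Go text/template string. A minimal sketch of how such a template renders; the struct here is illustrative (its fields mirror the placeholders in the command), not minikube's real status type, and the literal "kublet" label is kept as it appears in the test.

package main

import (
	"os"
	"text/template"
)

type status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	_ = tmpl.Execute(os.Stdout, status{
		Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
	})
}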

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-519899 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-519899 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-dzvt5" [2127b456-24b1-4d0c-a5ac-bd2698d2facb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-dzvt5" [2127b456-24b1-4d0c-a5ac-bd2698d2facb] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004635744s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.137:32423
functional_test.go:1675: http://192.168.39.137:32423: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-dzvt5

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.137:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.137:32423
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.53s)
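Editor's note: once the test has the NodePort URL, the connectivity check is a plain HTTP GET against it, whose echoed body is shown above. A sketch of that request using the URL from this run; the timeout value is an assumption.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://192.168.39.137:32423")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d\n%s", resp.StatusCode, body)
}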

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (58.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cce26af7-2c27-4dc0-a0c0-8a1a63b89e8e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004289279s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-519899 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-519899 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-519899 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-519899 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9ad21048-92b8-4b43-a8f3-a7c7597f7771] Pending
helpers_test.go:344: "sp-pod" [9ad21048-92b8-4b43-a8f3-a7c7597f7771] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9ad21048-92b8-4b43-a8f3-a7c7597f7771] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.003950088s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-519899 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-519899 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-519899 delete -f testdata/storage-provisioner/pod.yaml: (1.087692316s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-519899 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b4cf0903-688a-4ce5-8745-b55ba1110d7f] Pending
helpers_test.go:344: "sp-pod" [b4cf0903-688a-4ce5-8745-b55ba1110d7f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0127 14:15:57.914286  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [b4cf0903-688a-4ce5-8745-b55ba1110d7f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 31.003593789s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-519899 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (58.90s)
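Editor's note: the persistence check above writes a file through one pod, deletes the pod, recreates it from the same manifest, and verifies the file survived on the claim. A sketch of that flow driven with kubectl via os/exec; the readiness wait is simplified to `kubectl wait`, which is an assumption, since the test uses its own polling helpers.

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	full := append([]string{"--context", "functional-519899"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("kubectl %v\n%s", args, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	run("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")
	run("exec", "sp-pod", "--", "ls", "/tmp/mount") // should still list "foo"
}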

                                                
                                    
TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh -n functional-519899 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 cp functional-519899:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1146623670/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh -n functional-519899 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh -n functional-519899 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.41s)

                                                
                                    
TestFunctional/parallel/MySQL (35.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-519899 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-htngb" [3896773c-b0be-414b-901e-7b018511c481] Pending
helpers_test.go:344: "mysql-58ccfd96bb-htngb" [3896773c-b0be-414b-901e-7b018511c481] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-htngb" [3896773c-b0be-414b-901e-7b018511c481] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.006407784s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-519899 exec mysql-58ccfd96bb-htngb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-519899 exec mysql-58ccfd96bb-htngb -- mysql -ppassword -e "show databases;": exit status 1 (212.55162ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 14:16:03.351228  491036 retry.go:31] will retry after 1.000579986s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-519899 exec mysql-58ccfd96bb-htngb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-519899 exec mysql-58ccfd96bb-htngb -- mysql -ppassword -e "show databases;": exit status 1 (358.488755ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 14:16:04.711462  491036 retry.go:31] will retry after 2.057116624s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-519899 exec mysql-58ccfd96bb-htngb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-519899 exec mysql-58ccfd96bb-htngb -- mysql -ppassword -e "show databases;": exit status 1 (140.444528ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0127 14:16:06.909869  491036 retry.go:31] will retry after 2.648240379s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-519899 exec mysql-58ccfd96bb-htngb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.74s)
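Editor's note: the retries visible above ("will retry after ...") re-run the same `mysql -e "show databases;"` exec with a growing delay until the server inside the pod is ready. A sketch of that pattern; the fixed doubling backoff and attempt budget here are illustrative, since the test's retry helper picks its own jittered intervals.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-519899", "exec", "mysql-58ccfd96bb-htngb", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	delay := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	fmt.Println("mysql never became ready")
}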

                                                
                                    
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/491036/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "sudo cat /etc/test/nested/copy/491036/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
TestFunctional/parallel/CertSync (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/491036.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "sudo cat /etc/ssl/certs/491036.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/491036.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "sudo cat /usr/share/ca-certificates/491036.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/4910362.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "sudo cat /etc/ssl/certs/4910362.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/4910362.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "sudo cat /usr/share/ca-certificates/4910362.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.23s)
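Editor's note: CertSync, like FileSync above, verifies files pushed into the VM by cat-ing them over the profile's ssh session. A sketch of that pattern; the paths are taken from this run and the helper name is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func catOverSSH(profile, path string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", fmt.Sprintf("sudo cat %s", path)).CombinedOutput()
	return string(out), err
}

func main() {
	paths := []string{
		"/etc/ssl/certs/491036.pem",
		"/usr/share/ca-certificates/491036.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		out, err := catOverSSH("functional-519899", p)
		fmt.Printf("%s err=%v\n%s\n", p, err, out)
	}
}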

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-519899 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-519899 ssh "sudo systemctl is-active docker": exit status 1 (216.50662ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-519899 ssh "sudo systemctl is-active crio": exit status 1 (207.497871ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
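Editor's note: `systemctl is-active` prints the unit state and exits non-zero when the unit is not active (status 3 on the node, surfaced by the ssh wrapper as exit status 1 above), so the pass condition is the "inactive" text rather than a zero exit code. A sketch of that check; the helper name is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func runtimeInactive(profile, unit string) bool {
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", fmt.Sprintf("sudo systemctl is-active %s", unit)).CombinedOutput()
	return strings.TrimSpace(string(out)) == "inactive"
}

func main() {
	for _, unit := range []string{"docker", "crio"} {
		fmt.Printf("%s inactive: %v\n", unit, runtimeInactive("functional-519899", unit))
	}
}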

                                                
                                    
TestFunctional/parallel/License (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-519899 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-519899 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-ph7x4" [2e838724-4178-4ca0-b457-87236081912b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-ph7x4" [2e838724-4178-4ca0-b457-87236081912b] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003608701s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-519899 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-519899
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kicbase/echo-server:functional-519899
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-519899 image ls --format short --alsologtostderr:
I0127 14:15:44.198003  499463 out.go:345] Setting OutFile to fd 1 ...
I0127 14:15:44.198287  499463 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:15:44.198303  499463 out.go:358] Setting ErrFile to fd 2...
I0127 14:15:44.198312  499463 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:15:44.198636  499463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
I0127 14:15:44.199371  499463 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:15:44.199486  499463 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:15:44.199843  499463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:44.199916  499463 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:44.216569  499463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36655
I0127 14:15:44.217109  499463 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:44.217714  499463 main.go:141] libmachine: Using API Version  1
I0127 14:15:44.217737  499463 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:44.218190  499463 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:44.218426  499463 main.go:141] libmachine: (functional-519899) Calling .GetState
I0127 14:15:44.220547  499463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:44.220592  499463 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:44.238633  499463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
I0127 14:15:44.239092  499463 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:44.239655  499463 main.go:141] libmachine: Using API Version  1
I0127 14:15:44.239681  499463 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:44.240010  499463 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:44.240253  499463 main.go:141] libmachine: (functional-519899) Calling .DriverName
I0127 14:15:44.240477  499463 ssh_runner.go:195] Run: systemctl --version
I0127 14:15:44.240508  499463 main.go:141] libmachine: (functional-519899) Calling .GetSSHHostname
I0127 14:15:44.243644  499463 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:44.244148  499463 main.go:141] libmachine: (functional-519899) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:be:ed", ip: ""} in network mk-functional-519899: {Iface:virbr1 ExpiryTime:2025-01-27 15:12:52 +0000 UTC Type:0 Mac:52:54:00:7e:be:ed Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:functional-519899 Clientid:01:52:54:00:7e:be:ed}
I0127 14:15:44.244177  499463 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined IP address 192.168.39.137 and MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:44.244424  499463 main.go:141] libmachine: (functional-519899) Calling .GetSSHPort
I0127 14:15:44.244586  499463 main.go:141] libmachine: (functional-519899) Calling .GetSSHKeyPath
I0127 14:15:44.244714  499463 main.go:141] libmachine: (functional-519899) Calling .GetSSHUsername
I0127 14:15:44.244901  499463 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-483699/.minikube/machines/functional-519899/id_rsa Username:docker}
I0127 14:15:44.320219  499463 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 14:15:44.361089  499463 main.go:141] libmachine: Making call to close driver server
I0127 14:15:44.361103  499463 main.go:141] libmachine: (functional-519899) Calling .Close
I0127 14:15:44.361413  499463 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:15:44.361432  499463 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:15:44.361442  499463 main.go:141] libmachine: Making call to close driver server
I0127 14:15:44.361452  499463 main.go:141] libmachine: (functional-519899) Calling .Close
I0127 14:15:44.362279  499463 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:15:44.362326  499463 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:15:44.362507  499463 main.go:141] libmachine: (functional-519899) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-519899 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.32.1            | sha256:95c0bd | 28.7MB |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| docker.io/kicbase/echo-server               | functional-519899  | sha256:9056ab | 2.37MB |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:50415e | 38.6MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-proxy                  | v1.32.1            | sha256:e29f9c | 30.9MB |
| registry.k8s.io/kube-scheduler              | v1.32.1            | sha256:2b0d65 | 20.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/kube-controller-manager     | v1.32.1            | sha256:019ee1 | 26.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/library/minikube-local-cache-test | functional-519899  | sha256:2ef788 | 992B   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:a9e7e6 | 57.7MB |
| docker.io/library/nginx                     | latest             | sha256:9bea9f | 72.1MB |
| localhost/my-image                          | functional-519899  | sha256:d43294 | 775kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-519899 image ls --format table --alsologtostderr:
I0127 14:15:47.923476  500043 out.go:345] Setting OutFile to fd 1 ...
I0127 14:15:47.923832  500043 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:15:47.923877  500043 out.go:358] Setting ErrFile to fd 2...
I0127 14:15:47.923895  500043 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:15:47.924252  500043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
I0127 14:15:47.925177  500043 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:15:47.925400  500043 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:15:47.926039  500043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:47.926128  500043 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:47.944187  500043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
I0127 14:15:47.944717  500043 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:47.945378  500043 main.go:141] libmachine: Using API Version  1
I0127 14:15:47.945399  500043 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:47.945764  500043 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:47.945958  500043 main.go:141] libmachine: (functional-519899) Calling .GetState
I0127 14:15:47.947809  500043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:47.947859  500043 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:47.963943  500043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42167
I0127 14:15:47.964344  500043 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:47.964949  500043 main.go:141] libmachine: Using API Version  1
I0127 14:15:47.964973  500043 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:47.965451  500043 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:47.965680  500043 main.go:141] libmachine: (functional-519899) Calling .DriverName
I0127 14:15:47.965920  500043 ssh_runner.go:195] Run: systemctl --version
I0127 14:15:47.965954  500043 main.go:141] libmachine: (functional-519899) Calling .GetSSHHostname
I0127 14:15:47.968844  500043 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:47.969221  500043 main.go:141] libmachine: (functional-519899) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:be:ed", ip: ""} in network mk-functional-519899: {Iface:virbr1 ExpiryTime:2025-01-27 15:12:52 +0000 UTC Type:0 Mac:52:54:00:7e:be:ed Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:functional-519899 Clientid:01:52:54:00:7e:be:ed}
I0127 14:15:47.969246  500043 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined IP address 192.168.39.137 and MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:47.969328  500043 main.go:141] libmachine: (functional-519899) Calling .GetSSHPort
I0127 14:15:47.969515  500043 main.go:141] libmachine: (functional-519899) Calling .GetSSHKeyPath
I0127 14:15:47.969626  500043 main.go:141] libmachine: (functional-519899) Calling .GetSSHUsername
I0127 14:15:47.969773  500043 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-483699/.minikube/machines/functional-519899/id_rsa Username:docker}
I0127 14:15:48.060632  500043 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 14:15:48.111420  500043 main.go:141] libmachine: Making call to close driver server
I0127 14:15:48.111437  500043 main.go:141] libmachine: (functional-519899) Calling .Close
I0127 14:15:48.111681  500043 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:15:48.111699  500043 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:15:48.111707  500043 main.go:141] libmachine: Making call to close driver server
I0127 14:15:48.111715  500043 main.go:141] libmachine: (functional-519899) Calling .Close
I0127 14:15:48.112981  500043 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:15:48.113021  500043 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:15:48.112984  500043 main.go:141] libmachine: (functional-519899) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)
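Editor's note: the short, table, and json image listings above are all built from the node-side `sudo crictl images --output json` call visible in their stderr logs. A sketch of decoding that JSON; the field names follow crictl's usual output shape but are stated here as an assumption, not verified against the crictl version on this node.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range list.Images {
		fmt.Printf("%s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}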

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-519899 image ls --format json --alsologtostderr:
[{"id":"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"20657536"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097
fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"57680541"},{"id":"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"30908485"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-519899"],"size":"2372971"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/ku
be-apiserver:v1.32.1"],"size":"28671624"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:2ef788301f5490beeeb9ce0357a01e8989ebe4ae3bb05877c41321c10ceeb282","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-519899"],"size":"992"},{"id":"sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":["docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a"],"repoTags":["docker.io/library/nginx:latest"],"size":"72080558"},{"id":"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["re
gistry.k8s.io/kube-controller-manager:v1.32.1"],"size":"26258470"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"38601118"},{"id":"sha256:d432944bf95840b2c8913c638b4cbeb228e8a9b492f2e1183d685d3580c50f66","repoDigests":[],"repoTags":["localhost/my-image:functional-519899"],"size":"774885"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{
"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-519899 image ls --format json --alsologtostderr:
I0127 14:15:47.913472  500037 out.go:345] Setting OutFile to fd 1 ...
I0127 14:15:47.918320  500037 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:15:47.918344  500037 out.go:358] Setting ErrFile to fd 2...
I0127 14:15:47.918351  500037 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:15:47.918708  500037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
I0127 14:15:47.919697  500037 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:15:47.919862  500037 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:15:47.920480  500037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:47.920582  500037 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:47.938611  500037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33677
I0127 14:15:47.939126  500037 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:47.939859  500037 main.go:141] libmachine: Using API Version  1
I0127 14:15:47.939880  500037 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:47.940263  500037 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:47.940508  500037 main.go:141] libmachine: (functional-519899) Calling .GetState
I0127 14:15:47.942484  500037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:47.942533  500037 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:47.959995  500037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
I0127 14:15:47.960563  500037 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:47.961085  500037 main.go:141] libmachine: Using API Version  1
I0127 14:15:47.961105  500037 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:47.961486  500037 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:47.961718  500037 main.go:141] libmachine: (functional-519899) Calling .DriverName
I0127 14:15:47.961927  500037 ssh_runner.go:195] Run: systemctl --version
I0127 14:15:47.961961  500037 main.go:141] libmachine: (functional-519899) Calling .GetSSHHostname
I0127 14:15:47.965196  500037 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:47.965618  500037 main.go:141] libmachine: (functional-519899) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:be:ed", ip: ""} in network mk-functional-519899: {Iface:virbr1 ExpiryTime:2025-01-27 15:12:52 +0000 UTC Type:0 Mac:52:54:00:7e:be:ed Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:functional-519899 Clientid:01:52:54:00:7e:be:ed}
I0127 14:15:47.965687  500037 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined IP address 192.168.39.137 and MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:47.965950  500037 main.go:141] libmachine: (functional-519899) Calling .GetSSHPort
I0127 14:15:47.966125  500037 main.go:141] libmachine: (functional-519899) Calling .GetSSHKeyPath
I0127 14:15:47.966250  500037 main.go:141] libmachine: (functional-519899) Calling .GetSSHUsername
I0127 14:15:47.966413  500037 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-483699/.minikube/machines/functional-519899/id_rsa Username:docker}
I0127 14:15:48.054197  500037 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 14:15:48.104685  500037 main.go:141] libmachine: Making call to close driver server
I0127 14:15:48.104707  500037 main.go:141] libmachine: (functional-519899) Calling .Close
I0127 14:15:48.105017  500037 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:15:48.105041  500037 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:15:48.105052  500037 main.go:141] libmachine: Making call to close driver server
I0127 14:15:48.105061  500037 main.go:141] libmachine: (functional-519899) Calling .Close
I0127 14:15:48.105342  500037 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:15:48.105369  500037 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
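Editor's note: the JSON above is machine-readable; a minimal Go sketch (not part of the test suite) for decoding it is below. The struct fields mirror the keys visible in the stdout (id, repoDigests, repoTags, size); the file name listimages.go and the usage line are illustrative assumptions only.

// listimages.go - minimal sketch: decode the output of
// "minikube image ls --format json" (schema assumed from the stdout above).
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// imageInfo mirrors the keys seen above: id, repoDigests, repoTags, size.
type imageInfo struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Illustrative usage:
	//   out/minikube-linux-amd64 -p functional-519899 image ls --format json | go run listimages.go
	var images []imageInfo
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		fmt.Printf("%s\t%s bytes\n", img.RepoTags, img.Size)
	}
}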

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-519899 image ls --format yaml --alsologtostderr:
- id: sha256:2ef788301f5490beeeb9ce0357a01e8989ebe4ae3bb05877c41321c10ceeb282
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-519899
size: "992"
- id: sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "57680541"
- id: sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "30908485"
- id: sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "20657536"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "38601118"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "28671624"
- id: sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "26258470"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests:
- docker.io/library/nginx@sha256:0a399eb16751829e1af26fea27b20c3ec28d7ab1fb72182879dcae1cca21206a
repoTags:
- docker.io/library/nginx:latest
size: "72080558"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-519899
size: "2372971"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-519899 image ls --format yaml --alsologtostderr:
I0127 14:15:44.421212  499518 out.go:345] Setting OutFile to fd 1 ...
I0127 14:15:44.421357  499518 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:15:44.421372  499518 out.go:358] Setting ErrFile to fd 2...
I0127 14:15:44.421379  499518 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:15:44.421768  499518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
I0127 14:15:44.422611  499518 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:15:44.422712  499518 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:15:44.423132  499518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:44.423201  499518 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:44.445749  499518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
I0127 14:15:44.446294  499518 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:44.447036  499518 main.go:141] libmachine: Using API Version  1
I0127 14:15:44.447072  499518 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:44.447526  499518 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:44.447742  499518 main.go:141] libmachine: (functional-519899) Calling .GetState
I0127 14:15:44.450070  499518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:44.450134  499518 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:44.466298  499518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37429
I0127 14:15:44.466897  499518 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:44.467498  499518 main.go:141] libmachine: Using API Version  1
I0127 14:15:44.467525  499518 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:44.467893  499518 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:44.468076  499518 main.go:141] libmachine: (functional-519899) Calling .DriverName
I0127 14:15:44.468357  499518 ssh_runner.go:195] Run: systemctl --version
I0127 14:15:44.468390  499518 main.go:141] libmachine: (functional-519899) Calling .GetSSHHostname
I0127 14:15:44.471051  499518 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:44.471567  499518 main.go:141] libmachine: (functional-519899) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:be:ed", ip: ""} in network mk-functional-519899: {Iface:virbr1 ExpiryTime:2025-01-27 15:12:52 +0000 UTC Type:0 Mac:52:54:00:7e:be:ed Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:functional-519899 Clientid:01:52:54:00:7e:be:ed}
I0127 14:15:44.471592  499518 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined IP address 192.168.39.137 and MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:44.471701  499518 main.go:141] libmachine: (functional-519899) Calling .GetSSHPort
I0127 14:15:44.471885  499518 main.go:141] libmachine: (functional-519899) Calling .GetSSHKeyPath
I0127 14:15:44.472044  499518 main.go:141] libmachine: (functional-519899) Calling .GetSSHUsername
I0127 14:15:44.472199  499518 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-483699/.minikube/machines/functional-519899/id_rsa Username:docker}
I0127 14:15:44.547736  499518 ssh_runner.go:195] Run: sudo crictl images --output json
I0127 14:15:44.584187  499518 main.go:141] libmachine: Making call to close driver server
I0127 14:15:44.584200  499518 main.go:141] libmachine: (functional-519899) Calling .Close
I0127 14:15:44.584518  499518 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:15:44.584557  499518 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:15:44.584570  499518 main.go:141] libmachine: Making call to close driver server
I0127 14:15:44.584582  499518 main.go:141] libmachine: (functional-519899) Calling .Close
I0127 14:15:44.584887  499518 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:15:44.584938  499518 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:15:44.584951  499518 main.go:141] libmachine: (functional-519899) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-519899 ssh pgrep buildkitd: exit status 1 (213.36584ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image build -t localhost/my-image:functional-519899 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-519899 image build -t localhost/my-image:functional-519899 testdata/build --alsologtostderr: (2.762316823s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-519899 image build -t localhost/my-image:functional-519899 testdata/build --alsologtostderr:
I0127 14:15:44.853393  499647 out.go:345] Setting OutFile to fd 1 ...
I0127 14:15:44.854224  499647 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:15:44.854237  499647 out.go:358] Setting ErrFile to fd 2...
I0127 14:15:44.854241  499647 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 14:15:44.854467  499647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
I0127 14:15:44.855165  499647 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:15:44.855745  499647 config.go:182] Loaded profile config "functional-519899": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0127 14:15:44.856148  499647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:44.856194  499647 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:44.872407  499647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40801
I0127 14:15:44.872932  499647 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:44.873535  499647 main.go:141] libmachine: Using API Version  1
I0127 14:15:44.873567  499647 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:44.873963  499647 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:44.874182  499647 main.go:141] libmachine: (functional-519899) Calling .GetState
I0127 14:15:44.875852  499647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0127 14:15:44.875889  499647 main.go:141] libmachine: Launching plugin server for driver kvm2
I0127 14:15:44.890638  499647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40155
I0127 14:15:44.891124  499647 main.go:141] libmachine: () Calling .GetVersion
I0127 14:15:44.891625  499647 main.go:141] libmachine: Using API Version  1
I0127 14:15:44.891644  499647 main.go:141] libmachine: () Calling .SetConfigRaw
I0127 14:15:44.891971  499647 main.go:141] libmachine: () Calling .GetMachineName
I0127 14:15:44.892168  499647 main.go:141] libmachine: (functional-519899) Calling .DriverName
I0127 14:15:44.892374  499647 ssh_runner.go:195] Run: systemctl --version
I0127 14:15:44.892411  499647 main.go:141] libmachine: (functional-519899) Calling .GetSSHHostname
I0127 14:15:44.895240  499647 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:44.895670  499647 main.go:141] libmachine: (functional-519899) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:be:ed", ip: ""} in network mk-functional-519899: {Iface:virbr1 ExpiryTime:2025-01-27 15:12:52 +0000 UTC Type:0 Mac:52:54:00:7e:be:ed Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:functional-519899 Clientid:01:52:54:00:7e:be:ed}
I0127 14:15:44.895710  499647 main.go:141] libmachine: (functional-519899) DBG | domain functional-519899 has defined IP address 192.168.39.137 and MAC address 52:54:00:7e:be:ed in network mk-functional-519899
I0127 14:15:44.895872  499647 main.go:141] libmachine: (functional-519899) Calling .GetSSHPort
I0127 14:15:44.896060  499647 main.go:141] libmachine: (functional-519899) Calling .GetSSHKeyPath
I0127 14:15:44.896233  499647 main.go:141] libmachine: (functional-519899) Calling .GetSSHUsername
I0127 14:15:44.896374  499647 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-483699/.minikube/machines/functional-519899/id_rsa Username:docker}
I0127 14:15:44.972113  499647 build_images.go:161] Building image from path: /tmp/build.806961038.tar
I0127 14:15:44.972188  499647 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 14:15:44.982880  499647 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.806961038.tar
I0127 14:15:44.987207  499647 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.806961038.tar: stat -c "%s %y" /var/lib/minikube/build/build.806961038.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.806961038.tar': No such file or directory
I0127 14:15:44.987249  499647 ssh_runner.go:362] scp /tmp/build.806961038.tar --> /var/lib/minikube/build/build.806961038.tar (3072 bytes)
I0127 14:15:45.012210  499647 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.806961038
I0127 14:15:45.021607  499647 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.806961038 -xf /var/lib/minikube/build/build.806961038.tar
I0127 14:15:45.030984  499647 containerd.go:394] Building image: /var/lib/minikube/build/build.806961038
I0127 14:15:45.031056  499647 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.806961038 --local dockerfile=/var/lib/minikube/build/build.806961038 --output type=image,name=localhost/my-image:functional-519899
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:bc9c06a0938470749cff093a024d5700b54f861bfafe4c4db903ba9217fd458e
#8 exporting manifest sha256:bc9c06a0938470749cff093a024d5700b54f861bfafe4c4db903ba9217fd458e 0.0s done
#8 exporting config sha256:d432944bf95840b2c8913c638b4cbeb228e8a9b492f2e1183d685d3580c50f66 0.0s done
#8 naming to localhost/my-image:functional-519899 done
#8 DONE 0.2s
I0127 14:15:47.525484  499647 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.806961038 --local dockerfile=/var/lib/minikube/build/build.806961038 --output type=image,name=localhost/my-image:functional-519899: (2.494389721s)
I0127 14:15:47.525565  499647 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.806961038
I0127 14:15:47.542373  499647 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.806961038.tar
I0127 14:15:47.559633  499647 build_images.go:217] Built localhost/my-image:functional-519899 from /tmp/build.806961038.tar
I0127 14:15:47.559671  499647 build_images.go:133] succeeded building to: functional-519899
I0127 14:15:47.559676  499647 build_images.go:134] failed building to: 
I0127 14:15:47.559701  499647 main.go:141] libmachine: Making call to close driver server
I0127 14:15:47.559710  499647 main.go:141] libmachine: (functional-519899) Calling .Close
I0127 14:15:47.560037  499647 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:15:47.560072  499647 main.go:141] libmachine: (functional-519899) DBG | Closing plugin on server side
I0127 14:15:47.560077  499647 main.go:141] libmachine: Making call to close connection to plugin binary
I0127 14:15:47.560095  499647 main.go:141] libmachine: Making call to close driver server
I0127 14:15:47.560104  499647 main.go:141] libmachine: (functional-519899) Calling .Close
I0127 14:15:47.560379  499647 main.go:141] libmachine: (functional-519899) DBG | Closing plugin on server side
I0127 14:15:47.560411  499647 main.go:141] libmachine: Successfully made call to close driver server
I0127 14:15:47.560434  499647 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.27s)
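Editor's note: the build above runs buildctl inside the guest via "minikube image build" and then confirms the tag with "image ls". A minimal Go sketch reproducing that build-then-list sequence outside the test harness follows; the profile, tag, and context directory are taken from the run above, while the file name rebuildcheck.go and the run helper are illustrative assumptions.

// rebuildcheck.go - minimal sketch: repeat the build-and-verify flow logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// run executes a command and returns its combined output for error reporting.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	const profile = "functional-519899" // profile from the run above; adjust as needed
	const tag = "localhost/my-image:" + profile

	// Same command as functional_test.go:315 above.
	if out, err := run("out/minikube-linux-amd64", "-p", profile,
		"image", "build", "-t", tag, "testdata/build"); err != nil {
		fmt.Fprintln(os.Stderr, out)
		os.Exit(1)
	}

	// Same verification idea as functional_test.go:451 above: the tag must appear in "image ls".
	out, err := run("out/minikube-linux-amd64", "-p", profile, "image", "ls")
	if err != nil || !strings.Contains(out, tag) {
		fmt.Fprintf(os.Stderr, "image %s not listed after build (err=%v)\n", tag, err)
		os.Exit(1)
	}
	fmt.Println("built and listed:", tag)
}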

TestFunctional/parallel/ImageCommands/Setup (6.92s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (6.897119813s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-519899
--- PASS: TestFunctional/parallel/ImageCommands/Setup (6.92s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image load --daemon kicbase/echo-server:functional-519899 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-519899 image load --daemon kicbase/echo-server:functional-519899 --alsologtostderr: (1.040574071s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

TestFunctional/parallel/ServiceCmd/List (0.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 service list -o json
functional_test.go:1494: Took "276.639229ms" to run "out/minikube-linux-amd64 -p functional-519899 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.137:31025
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image load --daemon kicbase/echo-server:functional-519899 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-519899 image load --daemon kicbase/echo-server:functional-519899 --alsologtostderr: (1.111084453s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

TestFunctional/parallel/ServiceCmd/Format (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.137:31025
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "377.645747ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "57.639259ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/MountCmd/any-port (13.69s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-519899 /tmp/TestFunctionalparallelMountCmdany-port1769431418/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1737987331440322228" to /tmp/TestFunctionalparallelMountCmdany-port1769431418/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1737987331440322228" to /tmp/TestFunctionalparallelMountCmdany-port1769431418/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1737987331440322228" to /tmp/TestFunctionalparallelMountCmdany-port1769431418/001/test-1737987331440322228
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-519899 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (262.985121ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0127 14:15:31.703705  491036 retry.go:31] will retry after 647.377713ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 27 14:15 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 27 14:15 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 27 14:15 test-1737987331440322228
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh cat /mount-9p/test-1737987331440322228
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-519899 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [eeb33ffb-4745-4aff-8600-d7989436ca01] Pending
helpers_test.go:344: "busybox-mount" [eeb33ffb-4745-4aff-8600-d7989436ca01] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [eeb33ffb-4745-4aff-8600-d7989436ca01] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [eeb33ffb-4745-4aff-8600-d7989436ca01] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.003655965s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-519899 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-519899 /tmp/TestFunctionalparallelMountCmdany-port1769431418/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.69s)
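Editor's note: the mount test above polls "findmnt -T /mount-9p" over minikube ssh until the 9p mount is visible (note the retry after ~650ms logged at retry.go:31). A minimal Go sketch of that poll-until-mounted check follows; the profile and mount point come from the run above, while the file name waitmount.go, the 30s deadline, and the sleep interval are illustrative assumptions.

// waitmount.go - minimal sketch: retry "findmnt -T /mount-9p | grep 9p" over
// minikube ssh until the 9p mount appears inside the guest.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const profile = "functional-519899"
	deadline := time.Now().Add(30 * time.Second) // assumed overall timeout

	for {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Print(string(out)) // mount is visible inside the guest
			return
		} else if time.Now().After(deadline) {
			fmt.Fprintf(os.Stderr, "mount never appeared: %v\n%s", err, out)
			os.Exit(1)
		}
		time.Sleep(650 * time.Millisecond) // comparable to the retry interval logged above
	}
}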

TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "288.158259ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "53.450546ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (3.240933422s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-519899
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image load --daemon kicbase/echo-server:functional-519899 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.50s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image save kicbase/echo-server:functional-519899 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image rm kicbase/echo-server:functional-519899 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-519899
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 image save --daemon kicbase/echo-server:functional-519899 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-519899
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

TestFunctional/parallel/MountCmd/specific-port (1.93s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-519899 /tmp/TestFunctionalparallelMountCmdspecific-port2985820407/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-519899 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (200.166514ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0127 14:15:45.330020  491036 retry.go:31] will retry after 670.621812ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-519899 /tmp/TestFunctionalparallelMountCmdspecific-port2985820407/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-519899 ssh "sudo umount -f /mount-9p": exit status 1 (212.309306ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-519899 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-519899 /tmp/TestFunctionalparallelMountCmdspecific-port2985820407/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.93s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.82s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-519899 /tmp/TestFunctionalparallelMountCmdVerifyCleanup225071385/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-519899 /tmp/TestFunctionalparallelMountCmdVerifyCleanup225071385/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-519899 /tmp/TestFunctionalparallelMountCmdVerifyCleanup225071385/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-519899 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-519899 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-519899 /tmp/TestFunctionalparallelMountCmdVerifyCleanup225071385/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-519899 /tmp/TestFunctionalparallelMountCmdVerifyCleanup225071385/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-519899 /tmp/TestFunctionalparallelMountCmdVerifyCleanup225071385/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.82s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-519899
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-519899
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-519899
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (215.37s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-771782 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 14:18:14.046712  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:18:41.756326  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-771782 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m34.67624846s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (215.37s)

TestMultiControlPlane/serial/DeployApp (5.38s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-771782 -- rollout status deployment/busybox: (3.163267047s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- exec busybox-58667487b6-7hcqh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- exec busybox-58667487b6-dd2rf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- exec busybox-58667487b6-zpmbh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- exec busybox-58667487b6-7hcqh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- exec busybox-58667487b6-dd2rf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- exec busybox-58667487b6-zpmbh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- exec busybox-58667487b6-7hcqh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- exec busybox-58667487b6-dd2rf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- exec busybox-58667487b6-zpmbh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.38s)
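Editor's note: the DeployApp checks above exec nslookup inside each busybox pod against three targets (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local). A minimal Go sketch that reruns those lookups follows; the context name, pod-name prefix, and lookup targets come from the run above, while the file name dnscheck.go and the prefix filter (in place of ha_test.go's jsonpath plumbing) are illustrative assumptions.

// dnscheck.go - minimal sketch: rerun the in-pod DNS lookups logged above for
// every busybox pod in the default namespace.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const ctx = "ha-771782" // kube context / profile from the run above

	// List pod names, as ha_test.go:163 does via jsonpath.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		if !strings.HasPrefix(pod, "busybox-") { // keep only the test deployment's pods
			continue
		}
		for _, host := range targets {
			if err := exec.Command("kubectl", "--context", ctx, "exec", pod,
				"--", "nslookup", host).Run(); err != nil {
				fmt.Fprintf(os.Stderr, "%s: nslookup %s failed: %v\n", pod, host, err)
				os.Exit(1)
			}
			fmt.Printf("%s resolved %s\n", pod, host)
		}
	}
}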

TestMultiControlPlane/serial/PingHostFromPods (1.23s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- exec busybox-58667487b6-7hcqh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- exec busybox-58667487b6-7hcqh -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- exec busybox-58667487b6-dd2rf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- exec busybox-58667487b6-dd2rf -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- exec busybox-58667487b6-zpmbh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-771782 -- exec busybox-58667487b6-zpmbh -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)

TestMultiControlPlane/serial/AddWorkerNode (57.91s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-771782 -v=7 --alsologtostderr
E0127 14:20:19.904031  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:19.910460  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:19.921864  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:19.943348  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:19.984809  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:20.066396  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:20.227981  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:20.549740  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:21.191559  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:22.473136  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:25.034994  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:30.157138  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:20:40.399603  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-771782 -v=7 --alsologtostderr: (57.027455616s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.91s)
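Note: the cert_rotation errors interleaved above appear to be leftover client-go certificate watchers still pointing at the earlier functional-519899 profile, whose certificate files no longer exist; they read as log noise rather than a failure of this test. The step itself is just the node-add flow, roughly (profile name is a placeholder):

    # Add a worker node to an existing HA profile, then confirm every node reports Running.
    minikube node add -p <profile> -v=7 --alsologtostderr
    minikube -p <profile> status -v=7 --alsologtostderr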

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-771782 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0127 14:21:00.881639  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.68s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp testdata/cp-test.txt ha-771782:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp ha-771782:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1045189208/001/cp-test_ha-771782.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp ha-771782:/home/docker/cp-test.txt ha-771782-m02:/home/docker/cp-test_ha-771782_ha-771782-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m02 "sudo cat /home/docker/cp-test_ha-771782_ha-771782-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp ha-771782:/home/docker/cp-test.txt ha-771782-m03:/home/docker/cp-test_ha-771782_ha-771782-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m03 "sudo cat /home/docker/cp-test_ha-771782_ha-771782-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp ha-771782:/home/docker/cp-test.txt ha-771782-m04:/home/docker/cp-test_ha-771782_ha-771782-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m04 "sudo cat /home/docker/cp-test_ha-771782_ha-771782-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp testdata/cp-test.txt ha-771782-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp ha-771782-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1045189208/001/cp-test_ha-771782-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp ha-771782-m02:/home/docker/cp-test.txt ha-771782:/home/docker/cp-test_ha-771782-m02_ha-771782.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782 "sudo cat /home/docker/cp-test_ha-771782-m02_ha-771782.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp ha-771782-m02:/home/docker/cp-test.txt ha-771782-m03:/home/docker/cp-test_ha-771782-m02_ha-771782-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m03 "sudo cat /home/docker/cp-test_ha-771782-m02_ha-771782-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp ha-771782-m02:/home/docker/cp-test.txt ha-771782-m04:/home/docker/cp-test_ha-771782-m02_ha-771782-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m04 "sudo cat /home/docker/cp-test_ha-771782-m02_ha-771782-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp testdata/cp-test.txt ha-771782-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp ha-771782-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1045189208/001/cp-test_ha-771782-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp ha-771782-m03:/home/docker/cp-test.txt ha-771782:/home/docker/cp-test_ha-771782-m03_ha-771782.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782 "sudo cat /home/docker/cp-test_ha-771782-m03_ha-771782.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp ha-771782-m03:/home/docker/cp-test.txt ha-771782-m02:/home/docker/cp-test_ha-771782-m03_ha-771782-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m02 "sudo cat /home/docker/cp-test_ha-771782-m03_ha-771782-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp ha-771782-m03:/home/docker/cp-test.txt ha-771782-m04:/home/docker/cp-test_ha-771782-m03_ha-771782-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m04 "sudo cat /home/docker/cp-test_ha-771782-m03_ha-771782-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp testdata/cp-test.txt ha-771782-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp ha-771782-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1045189208/001/cp-test_ha-771782-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp ha-771782-m04:/home/docker/cp-test.txt ha-771782:/home/docker/cp-test_ha-771782-m04_ha-771782.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782 "sudo cat /home/docker/cp-test_ha-771782-m04_ha-771782.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp ha-771782-m04:/home/docker/cp-test.txt ha-771782-m02:/home/docker/cp-test_ha-771782-m04_ha-771782-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m02 "sudo cat /home/docker/cp-test_ha-771782-m04_ha-771782-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 cp ha-771782-m04:/home/docker/cp-test.txt ha-771782-m03:/home/docker/cp-test_ha-771782-m04_ha-771782-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 ssh -n ha-771782-m03 "sudo cat /home/docker/cp-test_ha-771782-m04_ha-771782-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.68s)
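Note: every hop in the copy matrix above is the same two-step pattern, sketched here with placeholder names: copy a file with minikube cp, then read it back over SSH on the destination node to verify it arrived intact.

    # Copy a local test file onto a node, then cat it back via SSH to confirm the contents.
    minikube -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
    minikube -p <profile> ssh -n <node> "sudo cat /home/docker/cp-test.txt"
    # Node-to-node copies use the same cp command with <src-node>:<path> as the source argument.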

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.69s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 node stop m02 -v=7 --alsologtostderr
E0127 14:21:41.843981  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-771782 node stop m02 -v=7 --alsologtostderr: (1m31.004169787s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-771782 status -v=7 --alsologtostderr: exit status 7 (685.532795ms)
-- stdout --
	ha-771782
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-771782-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-771782-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-771782-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0127 14:22:45.966551  505259 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:22:45.966749  505259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:22:45.966758  505259 out.go:358] Setting ErrFile to fd 2...
	I0127 14:22:45.966762  505259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:22:45.966955  505259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
	I0127 14:22:45.967165  505259 out.go:352] Setting JSON to false
	I0127 14:22:45.967199  505259 mustload.go:65] Loading cluster: ha-771782
	I0127 14:22:45.967264  505259 notify.go:220] Checking for updates...
	I0127 14:22:45.967629  505259 config.go:182] Loaded profile config "ha-771782": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:22:45.967652  505259 status.go:174] checking status of ha-771782 ...
	I0127 14:22:45.968031  505259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:22:45.968075  505259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:45.989902  505259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39247
	I0127 14:22:45.990393  505259 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:45.990992  505259 main.go:141] libmachine: Using API Version  1
	I0127 14:22:45.991015  505259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:45.991401  505259 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:45.991603  505259 main.go:141] libmachine: (ha-771782) Calling .GetState
	I0127 14:22:45.993234  505259 status.go:371] ha-771782 host status = "Running" (err=<nil>)
	I0127 14:22:45.993254  505259 host.go:66] Checking if "ha-771782" exists ...
	I0127 14:22:45.993549  505259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:22:45.993590  505259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:46.009376  505259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34783
	I0127 14:22:46.009986  505259 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:46.010567  505259 main.go:141] libmachine: Using API Version  1
	I0127 14:22:46.010598  505259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:46.010969  505259 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:46.011194  505259 main.go:141] libmachine: (ha-771782) Calling .GetIP
	I0127 14:22:46.014550  505259 main.go:141] libmachine: (ha-771782) DBG | domain ha-771782 has defined MAC address 52:54:00:14:f7:75 in network mk-ha-771782
	I0127 14:22:46.015043  505259 main.go:141] libmachine: (ha-771782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f7:75", ip: ""} in network mk-ha-771782: {Iface:virbr1 ExpiryTime:2025-01-27 15:16:34 +0000 UTC Type:0 Mac:52:54:00:14:f7:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-771782 Clientid:01:52:54:00:14:f7:75}
	I0127 14:22:46.015073  505259 main.go:141] libmachine: (ha-771782) DBG | domain ha-771782 has defined IP address 192.168.39.110 and MAC address 52:54:00:14:f7:75 in network mk-ha-771782
	I0127 14:22:46.015199  505259 host.go:66] Checking if "ha-771782" exists ...
	I0127 14:22:46.015611  505259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:22:46.015663  505259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:46.031325  505259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44373
	I0127 14:22:46.031797  505259 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:46.032317  505259 main.go:141] libmachine: Using API Version  1
	I0127 14:22:46.032344  505259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:46.032684  505259 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:46.032864  505259 main.go:141] libmachine: (ha-771782) Calling .DriverName
	I0127 14:22:46.033172  505259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 14:22:46.033205  505259 main.go:141] libmachine: (ha-771782) Calling .GetSSHHostname
	I0127 14:22:46.036466  505259 main.go:141] libmachine: (ha-771782) DBG | domain ha-771782 has defined MAC address 52:54:00:14:f7:75 in network mk-ha-771782
	I0127 14:22:46.036970  505259 main.go:141] libmachine: (ha-771782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:f7:75", ip: ""} in network mk-ha-771782: {Iface:virbr1 ExpiryTime:2025-01-27 15:16:34 +0000 UTC Type:0 Mac:52:54:00:14:f7:75 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-771782 Clientid:01:52:54:00:14:f7:75}
	I0127 14:22:46.037012  505259 main.go:141] libmachine: (ha-771782) DBG | domain ha-771782 has defined IP address 192.168.39.110 and MAC address 52:54:00:14:f7:75 in network mk-ha-771782
	I0127 14:22:46.037187  505259 main.go:141] libmachine: (ha-771782) Calling .GetSSHPort
	I0127 14:22:46.037411  505259 main.go:141] libmachine: (ha-771782) Calling .GetSSHKeyPath
	I0127 14:22:46.037683  505259 main.go:141] libmachine: (ha-771782) Calling .GetSSHUsername
	I0127 14:22:46.037864  505259 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-483699/.minikube/machines/ha-771782/id_rsa Username:docker}
	I0127 14:22:46.126388  505259 ssh_runner.go:195] Run: systemctl --version
	I0127 14:22:46.133257  505259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:22:46.152261  505259 kubeconfig.go:125] found "ha-771782" server: "https://192.168.39.254:8443"
	I0127 14:22:46.152312  505259 api_server.go:166] Checking apiserver status ...
	I0127 14:22:46.152370  505259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:22:46.169182  505259 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup
	W0127 14:22:46.179953  505259 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 14:22:46.180043  505259 ssh_runner.go:195] Run: ls
	I0127 14:22:46.184928  505259 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 14:22:46.189883  505259 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 14:22:46.189916  505259 status.go:463] ha-771782 apiserver status = Running (err=<nil>)
	I0127 14:22:46.189927  505259 status.go:176] ha-771782 status: &{Name:ha-771782 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:22:46.189949  505259 status.go:174] checking status of ha-771782-m02 ...
	I0127 14:22:46.190328  505259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:22:46.190371  505259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:46.206010  505259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41013
	I0127 14:22:46.206556  505259 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:46.207200  505259 main.go:141] libmachine: Using API Version  1
	I0127 14:22:46.207223  505259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:46.207586  505259 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:46.207834  505259 main.go:141] libmachine: (ha-771782-m02) Calling .GetState
	I0127 14:22:46.209693  505259 status.go:371] ha-771782-m02 host status = "Stopped" (err=<nil>)
	I0127 14:22:46.209712  505259 status.go:384] host is not running, skipping remaining checks
	I0127 14:22:46.209721  505259 status.go:176] ha-771782-m02 status: &{Name:ha-771782-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:22:46.209748  505259 status.go:174] checking status of ha-771782-m03 ...
	I0127 14:22:46.210167  505259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:22:46.210242  505259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:46.226739  505259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36067
	I0127 14:22:46.227353  505259 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:46.227955  505259 main.go:141] libmachine: Using API Version  1
	I0127 14:22:46.227978  505259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:46.228331  505259 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:46.228554  505259 main.go:141] libmachine: (ha-771782-m03) Calling .GetState
	I0127 14:22:46.230393  505259 status.go:371] ha-771782-m03 host status = "Running" (err=<nil>)
	I0127 14:22:46.230410  505259 host.go:66] Checking if "ha-771782-m03" exists ...
	I0127 14:22:46.230740  505259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:22:46.230788  505259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:46.247834  505259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I0127 14:22:46.248367  505259 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:46.248900  505259 main.go:141] libmachine: Using API Version  1
	I0127 14:22:46.248933  505259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:46.249320  505259 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:46.249514  505259 main.go:141] libmachine: (ha-771782-m03) Calling .GetIP
	I0127 14:22:46.252788  505259 main.go:141] libmachine: (ha-771782-m03) DBG | domain ha-771782-m03 has defined MAC address 52:54:00:84:43:8f in network mk-ha-771782
	I0127 14:22:46.253258  505259 main.go:141] libmachine: (ha-771782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:43:8f", ip: ""} in network mk-ha-771782: {Iface:virbr1 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:84:43:8f Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-771782-m03 Clientid:01:52:54:00:84:43:8f}
	I0127 14:22:46.253292  505259 main.go:141] libmachine: (ha-771782-m03) DBG | domain ha-771782-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:84:43:8f in network mk-ha-771782
	I0127 14:22:46.253431  505259 host.go:66] Checking if "ha-771782-m03" exists ...
	I0127 14:22:46.253876  505259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:22:46.253927  505259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:46.271906  505259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I0127 14:22:46.272613  505259 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:46.273207  505259 main.go:141] libmachine: Using API Version  1
	I0127 14:22:46.273241  505259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:46.273592  505259 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:46.273827  505259 main.go:141] libmachine: (ha-771782-m03) Calling .DriverName
	I0127 14:22:46.274017  505259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 14:22:46.274059  505259 main.go:141] libmachine: (ha-771782-m03) Calling .GetSSHHostname
	I0127 14:22:46.277344  505259 main.go:141] libmachine: (ha-771782-m03) DBG | domain ha-771782-m03 has defined MAC address 52:54:00:84:43:8f in network mk-ha-771782
	I0127 14:22:46.277880  505259 main.go:141] libmachine: (ha-771782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:43:8f", ip: ""} in network mk-ha-771782: {Iface:virbr1 ExpiryTime:2025-01-27 15:18:53 +0000 UTC Type:0 Mac:52:54:00:84:43:8f Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-771782-m03 Clientid:01:52:54:00:84:43:8f}
	I0127 14:22:46.277911  505259 main.go:141] libmachine: (ha-771782-m03) DBG | domain ha-771782-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:84:43:8f in network mk-ha-771782
	I0127 14:22:46.278056  505259 main.go:141] libmachine: (ha-771782-m03) Calling .GetSSHPort
	I0127 14:22:46.278243  505259 main.go:141] libmachine: (ha-771782-m03) Calling .GetSSHKeyPath
	I0127 14:22:46.278403  505259 main.go:141] libmachine: (ha-771782-m03) Calling .GetSSHUsername
	I0127 14:22:46.278572  505259 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-483699/.minikube/machines/ha-771782-m03/id_rsa Username:docker}
	I0127 14:22:46.370412  505259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:22:46.388044  505259 kubeconfig.go:125] found "ha-771782" server: "https://192.168.39.254:8443"
	I0127 14:22:46.388076  505259 api_server.go:166] Checking apiserver status ...
	I0127 14:22:46.388109  505259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:22:46.405120  505259 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup
	W0127 14:22:46.415890  505259 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 14:22:46.415953  505259 ssh_runner.go:195] Run: ls
	I0127 14:22:46.421166  505259 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0127 14:22:46.426224  505259 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0127 14:22:46.426258  505259 status.go:463] ha-771782-m03 apiserver status = Running (err=<nil>)
	I0127 14:22:46.426270  505259 status.go:176] ha-771782-m03 status: &{Name:ha-771782-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:22:46.426294  505259 status.go:174] checking status of ha-771782-m04 ...
	I0127 14:22:46.426740  505259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:22:46.426799  505259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:46.443033  505259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39147
	I0127 14:22:46.443531  505259 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:46.444108  505259 main.go:141] libmachine: Using API Version  1
	I0127 14:22:46.444136  505259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:46.444574  505259 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:46.444791  505259 main.go:141] libmachine: (ha-771782-m04) Calling .GetState
	I0127 14:22:46.446662  505259 status.go:371] ha-771782-m04 host status = "Running" (err=<nil>)
	I0127 14:22:46.446683  505259 host.go:66] Checking if "ha-771782-m04" exists ...
	I0127 14:22:46.447091  505259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:22:46.447147  505259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:46.463507  505259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35021
	I0127 14:22:46.464018  505259 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:46.464577  505259 main.go:141] libmachine: Using API Version  1
	I0127 14:22:46.464600  505259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:46.465034  505259 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:46.465249  505259 main.go:141] libmachine: (ha-771782-m04) Calling .GetIP
	I0127 14:22:46.468572  505259 main.go:141] libmachine: (ha-771782-m04) DBG | domain ha-771782-m04 has defined MAC address 52:54:00:91:33:db in network mk-ha-771782
	I0127 14:22:46.468970  505259 main.go:141] libmachine: (ha-771782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:33:db", ip: ""} in network mk-ha-771782: {Iface:virbr1 ExpiryTime:2025-01-27 15:20:18 +0000 UTC Type:0 Mac:52:54:00:91:33:db Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-771782-m04 Clientid:01:52:54:00:91:33:db}
	I0127 14:22:46.468990  505259 main.go:141] libmachine: (ha-771782-m04) DBG | domain ha-771782-m04 has defined IP address 192.168.39.37 and MAC address 52:54:00:91:33:db in network mk-ha-771782
	I0127 14:22:46.469190  505259 host.go:66] Checking if "ha-771782-m04" exists ...
	I0127 14:22:46.469523  505259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:22:46.469570  505259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:22:46.486066  505259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I0127 14:22:46.486739  505259 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:22:46.487417  505259 main.go:141] libmachine: Using API Version  1
	I0127 14:22:46.487459  505259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:22:46.487900  505259 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:22:46.488139  505259 main.go:141] libmachine: (ha-771782-m04) Calling .DriverName
	I0127 14:22:46.488401  505259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 14:22:46.488436  505259 main.go:141] libmachine: (ha-771782-m04) Calling .GetSSHHostname
	I0127 14:22:46.491162  505259 main.go:141] libmachine: (ha-771782-m04) DBG | domain ha-771782-m04 has defined MAC address 52:54:00:91:33:db in network mk-ha-771782
	I0127 14:22:46.491529  505259 main.go:141] libmachine: (ha-771782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:33:db", ip: ""} in network mk-ha-771782: {Iface:virbr1 ExpiryTime:2025-01-27 15:20:18 +0000 UTC Type:0 Mac:52:54:00:91:33:db Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-771782-m04 Clientid:01:52:54:00:91:33:db}
	I0127 14:22:46.491569  505259 main.go:141] libmachine: (ha-771782-m04) DBG | domain ha-771782-m04 has defined IP address 192.168.39.37 and MAC address 52:54:00:91:33:db in network mk-ha-771782
	I0127 14:22:46.491693  505259 main.go:141] libmachine: (ha-771782-m04) Calling .GetSSHPort
	I0127 14:22:46.491882  505259 main.go:141] libmachine: (ha-771782-m04) Calling .GetSSHKeyPath
	I0127 14:22:46.492014  505259 main.go:141] libmachine: (ha-771782-m04) Calling .GetSSHUsername
	I0127 14:22:46.492142  505259 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-483699/.minikube/machines/ha-771782-m04/id_rsa Username:docker}
	I0127 14:22:46.582411  505259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:22:46.598728  505259 status.go:176] ha-771782-m04 status: &{Name:ha-771782-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.69s)
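Note: as the captured output shows, status exits non-zero (exit status 7 here) once any node is stopped, so the degraded state can be detected from the exit code alone. A rough sketch with a placeholder profile:

    # Stop one control-plane node, then use the status exit code to detect the degraded cluster.
    minikube -p <profile> node stop m02 -v=7 --alsologtostderr
    if ! minikube -p <profile> status -v=7 --alsologtostderr; then
      echo "cluster degraded: at least one node is not running"
    fi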

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (40.29s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 node start m02 -v=7 --alsologtostderr
E0127 14:23:03.766368  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:23:14.046540  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-771782 node start m02 -v=7 --alsologtostderr: (39.33687053s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (40.29s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (432.97s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-771782 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-771782 -v=7 --alsologtostderr
E0127 14:25:19.903182  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:25:47.607761  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-771782 -v=7 --alsologtostderr: (4m33.988144652s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-771782 --wait=true -v=7 --alsologtostderr
E0127 14:28:14.046471  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:29:37.120235  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:30:19.903483  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-771782 --wait=true -v=7 --alsologtostderr: (2m38.873379401s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-771782
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (432.97s)
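Note: stripped of the verbosity flags, the round trip exercised above is a full stop followed by a --wait=true start, with the node list compared before and after (profile name is a placeholder):

    # Record the node list, stop the whole cluster, restart it, and compare the node lists.
    minikube node list -p <profile>
    minikube stop -p <profile>
    minikube start -p <profile> --wait=true
    minikube node list -p <profile>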

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (6.93s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-771782 node delete m03 -v=7 --alsologtostderr: (6.183843101s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.93s)
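Note: the readiness check above prints one Ready condition per node via a go-template; the same check can be run by hand after deleting a node (profile is a placeholder, the template is the one from the log):

    # Delete the m03 control-plane node, then confirm every remaining node reports Ready.
    minikube -p <profile> node delete m03 -v=7 --alsologtostderr
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'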

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.98s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 stop -v=7 --alsologtostderr
E0127 14:33:14.046990  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:35:19.904004  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-771782 stop -v=7 --alsologtostderr: (4m32.855119428s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-771782 status -v=7 --alsologtostderr: exit status 7 (123.002838ms)
-- stdout --
	ha-771782
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-771782-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-771782-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0127 14:35:21.951661  509127 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:35:21.951983  509127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:35:21.951995  509127 out.go:358] Setting ErrFile to fd 2...
	I0127 14:35:21.952000  509127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:35:21.952189  509127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
	I0127 14:35:21.952392  509127 out.go:352] Setting JSON to false
	I0127 14:35:21.952431  509127 mustload.go:65] Loading cluster: ha-771782
	I0127 14:35:21.952556  509127 notify.go:220] Checking for updates...
	I0127 14:35:21.952839  509127 config.go:182] Loaded profile config "ha-771782": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:35:21.952865  509127 status.go:174] checking status of ha-771782 ...
	I0127 14:35:21.954110  509127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:35:21.954205  509127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:35:21.977704  509127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46615
	I0127 14:35:21.978307  509127 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:35:21.978906  509127 main.go:141] libmachine: Using API Version  1
	I0127 14:35:21.978928  509127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:35:21.979378  509127 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:35:21.979651  509127 main.go:141] libmachine: (ha-771782) Calling .GetState
	I0127 14:35:21.981577  509127 status.go:371] ha-771782 host status = "Stopped" (err=<nil>)
	I0127 14:35:21.981593  509127 status.go:384] host is not running, skipping remaining checks
	I0127 14:35:21.981600  509127 status.go:176] ha-771782 status: &{Name:ha-771782 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:35:21.981653  509127 status.go:174] checking status of ha-771782-m02 ...
	I0127 14:35:21.981992  509127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:35:21.982040  509127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:35:21.997947  509127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45933
	I0127 14:35:21.998389  509127 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:35:21.998852  509127 main.go:141] libmachine: Using API Version  1
	I0127 14:35:21.998875  509127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:35:21.999212  509127 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:35:21.999391  509127 main.go:141] libmachine: (ha-771782-m02) Calling .GetState
	I0127 14:35:22.001343  509127 status.go:371] ha-771782-m02 host status = "Stopped" (err=<nil>)
	I0127 14:35:22.001359  509127 status.go:384] host is not running, skipping remaining checks
	I0127 14:35:22.001366  509127 status.go:176] ha-771782-m02 status: &{Name:ha-771782-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:35:22.001384  509127 status.go:174] checking status of ha-771782-m04 ...
	I0127 14:35:22.001737  509127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:35:22.001781  509127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:35:22.017530  509127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33203
	I0127 14:35:22.018023  509127 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:35:22.018574  509127 main.go:141] libmachine: Using API Version  1
	I0127 14:35:22.018598  509127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:35:22.018894  509127 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:35:22.019068  509127 main.go:141] libmachine: (ha-771782-m04) Calling .GetState
	I0127 14:35:22.020600  509127 status.go:371] ha-771782-m04 host status = "Stopped" (err=<nil>)
	I0127 14:35:22.020620  509127 status.go:384] host is not running, skipping remaining checks
	I0127 14:35:22.020627  509127 status.go:176] ha-771782-m04 status: &{Name:ha-771782-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.98s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (107.45s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-771782 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 14:36:42.969438  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-771782 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m46.679314356s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (107.45s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (91.11s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-771782 --control-plane -v=7 --alsologtostderr
E0127 14:38:14.046187  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-771782 --control-plane -v=7 --alsologtostderr: (1m30.244343925s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-771782 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (91.11s)
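Note: growing the control plane back uses the same node add entry point with --control-plane, roughly (profile is a placeholder):

    # Add another control-plane node to the HA profile, then re-check cluster status.
    minikube node add -p <profile> --control-plane -v=7 --alsologtostderr
    minikube -p <profile> status -v=7 --alsologtostderr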

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                    
TestJSONOutput/start/Command (58.41s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-223252 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-223252 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (58.412101381s)
--- PASS: TestJSONOutput/start/Command (58.41s)
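Note: with --output=json each progress step is emitted as its own JSON event on stdout (the event shape is visible verbatim in the TestErrorJSONOutput capture further down), so progress can be followed with an ordinary JSON reader such as jq. A sketch, with a placeholder profile and jq purely as an illustration:

    # Print "current/total message" for each numbered setup step in the JSON event stream.
    minikube start -p <profile> --output=json --wait=true \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep)/\(.data.totalsteps) \(.data.message)"'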

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-223252 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-223252 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.46s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-223252 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-223252 --output=json --user=testUser: (6.459097553s)
--- PASS: TestJSONOutput/stop/Command (6.46s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-013871 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-013871 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (70.953392ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"25e09b6b-1cf6-4118-9a8b-8a146d809d52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-013871] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"74063745-2ba1-4e03-a5bf-f57c9069c468","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20321"}}
	{"specversion":"1.0","id":"c55bf0a7-448f-40b6-9aa2-aeb13d177de6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4b0f4989-7dbf-4587-8ffc-937046f84af7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20321-483699/kubeconfig"}}
	{"specversion":"1.0","id":"984e5ae4-ec40-4699-a30d-8735913cab79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-483699/.minikube"}}
	{"specversion":"1.0","id":"39188217-d87f-4770-8517-234dbfd99b43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"372e6f26-e8b2-437c-b83b-1dea6bd077ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2e95d590-1e76-42b8-ba30-f63b2319a902","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-013871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-013871
--- PASS: TestErrorJSONOutput (0.22s)
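Each line in the stdout block above is a CloudEvents-style JSON object: the event kind is in "type" (io.k8s.sigs.minikube.step / .info / .error) and the payload strings are in "data"; the last line carries the DRV_UNSUPPORTED_OS error and exit code 56. The following is a minimal Go sketch, not minikube's or the test's own parser, that scans such a line-delimited stream and surfaces the error event; only the fields visible in the log above are assumed.

	// cloudevent_scan.go - surface error events from `minikube ... --output=json` output.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event keeps only the fields visible in the log above; everything else is ignored.
	type event struct {
		Type string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
		Data map[string]string `json:"data"` // message, name, exitcode, currentstep, ...
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json ...` into this
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // not a JSON event line
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit code %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}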

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (96.89s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-925376 --driver=kvm2  --container-runtime=containerd
E0127 14:40:19.908619  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-925376 --driver=kvm2  --container-runtime=containerd: (50.109084063s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-937961 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-937961 --driver=kvm2  --container-runtime=containerd: (43.856384204s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-925376
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-937961
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-937961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-937961
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-937961: (1.00274878s)
helpers_test.go:175: Cleaning up "first-925376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-925376
--- PASS: TestMinikubeProfile (96.89s)
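The profile checks above go through `profile list -ojson`. Below is a small sketch of consuming that output without committing to its exact schema; it assumes a minikube binary on PATH and only verifies the output is well-formed JSON before pretty-printing it.

	// profile_list_sketch.go - run `minikube profile list -ojson` and pretty-print the result.
	package main

	import (
		"bytes"
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
		if err != nil {
			log.Fatalf("profile list failed: %v", err)
		}
		var pretty bytes.Buffer
		if err := json.Indent(&pretty, out, "", "  "); err != nil {
			log.Fatalf("output was not valid JSON: %v", err)
		}
		fmt.Println(pretty.String())
	}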

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.09s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-008445 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-008445 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.094619094s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.09s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-008445 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-008445 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
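The mount verification is two ssh invocations: list the mounted directory and confirm a 9p filesystem appears in the guest's mount table. A sketch of the same check in Go follows; it is a hypothetical helper, not the test's code, and assumes the minikube binary path and profile name taken from the run above.

	// verify_9p_sketch.go - check that the host mount shows up in the guest as a 9p mount.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		const profile = "mount-start-1-008445" // profile name from the run above
		// Equivalent of the test's `ssh -- mount | grep 9p`, with the filtering done in Go.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "--", "mount").Output()
		if err != nil {
			log.Fatalf("ssh mount failed: %v", err)
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, "9p") {
				fmt.Println("found 9p mount:", line)
				return
			}
		}
		log.Fatal("no 9p mount found")
	}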

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.76s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-026142 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-026142 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.762103023s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.76s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-026142 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-026142 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-008445 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-026142 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-026142 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-026142
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-026142: (1.288384436s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.14s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-026142
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-026142: (22.143712042s)
--- PASS: TestMountStart/serial/RestartStopped (23.14s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-026142 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-026142 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (116.4s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-836441 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 14:43:14.047151  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-836441 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m55.981941742s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.40s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.15s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-836441 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-836441 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-836441 -- rollout status deployment/busybox: (2.588092059s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-836441 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-836441 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-836441 -- exec busybox-58667487b6-8hkbc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-836441 -- exec busybox-58667487b6-fwr56 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-836441 -- exec busybox-58667487b6-8hkbc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-836441 -- exec busybox-58667487b6-fwr56 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-836441 -- exec busybox-58667487b6-8hkbc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-836441 -- exec busybox-58667487b6-fwr56 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.15s)
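The deployment check above boils down to: wait for the busybox rollout, read the pod names with a jsonpath template, then run nslookup inside each pod so DNS is exercised from both nodes. A condensed sketch of that loop follows; it is hypothetical, shells out through the same `minikube kubectl -p ... --` wrapper seen in the log, and reuses the profile name from this run.

	// dns_from_pods_sketch.go - resolve kubernetes.default from every pod of the deployment.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func kubectl(args ...string) string {
		full := append([]string{"kubectl", "-p", "multinode-836441", "--"}, args...)
		out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
		return string(out)
	}

	func main() {
		// Same jsonpath the test uses to list the deployment's pod names.
		names := strings.Fields(kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}"))
		for _, pod := range names {
			res := kubectl("exec", pod, "--", "nslookup", "kubernetes.default")
			fmt.Printf("%s resolved kubernetes.default:\n%s\n", pod, res)
		}
	}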

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.82s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-836441 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-836441 -- exec busybox-58667487b6-8hkbc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-836441 -- exec busybox-58667487b6-8hkbc -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-836441 -- exec busybox-58667487b6-fwr56 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-836441 -- exec busybox-58667487b6-fwr56 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                    
TestMultiNode/serial/AddNode (53.26s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-836441 -v 3 --alsologtostderr
E0127 14:45:19.903072  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-836441 -v 3 --alsologtostderr: (52.685049553s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.26s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-836441 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.58s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.45s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 cp testdata/cp-test.txt multinode-836441:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 cp multinode-836441:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3131732686/001/cp-test_multinode-836441.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 cp multinode-836441:/home/docker/cp-test.txt multinode-836441-m02:/home/docker/cp-test_multinode-836441_multinode-836441-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441-m02 "sudo cat /home/docker/cp-test_multinode-836441_multinode-836441-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 cp multinode-836441:/home/docker/cp-test.txt multinode-836441-m03:/home/docker/cp-test_multinode-836441_multinode-836441-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441-m03 "sudo cat /home/docker/cp-test_multinode-836441_multinode-836441-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 cp testdata/cp-test.txt multinode-836441-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 cp multinode-836441-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3131732686/001/cp-test_multinode-836441-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 cp multinode-836441-m02:/home/docker/cp-test.txt multinode-836441:/home/docker/cp-test_multinode-836441-m02_multinode-836441.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441 "sudo cat /home/docker/cp-test_multinode-836441-m02_multinode-836441.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 cp multinode-836441-m02:/home/docker/cp-test.txt multinode-836441-m03:/home/docker/cp-test_multinode-836441-m02_multinode-836441-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441-m03 "sudo cat /home/docker/cp-test_multinode-836441-m02_multinode-836441-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 cp testdata/cp-test.txt multinode-836441-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 cp multinode-836441-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3131732686/001/cp-test_multinode-836441-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 cp multinode-836441-m03:/home/docker/cp-test.txt multinode-836441:/home/docker/cp-test_multinode-836441-m03_multinode-836441.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441 "sudo cat /home/docker/cp-test_multinode-836441-m03_multinode-836441.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 cp multinode-836441-m03:/home/docker/cp-test.txt multinode-836441-m02:/home/docker/cp-test_multinode-836441-m03_multinode-836441-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 ssh -n multinode-836441-m02 "sudo cat /home/docker/cp-test_multinode-836441-m03_multinode-836441-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.45s)
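The copy matrix above repeats one pattern: `minikube cp` a file onto a node, then read it back with `minikube ssh -n <node> "sudo cat ..."` and compare. A sketch of a single round trip follows; it is a hypothetical helper rather than the test's own code, and it assumes the binary path, profile name, and testdata layout visible in the log above.

	// cp_roundtrip_sketch.go - copy a file to a node and verify it back over ssh.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func run(args ...string) string {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%v failed: %v\n%s", args, err, out)
		}
		return string(out)
	}

	func main() {
		const profile = "multinode-836441" // profile name from the run above
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatal(err)
		}
		run("-p", profile, "cp", "testdata/cp-test.txt", profile+":/home/docker/cp-test.txt")
		got := run("-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt")
		if strings.TrimSpace(got) != strings.TrimSpace(string(want)) {
			log.Fatal("content mismatch after cp")
		}
		fmt.Println("cp round-trip OK")
	}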

                                                
                                    
TestMultiNode/serial/StopNode (2.21s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-836441 node stop m03: (1.369400632s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-836441 status: exit status 7 (419.868987ms)

                                                
                                                
-- stdout --
	multinode-836441
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-836441-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-836441-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-836441 status --alsologtostderr: exit status 7 (424.517679ms)

                                                
                                                
-- stdout --
	multinode-836441
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-836441-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-836441-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:45:58.303736  516826 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:45:58.303988  516826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:45:58.303997  516826 out.go:358] Setting ErrFile to fd 2...
	I0127 14:45:58.304001  516826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:45:58.304211  516826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
	I0127 14:45:58.304379  516826 out.go:352] Setting JSON to false
	I0127 14:45:58.304410  516826 mustload.go:65] Loading cluster: multinode-836441
	I0127 14:45:58.304543  516826 notify.go:220] Checking for updates...
	I0127 14:45:58.304837  516826 config.go:182] Loaded profile config "multinode-836441": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:45:58.304859  516826 status.go:174] checking status of multinode-836441 ...
	I0127 14:45:58.305297  516826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:45:58.305334  516826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:45:58.321526  516826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45547
	I0127 14:45:58.322067  516826 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:45:58.322741  516826 main.go:141] libmachine: Using API Version  1
	I0127 14:45:58.322770  516826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:45:58.323122  516826 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:45:58.323310  516826 main.go:141] libmachine: (multinode-836441) Calling .GetState
	I0127 14:45:58.324899  516826 status.go:371] multinode-836441 host status = "Running" (err=<nil>)
	I0127 14:45:58.324919  516826 host.go:66] Checking if "multinode-836441" exists ...
	I0127 14:45:58.325216  516826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:45:58.325254  516826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:45:58.340922  516826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39817
	I0127 14:45:58.341326  516826 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:45:58.341782  516826 main.go:141] libmachine: Using API Version  1
	I0127 14:45:58.341802  516826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:45:58.342115  516826 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:45:58.342309  516826 main.go:141] libmachine: (multinode-836441) Calling .GetIP
	I0127 14:45:58.345083  516826 main.go:141] libmachine: (multinode-836441) DBG | domain multinode-836441 has defined MAC address 52:54:00:73:2c:4c in network mk-multinode-836441
	I0127 14:45:58.345495  516826 main.go:141] libmachine: (multinode-836441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:2c:4c", ip: ""} in network mk-multinode-836441: {Iface:virbr1 ExpiryTime:2025-01-27 15:43:08 +0000 UTC Type:0 Mac:52:54:00:73:2c:4c Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:multinode-836441 Clientid:01:52:54:00:73:2c:4c}
	I0127 14:45:58.345523  516826 main.go:141] libmachine: (multinode-836441) DBG | domain multinode-836441 has defined IP address 192.168.39.223 and MAC address 52:54:00:73:2c:4c in network mk-multinode-836441
	I0127 14:45:58.345686  516826 host.go:66] Checking if "multinode-836441" exists ...
	I0127 14:45:58.346002  516826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:45:58.346045  516826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:45:58.362125  516826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I0127 14:45:58.362631  516826 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:45:58.363250  516826 main.go:141] libmachine: Using API Version  1
	I0127 14:45:58.363272  516826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:45:58.363587  516826 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:45:58.363787  516826 main.go:141] libmachine: (multinode-836441) Calling .DriverName
	I0127 14:45:58.363969  516826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 14:45:58.364001  516826 main.go:141] libmachine: (multinode-836441) Calling .GetSSHHostname
	I0127 14:45:58.366944  516826 main.go:141] libmachine: (multinode-836441) DBG | domain multinode-836441 has defined MAC address 52:54:00:73:2c:4c in network mk-multinode-836441
	I0127 14:45:58.367368  516826 main.go:141] libmachine: (multinode-836441) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:2c:4c", ip: ""} in network mk-multinode-836441: {Iface:virbr1 ExpiryTime:2025-01-27 15:43:08 +0000 UTC Type:0 Mac:52:54:00:73:2c:4c Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:multinode-836441 Clientid:01:52:54:00:73:2c:4c}
	I0127 14:45:58.367405  516826 main.go:141] libmachine: (multinode-836441) DBG | domain multinode-836441 has defined IP address 192.168.39.223 and MAC address 52:54:00:73:2c:4c in network mk-multinode-836441
	I0127 14:45:58.367530  516826 main.go:141] libmachine: (multinode-836441) Calling .GetSSHPort
	I0127 14:45:58.367718  516826 main.go:141] libmachine: (multinode-836441) Calling .GetSSHKeyPath
	I0127 14:45:58.367875  516826 main.go:141] libmachine: (multinode-836441) Calling .GetSSHUsername
	I0127 14:45:58.368025  516826 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-483699/.minikube/machines/multinode-836441/id_rsa Username:docker}
	I0127 14:45:58.444928  516826 ssh_runner.go:195] Run: systemctl --version
	I0127 14:45:58.450632  516826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:45:58.464415  516826 kubeconfig.go:125] found "multinode-836441" server: "https://192.168.39.223:8443"
	I0127 14:45:58.464461  516826 api_server.go:166] Checking apiserver status ...
	I0127 14:45:58.464510  516826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 14:45:58.477538  516826 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1110/cgroup
	W0127 14:45:58.487691  516826 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1110/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 14:45:58.487756  516826 ssh_runner.go:195] Run: ls
	I0127 14:45:58.492531  516826 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I0127 14:45:58.496887  516826 api_server.go:279] https://192.168.39.223:8443/healthz returned 200:
	ok
	I0127 14:45:58.496920  516826 status.go:463] multinode-836441 apiserver status = Running (err=<nil>)
	I0127 14:45:58.496934  516826 status.go:176] multinode-836441 status: &{Name:multinode-836441 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:45:58.496952  516826 status.go:174] checking status of multinode-836441-m02 ...
	I0127 14:45:58.497401  516826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:45:58.497456  516826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:45:58.514647  516826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44403
	I0127 14:45:58.515067  516826 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:45:58.515559  516826 main.go:141] libmachine: Using API Version  1
	I0127 14:45:58.515582  516826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:45:58.515900  516826 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:45:58.516108  516826 main.go:141] libmachine: (multinode-836441-m02) Calling .GetState
	I0127 14:45:58.517504  516826 status.go:371] multinode-836441-m02 host status = "Running" (err=<nil>)
	I0127 14:45:58.517521  516826 host.go:66] Checking if "multinode-836441-m02" exists ...
	I0127 14:45:58.517888  516826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:45:58.517926  516826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:45:58.535058  516826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37055
	I0127 14:45:58.535638  516826 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:45:58.536178  516826 main.go:141] libmachine: Using API Version  1
	I0127 14:45:58.536202  516826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:45:58.536516  516826 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:45:58.536695  516826 main.go:141] libmachine: (multinode-836441-m02) Calling .GetIP
	I0127 14:45:58.539580  516826 main.go:141] libmachine: (multinode-836441-m02) DBG | domain multinode-836441-m02 has defined MAC address 52:54:00:57:d0:5f in network mk-multinode-836441
	I0127 14:45:58.539965  516826 main.go:141] libmachine: (multinode-836441-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:d0:5f", ip: ""} in network mk-multinode-836441: {Iface:virbr1 ExpiryTime:2025-01-27 15:44:09 +0000 UTC Type:0 Mac:52:54:00:57:d0:5f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-836441-m02 Clientid:01:52:54:00:57:d0:5f}
	I0127 14:45:58.539988  516826 main.go:141] libmachine: (multinode-836441-m02) DBG | domain multinode-836441-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:57:d0:5f in network mk-multinode-836441
	I0127 14:45:58.540183  516826 host.go:66] Checking if "multinode-836441-m02" exists ...
	I0127 14:45:58.540505  516826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:45:58.540545  516826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:45:58.556591  516826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33393
	I0127 14:45:58.557028  516826 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:45:58.557616  516826 main.go:141] libmachine: Using API Version  1
	I0127 14:45:58.557638  516826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:45:58.557972  516826 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:45:58.558213  516826 main.go:141] libmachine: (multinode-836441-m02) Calling .DriverName
	I0127 14:45:58.558408  516826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 14:45:58.558428  516826 main.go:141] libmachine: (multinode-836441-m02) Calling .GetSSHHostname
	I0127 14:45:58.561429  516826 main.go:141] libmachine: (multinode-836441-m02) DBG | domain multinode-836441-m02 has defined MAC address 52:54:00:57:d0:5f in network mk-multinode-836441
	I0127 14:45:58.561906  516826 main.go:141] libmachine: (multinode-836441-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:d0:5f", ip: ""} in network mk-multinode-836441: {Iface:virbr1 ExpiryTime:2025-01-27 15:44:09 +0000 UTC Type:0 Mac:52:54:00:57:d0:5f Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-836441-m02 Clientid:01:52:54:00:57:d0:5f}
	I0127 14:45:58.561932  516826 main.go:141] libmachine: (multinode-836441-m02) DBG | domain multinode-836441-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:57:d0:5f in network mk-multinode-836441
	I0127 14:45:58.562203  516826 main.go:141] libmachine: (multinode-836441-m02) Calling .GetSSHPort
	I0127 14:45:58.562390  516826 main.go:141] libmachine: (multinode-836441-m02) Calling .GetSSHKeyPath
	I0127 14:45:58.562537  516826 main.go:141] libmachine: (multinode-836441-m02) Calling .GetSSHUsername
	I0127 14:45:58.562693  516826 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20321-483699/.minikube/machines/multinode-836441-m02/id_rsa Username:docker}
	I0127 14:45:58.644728  516826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 14:45:58.658654  516826 status.go:176] multinode-836441-m02 status: &{Name:multinode-836441-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:45:58.658711  516826 status.go:174] checking status of multinode-836441-m03 ...
	I0127 14:45:58.659081  516826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:45:58.659127  516826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:45:58.676037  516826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0127 14:45:58.676510  516826 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:45:58.677014  516826 main.go:141] libmachine: Using API Version  1
	I0127 14:45:58.677038  516826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:45:58.677353  516826 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:45:58.677592  516826 main.go:141] libmachine: (multinode-836441-m03) Calling .GetState
	I0127 14:45:58.679098  516826 status.go:371] multinode-836441-m03 host status = "Stopped" (err=<nil>)
	I0127 14:45:58.679114  516826 status.go:384] host is not running, skipping remaining checks
	I0127 14:45:58.679120  516826 status.go:176] multinode-836441-m03 status: &{Name:multinode-836441-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
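In the runs above, `minikube status` exits non-zero (exit status 7) once any host is Stopped, while the per-node table is still printed on stdout, so a caller has to read stdout rather than treat the non-zero exit as a hard failure. Below is a small sketch of that handling; the binary path and profile name are taken from this run, and the interpretation of the exit code is only what the log shows.

	// status_exit_sketch.go - run `minikube status` and keep its output even on non-zero exit.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-836441", "status")
		out, err := cmd.Output() // stdout is captured even when the command exits non-zero
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("all components running:\n%s", out)
		case errors.As(err, &exitErr):
			// stdout still holds the per-node table shown above
			fmt.Printf("status exited %d, something is stopped:\n%s", exitErr.ExitCode(), out)
		default:
			fmt.Println("could not run minikube:", err)
		}
	}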

                                                
                                    
TestMultiNode/serial/StartAfterStop (34.47s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 node start m03 -v=7 --alsologtostderr
E0127 14:46:17.122117  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-836441 node start m03 -v=7 --alsologtostderr: (33.824740897s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (34.47s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (312.12s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-836441
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-836441
E0127 14:48:14.049468  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-836441: (3m2.728049726s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-836441 --wait=true -v=8 --alsologtostderr
E0127 14:50:19.903457  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-836441 --wait=true -v=8 --alsologtostderr: (2m9.285470791s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-836441
--- PASS: TestMultiNode/serial/RestartKeepsNodes (312.12s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.28s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-836441 node delete m03: (1.720155523s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.28s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (182.09s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 stop
E0127 14:53:14.049600  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 14:53:22.973572  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-836441 stop: (3m1.906796874s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-836441 status: exit status 7 (92.586527ms)

                                                
                                                
-- stdout --
	multinode-836441
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-836441-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-836441 status --alsologtostderr: exit status 7 (87.636997ms)

                                                
                                                
-- stdout --
	multinode-836441
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-836441-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 14:54:49.593295  519525 out.go:345] Setting OutFile to fd 1 ...
	I0127 14:54:49.593430  519525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:54:49.593443  519525 out.go:358] Setting ErrFile to fd 2...
	I0127 14:54:49.593450  519525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 14:54:49.593700  519525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
	I0127 14:54:49.593881  519525 out.go:352] Setting JSON to false
	I0127 14:54:49.593918  519525 mustload.go:65] Loading cluster: multinode-836441
	I0127 14:54:49.594010  519525 notify.go:220] Checking for updates...
	I0127 14:54:49.594361  519525 config.go:182] Loaded profile config "multinode-836441": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 14:54:49.594386  519525 status.go:174] checking status of multinode-836441 ...
	I0127 14:54:49.594804  519525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:54:49.594861  519525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:54:49.610735  519525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
	I0127 14:54:49.611171  519525 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:54:49.611858  519525 main.go:141] libmachine: Using API Version  1
	I0127 14:54:49.611896  519525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:54:49.612251  519525 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:54:49.612464  519525 main.go:141] libmachine: (multinode-836441) Calling .GetState
	I0127 14:54:49.614259  519525 status.go:371] multinode-836441 host status = "Stopped" (err=<nil>)
	I0127 14:54:49.614274  519525 status.go:384] host is not running, skipping remaining checks
	I0127 14:54:49.614280  519525 status.go:176] multinode-836441 status: &{Name:multinode-836441 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 14:54:49.614310  519525 status.go:174] checking status of multinode-836441-m02 ...
	I0127 14:54:49.614598  519525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0127 14:54:49.614639  519525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0127 14:54:49.629637  519525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45753
	I0127 14:54:49.630085  519525 main.go:141] libmachine: () Calling .GetVersion
	I0127 14:54:49.630550  519525 main.go:141] libmachine: Using API Version  1
	I0127 14:54:49.630571  519525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 14:54:49.630874  519525 main.go:141] libmachine: () Calling .GetMachineName
	I0127 14:54:49.631041  519525 main.go:141] libmachine: (multinode-836441-m02) Calling .GetState
	I0127 14:54:49.632354  519525 status.go:371] multinode-836441-m02 host status = "Stopped" (err=<nil>)
	I0127 14:54:49.632378  519525 status.go:384] host is not running, skipping remaining checks
	I0127 14:54:49.632385  519525 status.go:176] multinode-836441-m02 status: &{Name:multinode-836441-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (182.09s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (91.85s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-836441 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0127 14:55:19.903975  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-836441 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m31.293968352s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-836441 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (91.85s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.8s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-836441
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-836441-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-836441-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (69.556975ms)

                                                
                                                
-- stdout --
	* [multinode-836441-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-483699/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-483699/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-836441-m02' is duplicated with machine name 'multinode-836441-m02' in profile 'multinode-836441'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-836441-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-836441-m03 --driver=kvm2  --container-runtime=containerd: (42.611303867s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-836441
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-836441: exit status 80 (225.011714ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-836441 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-836441-m03 already exists in multinode-836441-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-836441-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.80s)

                                                
                                    
TestPreload (226.9s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-825371 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0127 14:58:14.046627  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-825371 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m23.883842377s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-825371 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-825371 image pull gcr.io/k8s-minikube/busybox: (1.653612216s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-825371
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-825371: (1m30.981164313s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-825371 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0127 15:00:19.903006  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-825371 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (49.238511276s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-825371 image list
helpers_test.go:175: Cleaning up "test-preload-825371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-825371
--- PASS: TestPreload (226.90s)

                                                
                                    
TestScheduledStopUnix (116.68s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-475354 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-475354 --memory=2048 --driver=kvm2  --container-runtime=containerd: (44.890892281s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-475354 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-475354 -n scheduled-stop-475354
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-475354 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0127 15:01:39.103496  491036 retry.go:31] will retry after 124.316µs: open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/scheduled-stop-475354/pid: no such file or directory
I0127 15:01:39.104705  491036 retry.go:31] will retry after 77.592µs: open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/scheduled-stop-475354/pid: no such file or directory
I0127 15:01:39.105879  491036 retry.go:31] will retry after 161.916µs: open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/scheduled-stop-475354/pid: no such file or directory
I0127 15:01:39.107089  491036 retry.go:31] will retry after 357.134µs: open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/scheduled-stop-475354/pid: no such file or directory
I0127 15:01:39.108275  491036 retry.go:31] will retry after 355.703µs: open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/scheduled-stop-475354/pid: no such file or directory
I0127 15:01:39.109455  491036 retry.go:31] will retry after 709.77µs: open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/scheduled-stop-475354/pid: no such file or directory
I0127 15:01:39.110618  491036 retry.go:31] will retry after 1.347504ms: open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/scheduled-stop-475354/pid: no such file or directory
I0127 15:01:39.112899  491036 retry.go:31] will retry after 1.286803ms: open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/scheduled-stop-475354/pid: no such file or directory
I0127 15:01:39.115191  491036 retry.go:31] will retry after 1.97266ms: open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/scheduled-stop-475354/pid: no such file or directory
I0127 15:01:39.117434  491036 retry.go:31] will retry after 4.743274ms: open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/scheduled-stop-475354/pid: no such file or directory
I0127 15:01:39.122690  491036 retry.go:31] will retry after 8.642682ms: open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/scheduled-stop-475354/pid: no such file or directory
I0127 15:01:39.131989  491036 retry.go:31] will retry after 11.688335ms: open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/scheduled-stop-475354/pid: no such file or directory
I0127 15:01:39.144253  491036 retry.go:31] will retry after 8.112338ms: open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/scheduled-stop-475354/pid: no such file or directory
I0127 15:01:39.152467  491036 retry.go:31] will retry after 14.750917ms: open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/scheduled-stop-475354/pid: no such file or directory
I0127 15:01:39.167732  491036 retry.go:31] will retry after 25.128744ms: open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/scheduled-stop-475354/pid: no such file or directory
I0127 15:01:39.194024  491036 retry.go:31] will retry after 61.772277ms: open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/scheduled-stop-475354/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-475354 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-475354 -n scheduled-stop-475354
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-475354
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-475354 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-475354
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-475354: exit status 7 (78.057167ms)

                                                
                                                
-- stdout --
	scheduled-stop-475354
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-475354 -n scheduled-stop-475354
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-475354 -n scheduled-stop-475354: exit status 7 (70.328818ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-475354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-475354
--- PASS: TestScheduledStopUnix (116.68s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (183.01s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3679013842 start -p running-upgrade-279284 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0127 15:02:57.124144  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:03:14.048452  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3679013842 start -p running-upgrade-279284 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m12.605551038s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-279284 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-279284 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (48.719897184s)
helpers_test.go:175: Cleaning up "running-upgrade-279284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-279284
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-279284: (1.25683669s)
--- PASS: TestRunningBinaryUpgrade (183.01s)

                                                
                                    
x
+
TestKubernetesUpgrade (138.11s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-969741 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0127 15:10:45.991634  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:10:56.233940  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-969741 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m5.684539114s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-969741
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-969741: (1.464924298s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-969741 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-969741 status --format={{.Host}}: exit status 7 (69.597185ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-969741 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0127 15:11:57.678047  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-969741 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (37.601632786s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-969741 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-969741 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-969741 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (95.455634ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-969741] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-483699/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-483699/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-969741
	    minikube start -p kubernetes-upgrade-969741 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9697412 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-969741 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-969741 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-969741 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (32.137257167s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-969741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-969741
--- PASS: TestKubernetesUpgrade (138.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-142334 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-142334 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (92.527812ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-142334] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-483699/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-483699/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (164.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-737870 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-737870 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m44.919214031s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (164.92s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (104.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-142334 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-142334 --driver=kvm2  --container-runtime=containerd: (1m44.137197598s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-142334 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (104.41s)

                                                
                                    
x
+
TestPause/serial/Start (78.73s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-521230 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-521230 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m18.73038159s)
--- PASS: TestPause/serial/Start (78.73s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (37.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-142334 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-142334 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (36.122437704s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-142334 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-142334 status -o json: exit status 2 (288.770084ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-142334","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-142334
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-142334: (1.328038603s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (37.74s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-142334 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0127 15:05:19.903329  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-142334 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.059204301s)
--- PASS: TestNoKubernetes/serial/Start (28.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-737870 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [61e7a174-6ccc-406b-af46-ce55707179b8] Pending
helpers_test.go:344: "busybox" [61e7a174-6ccc-406b-af46-ce55707179b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [61e7a174-6ccc-406b-af46-ce55707179b8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004152964s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-737870 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (40.83s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-521230 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-521230 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (40.810822507s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.83s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-142334 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-142334 "sudo systemctl is-active --quiet service kubelet": exit status 1 (207.262266ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (2.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.0896131s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.04s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-142334
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-142334: (1.321007078s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (21.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-142334 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-142334 --driver=kvm2  --container-runtime=containerd: (21.956515919s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.96s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-737870 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-737870 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.513943572s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-737870 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.61s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (91.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-737870 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-737870 --alsologtostderr -v=3: (1m31.823893374s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.82s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-142334 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-142334 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.685972ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-536540 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-536540 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (111.706208ms)

                                                
                                                
-- stdout --
	* [false-536540] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20321
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20321-483699/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-483699/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 15:06:10.871431  526770 out.go:345] Setting OutFile to fd 1 ...
	I0127 15:06:10.871600  526770 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:06:10.871612  526770 out.go:358] Setting ErrFile to fd 2...
	I0127 15:06:10.871619  526770 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 15:06:10.871834  526770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20321-483699/.minikube/bin
	I0127 15:06:10.872541  526770 out.go:352] Setting JSON to false
	I0127 15:06:10.873716  526770 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":17319,"bootTime":1737973052,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0127 15:06:10.873833  526770 start.go:139] virtualization: kvm guest
	I0127 15:06:10.876391  526770 out.go:177] * [false-536540] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0127 15:06:10.877927  526770 notify.go:220] Checking for updates...
	I0127 15:06:10.877960  526770 out.go:177]   - MINIKUBE_LOCATION=20321
	I0127 15:06:10.879306  526770 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 15:06:10.880764  526770 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20321-483699/kubeconfig
	I0127 15:06:10.882265  526770 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20321-483699/.minikube
	I0127 15:06:10.883498  526770 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0127 15:06:10.884728  526770 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 15:06:10.886328  526770 config.go:182] Loaded profile config "force-systemd-env-548058": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 15:06:10.886464  526770 config.go:182] Loaded profile config "old-k8s-version-737870": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0127 15:06:10.886602  526770 config.go:182] Loaded profile config "pause-521230": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0127 15:06:10.886721  526770 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 15:06:10.925164  526770 out.go:177] * Using the kvm2 driver based on user configuration
	I0127 15:06:10.926512  526770 start.go:297] selected driver: kvm2
	I0127 15:06:10.926550  526770 start.go:901] validating driver "kvm2" against <nil>
	I0127 15:06:10.926568  526770 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 15:06:10.928393  526770 out.go:201] 
	W0127 15:06:10.929579  526770 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0127 15:06:10.930705  526770 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-536540 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-536540

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-536540

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-536540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-536540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-536540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-536540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-536540

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-536540

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-536540

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-536540

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-536540

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-536540" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-536540" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20321-483699/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 15:03:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.167:8443
  name: old-k8s-version-737870
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20321-483699/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 15:05:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.62:8443
  name: pause-521230
contexts:
- context:
    cluster: old-k8s-version-737870
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 15:03:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: old-k8s-version-737870
  name: old-k8s-version-737870
- context:
    cluster: pause-521230
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 15:05:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-521230
  name: pause-521230
current-context: ""
kind: Config
preferences: {}
users:
- name: old-k8s-version-737870
  user:
    client-certificate: /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt
    client-key: /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.key
- name: pause-521230
  user:
    client-certificate: /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/pause-521230/client.crt
    client-key: /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/pause-521230/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-536540

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536540"

                                                
                                                
----------------------- debugLogs end: false-536540 [took: 3.167962182s] --------------------------------
helpers_test.go:175: Cleaning up "false-536540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-536540
--- PASS: TestNetworkPlugins/group/false (3.45s)

                                                
                                    
x
+
TestPause/serial/Pause (0.75s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-521230 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.26s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-521230 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-521230 --output=json --layout=cluster: exit status 2 (256.469094ms)

                                                
                                                
-- stdout --
	{"Name":"pause-521230","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-521230","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-521230 --alsologtostderr -v=5
I0127 15:06:18.655401  491036 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate844371904/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc00078a950 gz:0xc00078a958 tar:0xc00078a8f0 tar.bz2:0xc00078a900 tar.gz:0xc00078a910 tar.xz:0xc00078a930 tar.zst:0xc00078a940 tbz2:0xc00078a900 tgz:0xc00078a910 txz:0xc00078a930 tzst:0xc00078a940 xz:0xc00078a990 zip:0xc00078a9a0 zst:0xc00078a998] Getters:map[file:0xc000a43600 http:0xc0008a40f0 https:0xc0008a4140] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 15:06:18.655450  491036 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate844371904/001/docker-machine-driver-kvm2
--- PASS: TestPause/serial/Unpause (0.64s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.75s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-521230 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.75s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.05s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-521230 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-521230 --alsologtostderr -v=5: (1.052564536s)
--- PASS: TestPause/serial/DeletePaused (1.05s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (67.71s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
I0127 15:06:21.455251  491036 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0127 15:06:21.455341  491036 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0127 15:06:21.498152  491036 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0127 15:06:21.498188  491036 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0127 15:06:21.498254  491036 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0127 15:06:21.498283  491036 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate844371904/002/docker-machine-driver-kvm2
I0127 15:06:21.849929  491036 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate844371904/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0 0x530a6a0] Decompressors:map[bz2:0xc00078a950 gz:0xc00078a958 tar:0xc00078a8f0 tar.bz2:0xc00078a900 tar.gz:0xc00078a910 tar.xz:0xc00078a930 tar.zst:0xc00078a940 tbz2:0xc00078a900 tgz:0xc00078a910 txz:0xc00078a930 tzst:0xc00078a940 xz:0xc00078a990 zip:0xc00078a9a0 zst:0xc00078a998] Getters:map[file:0xc0019b71d0 http:0xc00070f950 https:0xc00070f9a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0127 15:06:21.849977  491036 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate844371904/002/docker-machine-driver-kvm2
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1m7.708667481s)
--- PASS: TestPause/serial/VerifyDeletedResources (67.71s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-737870 -n old-k8s-version-737870
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-737870 -n old-k8s-version-737870: exit status 7 (69.665641ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-737870 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (171.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-737870 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-737870 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m50.919057528s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-737870 -n old-k8s-version-737870
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (171.18s)

TestStartStop/group/no-preload/serial/FirstStart (101.52s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-115279 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 15:08:14.045883  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-115279 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m41.519022464s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (101.52s)

TestStartStop/group/embed-certs/serial/FirstStart (70.14s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-723981 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-723981 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m10.14071635s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (70.14s)

TestStartStop/group/no-preload/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-115279 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5482b733-bff0-4ffa-a2b6-d085a16488d7] Pending
helpers_test.go:344: "busybox" [5482b733-bff0-4ffa-a2b6-d085a16488d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5482b733-bff0-4ffa-a2b6-d085a16488d7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.006081149s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-115279 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-115279 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-115279 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.041661968s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-115279 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/no-preload/serial/Stop (91.28s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-115279 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-115279 --alsologtostderr -v=3: (1m31.27927488s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.28s)

TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-723981 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4e5f7ba5-4713-4e1d-815f-21364fb04889] Pending
helpers_test.go:344: "busybox" [4e5f7ba5-4713-4e1d-815f-21364fb04889] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4e5f7ba5-4713-4e1d-815f-21364fb04889] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004684166s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-723981 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-723981 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-723981 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/embed-certs/serial/Stop (91.52s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-723981 --alsologtostderr -v=3
E0127 15:10:02.975150  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-723981 --alsologtostderr -v=3: (1m31.519207666s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.52s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mxwql" [bd306237-9eca-480b-bfe2-be099da0c1d1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004190388s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mxwql" [bd306237-9eca-480b-bfe2-be099da0c1d1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008296393s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-737870 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-737870 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-737870 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-737870 -n old-k8s-version-737870
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-737870 -n old-k8s-version-737870: exit status 2 (268.457567ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-737870 -n old-k8s-version-737870
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-737870 -n old-k8s-version-737870: exit status 2 (256.849851ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-737870 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-737870 -n old-k8s-version-737870
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-737870 -n old-k8s-version-737870
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.60s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-158506 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 15:10:35.738020  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:10:35.744503  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:10:35.756074  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:10:35.777681  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:10:35.819283  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:10:35.900879  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:10:36.062457  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:10:36.384248  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:10:37.025825  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:10:38.307915  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:10:40.870119  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-158506 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (56.842787341s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.84s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-115279 -n no-preload-115279
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-115279 -n no-preload-115279: exit status 7 (77.962689ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-115279 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (305.02s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-115279 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 15:11:16.715995  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-115279 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (5m4.724814597s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-115279 -n no-preload-115279
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (305.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-723981 -n embed-certs-723981
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-723981 -n embed-certs-723981: exit status 7 (93.323284ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-723981 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/embed-certs/serial/SecondStart (311.9s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-723981 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-723981 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (5m11.614046618s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-723981 -n embed-certs-723981
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (311.90s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-158506 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0af8898e-e5e1-41e5-a190-9d3ce2459ae8] Pending
helpers_test.go:344: "busybox" [0af8898e-e5e1-41e5-a190-9d3ce2459ae8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0af8898e-e5e1-41e5-a190-9d3ce2459ae8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.005391502s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-158506 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-158506 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-158506 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-158506 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-158506 --alsologtostderr -v=3: (1m31.435249762s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.44s)

TestStoppedBinaryUpgrade/Setup (0.66s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.66s)

TestStoppedBinaryUpgrade/Upgrade (96.35s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1944044523 start -p stopped-upgrade-850029 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1944044523 start -p stopped-upgrade-850029 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (50.541755312s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1944044523 -p stopped-upgrade-850029 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1944044523 -p stopped-upgrade-850029 stop: (1.306330404s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-850029 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-850029 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (44.500109269s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (96.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-158506 -n default-k8s-diff-port-158506
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-158506 -n default-k8s-diff-port-158506: exit status 7 (78.667356ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-158506 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (318.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-158506 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 15:13:14.045920  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:13:19.600296  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-158506 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (5m17.749357337s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-158506 -n default-k8s-diff-port-158506
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (318.06s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-850029
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

TestStartStop/group/newest-cni/serial/FirstStart (49.05s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-406933 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 15:15:19.903462  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-406933 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (49.050061858s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.05s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-406933 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-406933 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.117377076s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/newest-cni/serial/Stop (7.32s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-406933 --alsologtostderr -v=3
E0127 15:15:35.737747  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-406933 --alsologtostderr -v=3: (7.322779698s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.32s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-406933 -n newest-cni-406933
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-406933 -n newest-cni-406933: exit status 7 (77.56615ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-406933 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (34.05s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-406933 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1
E0127 15:16:03.441907  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-406933 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.1: (33.765130562s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-406933 -n newest-cni-406933
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.05s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qsnf7" [f0f24e8e-695d-4507-91a7-ff37cfe72b25] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.008508332s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-406933 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.39s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-406933 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-406933 -n newest-cni-406933
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-406933 -n newest-cni-406933: exit status 2 (247.624393ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-406933 -n newest-cni-406933
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-406933 -n newest-cni-406933: exit status 2 (249.08396ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-406933 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-406933 -n newest-cni-406933
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-406933 -n newest-cni-406933
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.39s)

TestNetworkPlugins/group/auto/Start (56.79s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-536540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-536540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (56.793936764s)
--- PASS: TestNetworkPlugins/group/auto/Start (56.79s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qsnf7" [f0f24e8e-695d-4507-91a7-ff37cfe72b25] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004239654s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-115279 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-115279 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.93s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-115279 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-115279 -n no-preload-115279
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-115279 -n no-preload-115279: exit status 2 (261.683302ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-115279 -n no-preload-115279
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-115279 -n no-preload-115279: exit status 2 (306.43322ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-115279 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-115279 -n no-preload-115279
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-115279 -n no-preload-115279
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.93s)

TestNetworkPlugins/group/flannel/Start (91.25s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-536540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-536540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m31.24567251s)
--- PASS: TestNetworkPlugins/group/flannel/Start (91.25s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-wcktc" [295b0737-99cf-406b-b583-2b41a0a4c848] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004303736s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-wcktc" [295b0737-99cf-406b-b583-2b41a0a4c848] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004595733s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-723981 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-723981 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.83s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-723981 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-723981 -n embed-certs-723981
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-723981 -n embed-certs-723981: exit status 2 (275.513429ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-723981 -n embed-certs-723981
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-723981 -n embed-certs-723981: exit status 2 (276.032622ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-723981 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-723981 -n embed-certs-723981
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-723981 -n embed-certs-723981
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.83s)

TestNetworkPlugins/group/enable-default-cni/Start (85.64s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-536540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-536540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m25.640683866s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.64s)

TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-536540 "pgrep -a kubelet"
I0127 15:17:15.554994  491036 config.go:182] Loaded profile config "auto-536540": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

TestNetworkPlugins/group/auto/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-536540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-hml8q" [af0cda58-88df-40f9-afd9-2e3279c43d43] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-hml8q" [af0cda58-88df-40f9-afd9-2e3279c43d43] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004804412s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.26s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-536540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-536540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-536540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
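The three connectivity checks above (DNS, localhost, hairpin) can be replayed by hand against the netcat deployment; a minimal sketch, assuming the auto-536540 profile from this run is still up and kubectl can reach it:

    kubectl --context auto-536540 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-536540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-536540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The last command is the hairpin case: the pod dials back to itself through the netcat service name rather than localhost.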

                                                
                                    
TestNetworkPlugins/group/bridge/Start (60.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-536540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-536540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m0.514436824s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.51s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kz82c" [6d8c2723-828d-4a90-95fb-14fbcfc3fdf6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005718704s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-536540 "pgrep -a kubelet"
I0127 15:18:08.580291  491036 config.go:182] Loaded profile config "flannel-536540": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-536540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-tgzff" [df922e90-ddeb-4d09-a9a4-bfa50c33d84d] Pending
helpers_test.go:344: "netcat-5d86dc444-tgzff" [df922e90-ddeb-4d09-a9a4-bfa50c33d84d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004258191s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-536540 "pgrep -a kubelet"
I0127 15:18:13.985694  491036 config.go:182] Loaded profile config "enable-default-cni-536540": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-536540 replace --force -f testdata/netcat-deployment.yaml
E0127 15:18:14.046670  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zpxqk" [0a82b037-43ec-4286-96c2-a0765bb94216] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zpxqk" [0a82b037-43ec-4286-96c2-a0765bb94216] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004764475s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-536540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-536540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-536540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-ghshq" [1c908171-f0b8-4bb9-8395-23a74cd5f9b8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004715706s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-536540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-536540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-536540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-ghshq" [1c908171-f0b8-4bb9-8395-23a74cd5f9b8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003849513s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-158506 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-158506 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-158506 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-158506 --alsologtostderr -v=1: (1.057501392s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-158506 -n default-k8s-diff-port-158506
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-158506 -n default-k8s-diff-port-158506: exit status 2 (286.891296ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-158506 -n default-k8s-diff-port-158506
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-158506 -n default-k8s-diff-port-158506: exit status 2 (350.761412ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-158506 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-158506 -n default-k8s-diff-port-158506
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-158506 -n default-k8s-diff-port-158506
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.50s)
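The pause/unpause round trip above can be reproduced manually; a minimal sketch, assuming the minikube binary built for this run and the same profile (the exit status 2 from status while components are paused is expected, as the test notes with "may be ok"):

    out/minikube-linux-amd64 pause -p default-k8s-diff-port-158506 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-158506    # prints "Paused", exit status 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-158506      # prints "Stopped", exit status 2
    out/minikube-linux-amd64 unpause -p default-k8s-diff-port-158506 --alsologtostderr -v=1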
E0127 15:19:26.675003  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/no-preload-115279/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:19:26.681491  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/no-preload-115279/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:19:26.692874  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/no-preload-115279/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:19:26.714374  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/no-preload-115279/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:19:26.755934  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/no-preload-115279/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:19:26.837533  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/no-preload-115279/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:19:26.999570  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/no-preload-115279/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:19:27.321362  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/no-preload-115279/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:19:27.963557  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/no-preload-115279/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:19:29.244978  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/no-preload-115279/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:19:31.807040  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/no-preload-115279/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:19:36.928644  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/no-preload-115279/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:19:37.126283  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/addons-384779/client.crt: no such file or directory" logger="UnhandledError"
E0127 15:19:47.170675  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/no-preload-115279/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/calico/Start (87.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-536540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-536540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m27.966634755s)
--- PASS: TestNetworkPlugins/group/calico/Start (87.97s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (93.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-536540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-536540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m33.543299211s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (93.54s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (127.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-536540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-536540 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (2m7.523609156s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (127.52s)
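Each profile in this group is started the same way and only the CNI selector changes: --cni=bridge, --cni=calico, --cni=kindnet, --enable-default-cni=true for the legacy default CNI, or a path to a manifest as in the custom-flannel case above. A minimal sketch of the custom-manifest form, assuming the same driver and runtime as this run:

    out/minikube-linux-amd64 start -p custom-flannel-536540 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=containerd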

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-536540 "pgrep -a kubelet"
I0127 15:18:45.593821  491036 config.go:182] Loaded profile config "bridge-536540": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-536540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8l8pd" [2c33d4c3-c021-4d7d-821f-003764dc31c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-8l8pd" [2c33d4c3-c021-4d7d-821f-003764dc31c6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00397099s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-536540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-536540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-536540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9zch2" [9c03a29e-c653-4de1-85a0-17182bda568b] Running
E0127 15:20:07.653047  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/no-preload-115279/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005626533s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
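The ControllerPod step waits for the CNI's controller pods to report Running. Outside the harness, a rough equivalent (not how the test itself polls, just an illustration using the label selector and namespace shown above) would be:

    kubectl --context calico-536540 get pods -n kube-system -l k8s-app=calico-node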

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-536540 "pgrep -a kubelet"
I0127 15:20:13.155271  491036 config.go:182] Loaded profile config "calico-536540": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-536540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-g62r5" [c692e437-fecd-4dc0-8383-852ee0cda1a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-g62r5" [c692e437-fecd-4dc0-8383-852ee0cda1a2] Running
E0127 15:20:19.903188  491036 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/functional-519899/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.005268523s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dqrgk" [b95fee13-555f-4b5d-a6a2-497e7e35207e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005512744s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-536540 "pgrep -a kubelet"
I0127 15:20:20.549725  491036 config.go:182] Loaded profile config "kindnet-536540": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-536540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8jg5n" [659baad2-1f65-4b3c-b48a-c43b73c53e8c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-8jg5n" [659baad2-1f65-4b3c-b48a-c43b73c53e8c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004748732s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-536540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-536540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-536540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-536540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-536540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-536540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-536540 "pgrep -a kubelet"
I0127 15:20:51.363845  491036 config.go:182] Loaded profile config "custom-flannel-536540": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-536540 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-s4r28" [724c6282-3751-4669-bfdd-156a23543e49] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-s4r28" [724c6282-3751-4669-bfdd-156a23543e49] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004229605s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-536540 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-536540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-536540 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    

Test skip (38/328)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
259 TestStartStop/group/disable-driver-mounts 0.16
279 TestNetworkPlugins/group/kubenet 3.31
287 TestNetworkPlugins/group/cilium 3.85
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-641949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-641949
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-536540 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-536540

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-536540

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-536540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-536540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-536540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-536540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-536540

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-536540

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-536540

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-536540

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-536540

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-536540" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-536540" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20321-483699/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 15:03:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.167:8443
  name: old-k8s-version-737870
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20321-483699/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 15:05:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.62:8443
  name: pause-521230
contexts:
- context:
    cluster: old-k8s-version-737870
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 15:03:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: old-k8s-version-737870
  name: old-k8s-version-737870
- context:
    cluster: pause-521230
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 15:05:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-521230
  name: pause-521230
current-context: ""
kind: Config
preferences: {}
users:
- name: old-k8s-version-737870
  user:
    client-certificate: /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt
    client-key: /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.key
- name: pause-521230
  user:
    client-certificate: /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/pause-521230/client.crt
    client-key: /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/pause-521230/client.key
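Note: the kubeconfig collected above contains only the old-k8s-version-737870 and pause-521230 entries, and current-context is "". That is consistent with every "context was not found" and "Profile ... not found" message in this debug dump: the kubenet-536540 profile was never started because the test is skipped. As a minimal sketch (not minikube's own code; the kubeconfig path is a placeholder), this is roughly how a named context can be checked with client-go before issuing the kubectl-style queries shown above:

// Minimal sketch: check whether a named context exists in a kubeconfig.
// Illustration only; the path below is a hypothetical placeholder.
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/home/jenkins/.kube/config" // hypothetical path
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}

	name := "kubenet-536540"
	if _, ok := cfg.Contexts[name]; !ok {
		// Mirrors the failure mode in the log above: the context
		// simply is not present in the collected kubeconfig.
		fmt.Printf("context %q not found; current-context is %q\n", name, cfg.CurrentContext)
		return
	}
	fmt.Printf("context %q exists\n", name)
}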

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-536540

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536540"

                                                
                                                
----------------------- debugLogs end: kubenet-536540 [took: 3.137963901s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-536540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-536540
--- SKIP: TestNetworkPlugins/group/kubenet (3.31s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-536540 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-536540

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-536540

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-536540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-536540

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-536540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-536540

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-536540

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-536540

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-536540

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-536540

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-536540

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-536540" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-536540

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-536540

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-536540

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-536540

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-536540" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-536540" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20321-483699/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 15:03:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.39.167:8443
  name: old-k8s-version-737870
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20321-483699/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 15:06:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.62:8443
  name: pause-521230
contexts:
- context:
    cluster: old-k8s-version-737870
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 15:03:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: old-k8s-version-737870
  name: old-k8s-version-737870
- context:
    cluster: pause-521230
    extensions:
    - extension:
        last-update: Mon, 27 Jan 2025 15:06:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-521230
  name: pause-521230
current-context: pause-521230
kind: Config
preferences: {}
users:
- name: old-k8s-version-737870
  user:
    client-certificate: /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.crt
    client-key: /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/old-k8s-version-737870/client.key
- name: pause-521230
  user:
    client-certificate: /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/pause-521230/client.crt
    client-key: /home/jenkins/minikube-integration/20321-483699/.minikube/profiles/pause-521230/client.key
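Unlike the earlier dump, this kubeconfig does have current-context set (pause-521230), but the debug collector still addresses the cilium-536540 context by name, so the same "context was not found" errors appear. A short sketch (an illustration with client-go, not minikube's implementation) of how a client config is typically built for an explicitly named context, and how it fails when that context is absent:

// Sketch: build a REST config for an explicitly named context, ignoring
// current-context. If the named context is missing, ClientConfig() fails
// much like the kubectl invocations in the log above.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func restConfigFor(context string) error {
	rules := clientcmd.NewDefaultClientConfigLoadingRules() // honours $KUBECONFIG
	overrides := &clientcmd.ConfigOverrides{CurrentContext: context}
	cc := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
	if _, err := cc.ClientConfig(); err != nil {
		return fmt.Errorf("building config for context %q: %w", context, err)
	}
	return nil
}

func main() {
	if err := restConfigFor("cilium-536540"); err != nil {
		fmt.Println(err) // expected here: the cilium-536540 context does not exist
	}
}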

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-536540

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-536540" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536540"

                                                
                                                
----------------------- debugLogs end: cilium-536540 [took: 3.681778338s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-536540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-536540
--- SKIP: TestNetworkPlugins/group/cilium (3.85s)

                                                
                                    