Test Report: KVM_Linux_containerd 19184

3e3b94e96544f72da351cd649c60e3a6cb2f9512:2024-07-03:35156

Tests failed: 1 of 326

|-------|--------------------------------------|--------------|
| Order | Failed test                          | Duration (s) |
|-------|--------------------------------------|--------------|
| 88    | TestFunctional/parallel/DashboardCmd | 5.34         |
|-------|--------------------------------------|--------------|
TestFunctional/parallel/DashboardCmd (5.34s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-502505 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-502505 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-502505 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-502505 --alsologtostderr -v=1] stderr:
I0703 04:30:43.639286   19291 out.go:291] Setting OutFile to fd 1 ...
I0703 04:30:43.639626   19291 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 04:30:43.639638   19291 out.go:304] Setting ErrFile to fd 2...
I0703 04:30:43.639644   19291 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 04:30:43.639887   19291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
I0703 04:30:43.640189   19291 mustload.go:65] Loading cluster: functional-502505
I0703 04:30:43.640641   19291 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 04:30:43.641238   19291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:43.641285   19291 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:30:43.655991   19291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
I0703 04:30:43.656418   19291 main.go:141] libmachine: () Calling .GetVersion
I0703 04:30:43.656939   19291 main.go:141] libmachine: Using API Version  1
I0703 04:30:43.656961   19291 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:30:43.657290   19291 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:30:43.657479   19291 main.go:141] libmachine: (functional-502505) Calling .GetState
I0703 04:30:43.658886   19291 host.go:66] Checking if "functional-502505" exists ...
I0703 04:30:43.659196   19291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:43.659242   19291 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:30:43.673621   19291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37199
I0703 04:30:43.674026   19291 main.go:141] libmachine: () Calling .GetVersion
I0703 04:30:43.674424   19291 main.go:141] libmachine: Using API Version  1
I0703 04:30:43.674447   19291 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:30:43.674775   19291 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:30:43.674987   19291 main.go:141] libmachine: (functional-502505) Calling .DriverName
I0703 04:30:43.675126   19291 api_server.go:166] Checking apiserver status ...
I0703 04:30:43.675204   19291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0703 04:30:43.675243   19291 main.go:141] libmachine: (functional-502505) Calling .GetSSHHostname
I0703 04:30:43.677793   19291 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:30:43.678216   19291 main.go:141] libmachine: (functional-502505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:3d:1d", ip: ""} in network mk-functional-502505: {Iface:virbr1 ExpiryTime:2024-07-03 05:27:57 +0000 UTC Type:0 Mac:52:54:00:5b:3d:1d Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-502505 Clientid:01:52:54:00:5b:3d:1d}
I0703 04:30:43.678249   19291 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined IP address 192.168.39.7 and MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:30:43.678407   19291 main.go:141] libmachine: (functional-502505) Calling .GetSSHPort
I0703 04:30:43.678578   19291 main.go:141] libmachine: (functional-502505) Calling .GetSSHKeyPath
I0703 04:30:43.678741   19291 main.go:141] libmachine: (functional-502505) Calling .GetSSHUsername
I0703 04:30:43.678900   19291 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19184-3680/.minikube/machines/functional-502505/id_rsa Username:docker}
I0703 04:30:43.768679   19291 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4157/cgroup
W0703 04:30:43.778252   19291 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4157/cgroup: Process exited with status 1
stdout:

stderr:
I0703 04:30:43.778324   19291 ssh_runner.go:195] Run: ls
I0703 04:30:43.783015   19291 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8441/healthz ...
I0703 04:30:43.787107   19291 api_server.go:279] https://192.168.39.7:8441/healthz returned 200:
ok
W0703 04:30:43.787144   19291 out.go:239] * Enabling dashboard ...
* Enabling dashboard ...
I0703 04:30:43.787292   19291 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 04:30:43.787308   19291 addons.go:69] Setting dashboard=true in profile "functional-502505"
I0703 04:30:43.787318   19291 addons.go:234] Setting addon dashboard=true in "functional-502505"
I0703 04:30:43.787348   19291 host.go:66] Checking if "functional-502505" exists ...
I0703 04:30:43.787648   19291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:43.787688   19291 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:30:43.802431   19291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37793
I0703 04:30:43.802792   19291 main.go:141] libmachine: () Calling .GetVersion
I0703 04:30:43.803277   19291 main.go:141] libmachine: Using API Version  1
I0703 04:30:43.803304   19291 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:30:43.803623   19291 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:30:43.804047   19291 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:43.804082   19291 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:30:43.818507   19291 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41567
I0703 04:30:43.818870   19291 main.go:141] libmachine: () Calling .GetVersion
I0703 04:30:43.819281   19291 main.go:141] libmachine: Using API Version  1
I0703 04:30:43.819303   19291 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:30:43.819616   19291 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:30:43.819784   19291 main.go:141] libmachine: (functional-502505) Calling .GetState
I0703 04:30:43.821200   19291 main.go:141] libmachine: (functional-502505) Calling .DriverName
I0703 04:30:43.823567   19291 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0703 04:30:43.825140   19291 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0703 04:30:43.826448   19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0703 04:30:43.826462   19291 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0703 04:30:43.826479   19291 main.go:141] libmachine: (functional-502505) Calling .GetSSHHostname
I0703 04:30:43.828874   19291 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:30:43.829199   19291 main.go:141] libmachine: (functional-502505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:3d:1d", ip: ""} in network mk-functional-502505: {Iface:virbr1 ExpiryTime:2024-07-03 05:27:57 +0000 UTC Type:0 Mac:52:54:00:5b:3d:1d Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-502505 Clientid:01:52:54:00:5b:3d:1d}
I0703 04:30:43.829226   19291 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined IP address 192.168.39.7 and MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:30:43.829328   19291 main.go:141] libmachine: (functional-502505) Calling .GetSSHPort
I0703 04:30:43.829491   19291 main.go:141] libmachine: (functional-502505) Calling .GetSSHKeyPath
I0703 04:30:43.829616   19291 main.go:141] libmachine: (functional-502505) Calling .GetSSHUsername
I0703 04:30:43.829749   19291 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19184-3680/.minikube/machines/functional-502505/id_rsa Username:docker}
I0703 04:30:43.960880   19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0703 04:30:43.960908   19291 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0703 04:30:43.998156   19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0703 04:30:43.998187   19291 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0703 04:30:44.022167   19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0703 04:30:44.022191   19291 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0703 04:30:44.040119   19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0703 04:30:44.040144   19291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0703 04:30:44.057560   19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I0703 04:30:44.057586   19291 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0703 04:30:44.075572   19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0703 04:30:44.075596   19291 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0703 04:30:44.093339   19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0703 04:30:44.093364   19291 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0703 04:30:44.111881   19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0703 04:30:44.111902   19291 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0703 04:30:44.129647   19291 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0703 04:30:44.129670   19291 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0703 04:30:44.146537   19291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0703 04:30:45.803783   19291 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.657173748s)
I0703 04:30:45.803903   19291 main.go:141] libmachine: Making call to close driver server
I0703 04:30:45.803929   19291 main.go:141] libmachine: (functional-502505) Calling .Close
I0703 04:30:45.804193   19291 main.go:141] libmachine: Successfully made call to close driver server
I0703 04:30:45.804216   19291 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 04:30:45.804225   19291 main.go:141] libmachine: Making call to close driver server
I0703 04:30:45.804233   19291 main.go:141] libmachine: (functional-502505) Calling .Close
I0703 04:30:45.804506   19291 main.go:141] libmachine: Successfully made call to close driver server
I0703 04:30:45.804510   19291 main.go:141] libmachine: (functional-502505) DBG | Closing plugin on server side
I0703 04:30:45.804523   19291 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 04:30:45.806334   19291 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-502505 addons enable metrics-server

I0703 04:30:45.808113   19291 addons.go:197] Writing out "functional-502505" config to set dashboard=true...
W0703 04:30:45.808347   19291 out.go:239] * Verifying dashboard health ...
* Verifying dashboard health ...
I0703 04:30:45.809208   19291 kapi.go:59] client config for functional-502505: &rest.Config{Host:"https://192.168.39.7:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt", KeyFile:"/home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.key", CAFile:"/home/jenkins/minikube-integration/19184-3680/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfc5a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0703 04:30:45.830220   19291 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  f516666f-9709-4c64-886e-e779c2a2620c 818 0 2024-07-03 04:30:45 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2024-07-03 04:30:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.100.128.190,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.100.128.190],IPFamilies:[IPv4],AllocateLoadBalan
cerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0703 04:30:45.830338   19291 out.go:239] * Launching proxy ...
* Launching proxy ...
I0703 04:30:45.830398   19291 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-502505 proxy --port 36195]
I0703 04:30:45.830639   19291 dashboard.go:157] Waiting for kubectl to output host:port ...
I0703 04:30:45.894919   19291 out.go:177] 
W0703 04:30:45.896434   19291 out.go:239] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W0703 04:30:45.896453   19291 out.go:239] * 
* 
W0703 04:30:45.899394   19291 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0703 04:30:45.900931   19291 out.go:177] 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-502505 -n functional-502505
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-502505 logs -n 25: (2.206268632s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                           Args                           |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service   | functional-502505 service list                           | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | -o json                                                  |                   |         |         |                     |                     |
	| ssh       | functional-502505 ssh findmnt                            | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | -T /mount3                                               |                   |         |         |                     |                     |
	| service   | functional-502505 service                                | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | --namespace=default --https                              |                   |         |         |                     |                     |
	|           | --url hello-node                                         |                   |         |         |                     |                     |
	| image     | functional-502505 image load --daemon                    | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | gcr.io/google-containers/addon-resizer:functional-502505 |                   |         |         |                     |                     |
	|           | --alsologtostderr                                        |                   |         |         |                     |                     |
	| mount     | -p functional-502505                                     | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC |                     |
	|           | --kill=true                                              |                   |         |         |                     |                     |
	| ssh       | functional-502505 ssh sudo cat                           | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | /etc/test/nested/copy/10844/hosts                        |                   |         |         |                     |                     |
	| service   | functional-502505                                        | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | service hello-node --url                                 |                   |         |         |                     |                     |
	|           | --format={{.IP}}                                         |                   |         |         |                     |                     |
	| service   | functional-502505 service                                | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | hello-node --url                                         |                   |         |         |                     |                     |
	| ssh       | functional-502505 ssh sudo cat                           | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | /etc/ssl/certs/10844.pem                                 |                   |         |         |                     |                     |
	| ssh       | functional-502505 ssh sudo cat                           | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | /usr/share/ca-certificates/10844.pem                     |                   |         |         |                     |                     |
	| ssh       | functional-502505 ssh sudo cat                           | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | /etc/ssl/certs/51391683.0                                |                   |         |         |                     |                     |
	| ssh       | functional-502505 ssh sudo cat                           | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | /etc/ssl/certs/108442.pem                                |                   |         |         |                     |                     |
	| ssh       | functional-502505 ssh sudo cat                           | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | /usr/share/ca-certificates/108442.pem                    |                   |         |         |                     |                     |
	| ssh       | functional-502505 ssh sudo cat                           | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | /etc/ssl/certs/3ec20f2e.0                                |                   |         |         |                     |                     |
	| cp        | functional-502505 cp                                     | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | testdata/cp-test.txt                                     |                   |         |         |                     |                     |
	|           | /home/docker/cp-test.txt                                 |                   |         |         |                     |                     |
	| ssh       | functional-502505 ssh -n                                 | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | functional-502505 sudo cat                               |                   |         |         |                     |                     |
	|           | /home/docker/cp-test.txt                                 |                   |         |         |                     |                     |
	| cp        | functional-502505 cp                                     | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | functional-502505:/home/docker/cp-test.txt               |                   |         |         |                     |                     |
	|           | /tmp/TestFunctionalparallelCpCmd55629243/001/cp-test.txt |                   |         |         |                     |                     |
	| ssh       | functional-502505 ssh -n                                 | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | functional-502505 sudo cat                               |                   |         |         |                     |                     |
	|           | /home/docker/cp-test.txt                                 |                   |         |         |                     |                     |
	| cp        | functional-502505 cp                                     | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | testdata/cp-test.txt                                     |                   |         |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                          |                   |         |         |                     |                     |
	| ssh       | functional-502505 ssh -n                                 | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	|           | functional-502505 sudo cat                               |                   |         |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                          |                   |         |         |                     |                     |
	| start     | -p functional-502505                                     | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC |                     |
	|           | --dry-run --memory                                       |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                  |                   |         |         |                     |                     |
	|           | --driver=kvm2                                            |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                           |                   |         |         |                     |                     |
	| start     | -p functional-502505                                     | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC |                     |
	|           | --dry-run --alsologtostderr                              |                   |         |         |                     |                     |
	|           | -v=1 --driver=kvm2                                       |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                           |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                       | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC |                     |
	|           | -p functional-502505                                     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                   |                   |         |         |                     |                     |
	| image     | functional-502505 image ls                               | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC | 03 Jul 24 04:30 UTC |
	| image     | functional-502505 image load --daemon                    | functional-502505 | jenkins | v1.33.1 | 03 Jul 24 04:30 UTC |                     |
	|           | gcr.io/google-containers/addon-resizer:functional-502505 |                   |         |         |                     |                     |
	|           | --alsologtostderr                                        |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 04:30:43
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 04:30:43.513038   19263 out.go:291] Setting OutFile to fd 1 ...
	I0703 04:30:43.513195   19263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 04:30:43.513208   19263 out.go:304] Setting ErrFile to fd 2...
	I0703 04:30:43.513215   19263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 04:30:43.513414   19263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
	I0703 04:30:43.514034   19263 out.go:298] Setting JSON to false
	I0703 04:30:43.515070   19263 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":787,"bootTime":1719980256,"procs":285,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 04:30:43.515135   19263 start.go:139] virtualization: kvm guest
	I0703 04:30:43.517290   19263 out.go:177] * [functional-502505] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 04:30:43.518637   19263 notify.go:220] Checking for updates...
	I0703 04:30:43.518664   19263 out.go:177]   - MINIKUBE_LOCATION=19184
	I0703 04:30:43.519943   19263 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 04:30:43.521178   19263 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19184-3680/kubeconfig
	I0703 04:30:43.522406   19263 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19184-3680/.minikube
	I0703 04:30:43.523681   19263 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 04:30:43.525040   19263 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 04:30:43.526817   19263 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0703 04:30:43.527224   19263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:30:43.527274   19263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:30:43.542481   19263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I0703 04:30:43.542833   19263 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:30:43.543341   19263 main.go:141] libmachine: Using API Version  1
	I0703 04:30:43.543360   19263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:30:43.543757   19263 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:30:43.544015   19263 main.go:141] libmachine: (functional-502505) Calling .DriverName
	I0703 04:30:43.544260   19263 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 04:30:43.544546   19263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:30:43.544582   19263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:30:43.559258   19263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36551
	I0703 04:30:43.559682   19263 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:30:43.560185   19263 main.go:141] libmachine: Using API Version  1
	I0703 04:30:43.560204   19263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:30:43.560490   19263 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:30:43.560671   19263 main.go:141] libmachine: (functional-502505) Calling .DriverName
	I0703 04:30:43.593146   19263 out.go:177] * Using the kvm2 driver based on existing profile
	I0703 04:30:43.594341   19263 start.go:297] selected driver: kvm2
	I0703 04:30:43.594356   19263 start.go:901] validating driver "kvm2" against &{Name:functional-502505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-502505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 04:30:43.594473   19263 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 04:30:43.595597   19263 cni.go:84] Creating CNI manager for ""
	I0703 04:30:43.595614   19263 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0703 04:30:43.595657   19263 start.go:340] cluster config:
	{Name:functional-502505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-502505 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 04:30:43.597290   19263 out.go:177] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4fca3c48aa9ba       fffffc90d343c       5 seconds ago        Running             myfrontend                0                   1b62f8db77069       sp-pod
	82e5f2ae5bea8       82e4c8a736a4f       14 seconds ago       Running             echoserver                0                   32c4ed3dda52b       hello-node-6d85cfcfd8-cwf9k
	3f7350c99caad       56cc512116c8f       14 seconds ago       Exited              mount-munger              0                   c3a607b98ddc1       busybox-mount
	b90969fb0f122       82e4c8a736a4f       17 seconds ago       Running             echoserver                0                   b663a389fedbb       hello-node-connect-57b4589c47-7njdl
	0481ac4db61f9       cbb01a7bd410d       46 seconds ago       Running             coredns                   2                   37129e09f8ea0       coredns-7db6d8ff4d-fts6d
	2d75fb5db730d       53c535741fb44       46 seconds ago       Running             kube-proxy                2                   d4ce837288fa2       kube-proxy-spsjc
	4fa84086025c8       6e38f40d628db       46 seconds ago       Running             storage-provisioner       4                   44694fff18b49       storage-provisioner
	6ffa7f8d09cb6       56ce0fd9fb532       50 seconds ago       Running             kube-apiserver            0                   2906ff872c5fe       kube-apiserver-functional-502505
	df86e2c18a48b       7820c83aa1394       50 seconds ago       Running             kube-scheduler            2                   bd44e5af5bfaa       kube-scheduler-functional-502505
	b97699681e706       e874818b3caac       50 seconds ago       Running             kube-controller-manager   2                   90c2db6730e7b       kube-controller-manager-functional-502505
	09bfde035a632       3861cfcd7c04c       50 seconds ago       Running             etcd                      2                   838b26db56dee       etcd-functional-502505
	e10f91f31df27       6e38f40d628db       53 seconds ago       Exited              storage-provisioner       3                   44694fff18b49       storage-provisioner
	a6b795c693f2e       e874818b3caac       About a minute ago   Exited              kube-controller-manager   1                   90c2db6730e7b       kube-controller-manager-functional-502505
	8c920df2e33c1       3861cfcd7c04c       About a minute ago   Exited              etcd                      1                   838b26db56dee       etcd-functional-502505
	c8e41c6173772       7820c83aa1394       About a minute ago   Exited              kube-scheduler            1                   bd44e5af5bfaa       kube-scheduler-functional-502505
	11b5748a2e821       cbb01a7bd410d       About a minute ago   Exited              coredns                   1                   37129e09f8ea0       coredns-7db6d8ff4d-fts6d
	8ffcd519e3130       53c535741fb44       About a minute ago   Exited              kube-proxy                1                   d4ce837288fa2       kube-proxy-spsjc
	
	
	==> containerd <==
	Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.133912286Z" level=info msg="ImageCreate event name:\"docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.135757622Z" level=info msg="Pulled image \"docker.io/nginx:latest\" with image id \"sha256:fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c\", repo tag \"docker.io/library/nginx:latest\", repo digest \"docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df\", size \"70984068\" in 6.240902466s"
	Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.135831946Z" level=info msg="PullImage \"docker.io/nginx:latest\" returns image reference \"sha256:fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c\""
	Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.141773130Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.144195488Z" level=info msg="CreateContainer within sandbox \"1b62f8db7706904cfbcd95f96cbb76cf1d1a8175002340b9efafefe6ca7fd6ec\" for container &ContainerMetadata{Name:myfrontend,Attempt:0,}"
	Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.146469735Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.175051855Z" level=info msg="CreateContainer within sandbox \"1b62f8db7706904cfbcd95f96cbb76cf1d1a8175002340b9efafefe6ca7fd6ec\" for &ContainerMetadata{Name:myfrontend,Attempt:0,} returns container id \"4fca3c48aa9ba87bb9bc9fb80fbeb7df27f94b46b716ffa816ca93108e93a50c\""
	Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.175880096Z" level=info msg="StartContainer for \"4fca3c48aa9ba87bb9bc9fb80fbeb7df27f94b46b716ffa816ca93108e93a50c\""
	Jul 03 04:30:42 functional-502505 containerd[3376]: time="2024-07-03T04:30:42.289807814Z" level=info msg="StartContainer for \"4fca3c48aa9ba87bb9bc9fb80fbeb7df27f94b46b716ffa816ca93108e93a50c\" returns successfully"
	Jul 03 04:30:43 functional-502505 containerd[3376]: time="2024-07-03T04:30:43.044244039Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jul 03 04:30:43 functional-502505 containerd[3376]: time="2024-07-03T04:30:43.373937965Z" level=info msg="ImageCreate event name:\"gcr.io/google-containers/addon-resizer:functional-502505\""
	Jul 03 04:30:43 functional-502505 containerd[3376]: time="2024-07-03T04:30:43.381286200Z" level=info msg="ImageCreate event name:\"sha256:b08046378d77c9dfdab5fbe738244949bc9d487d7b394813b7209ff1f43b82cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jul 03 04:30:43 functional-502505 containerd[3376]: time="2024-07-03T04:30:43.382371336Z" level=info msg="ImageUpdate event name:\"gcr.io/google-containers/addon-resizer:functional-502505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jul 03 04:30:46 functional-502505 containerd[3376]: time="2024-07-03T04:30:46.596788326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-779776cb65-mnh76,Uid:e6761bd4-0a9c-40f0-bc6e-b6455a5a7b9c,Namespace:kubernetes-dashboard,Attempt:0,}"
	Jul 03 04:30:46 functional-502505 containerd[3376]: time="2024-07-03T04:30:46.616016342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-b5fc48f67-vktg7,Uid:ceb7d87e-e07a-4c85-b378-65b5ef7814a9,Namespace:kubernetes-dashboard,Attempt:0,}"
	Jul 03 04:30:46 functional-502505 containerd[3376]: time="2024-07-03T04:30:46.925064993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 03 04:30:46 functional-502505 containerd[3376]: time="2024-07-03T04:30:46.925136555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 03 04:30:46 functional-502505 containerd[3376]: time="2024-07-03T04:30:46.925150819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 03 04:30:46 functional-502505 containerd[3376]: time="2024-07-03T04:30:46.925226826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 03 04:30:47 functional-502505 containerd[3376]: time="2024-07-03T04:30:47.068844286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-779776cb65-mnh76,Uid:e6761bd4-0a9c-40f0-bc6e-b6455a5a7b9c,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"4801f71dffca7200da0678dd9d5fe2949693d55782526e5ad3578fed835669dc\""
	Jul 03 04:30:47 functional-502505 containerd[3376]: time="2024-07-03T04:30:47.101002773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 03 04:30:47 functional-502505 containerd[3376]: time="2024-07-03T04:30:47.102176946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 03 04:30:47 functional-502505 containerd[3376]: time="2024-07-03T04:30:47.104511065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 03 04:30:47 functional-502505 containerd[3376]: time="2024-07-03T04:30:47.106380100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 03 04:30:47 functional-502505 containerd[3376]: time="2024-07-03T04:30:47.277157703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-b5fc48f67-vktg7,Uid:ceb7d87e-e07a-4c85-b378-65b5ef7814a9,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"a0de7d5672ea36a7ff1283f1f434ed9115282cd8c519494c032e3b051be02afb\""
	
	
	==> coredns [0481ac4db61f9d01c91a73599ce0c4e3bebdcd27c1d03c7799b0c5c360530d84] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46565 - 64295 "HINFO IN 8235943687636550445.7495420946416754712. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014720995s
	
	
	==> coredns [11b5748a2e821c10dc0c8d733cbbf50e6776a62dcdc3333fef860f3b5b959221] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58748 - 16425 "HINFO IN 7703097324571266106.1887883732849068066. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014464809s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-502505
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-502505
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e34d4fd348f73f0f8af294cc2737aeb8da39e8d
	                    minikube.k8s.io/name=functional-502505
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_03T04_28_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 04:28:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-502505
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 04:30:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 04:29:59 +0000   Wed, 03 Jul 2024 04:28:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 04:29:59 +0000   Wed, 03 Jul 2024 04:28:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 04:29:59 +0000   Wed, 03 Jul 2024 04:28:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 04:29:59 +0000   Wed, 03 Jul 2024 04:28:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    functional-502505
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 9345ffc03e304bb79c8a4a46fd9708fb
	  System UUID:                9345ffc0-3e30-4bb7-9c8a-4a46fd9708fb
	  Boot ID:                    fb3954ad-56a4-4777-b32f-e12c79ee1fd8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.18
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6d85cfcfd8-cwf9k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  default                     hello-node-connect-57b4589c47-7njdl          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  default                     mysql-64454c8b5c-x9ns2                       600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    7s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-7db6d8ff4d-fts6d                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m10s
	  kube-system                 etcd-functional-502505                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m24s
	  kube-system                 kube-apiserver-functional-502505             250m (12%)    0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-controller-manager-functional-502505    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-spsjc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-functional-502505             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-vktg7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-mnh76        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m8s                   kube-proxy       
	  Normal  Starting                 46s                    kube-proxy       
	  Normal  Starting                 98s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m30s (x8 over 2m30s)  kubelet          Node functional-502505 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m30s (x8 over 2m30s)  kubelet          Node functional-502505 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m30s (x7 over 2m30s)  kubelet          Node functional-502505 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m24s                  kubelet          Node functional-502505 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m24s                  kubelet          Node functional-502505 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m24s                  kubelet          Node functional-502505 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m23s                  kubelet          Node functional-502505 status is now: NodeReady
	  Normal  RegisteredNode           2m10s                  node-controller  Node functional-502505 event: Registered Node functional-502505 in Controller
	  Normal  Starting                 103s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)    kubelet          Node functional-502505 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)    kubelet          Node functional-502505 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)    kubelet          Node functional-502505 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           87s                    node-controller  Node functional-502505 event: Registered Node functional-502505 in Controller
	  Normal  NodeHasNoDiskPressure    51s (x8 over 51s)      kubelet          Node functional-502505 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  51s (x8 over 51s)      kubelet          Node functional-502505 status is now: NodeHasSufficientMemory
	  Normal  Starting                 51s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     51s (x7 over 51s)      kubelet          Node functional-502505 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  51s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           34s                    node-controller  Node functional-502505 event: Registered Node functional-502505 in Controller
	
	
	==> dmesg <==
	[  +0.159765] systemd-fstab-generator[2002]: Ignoring "noauto" option for root device
	[  +0.329422] systemd-fstab-generator[2031]: Ignoring "noauto" option for root device
	[  +1.854123] systemd-fstab-generator[2189]: Ignoring "noauto" option for root device
	[  +5.816165] kauditd_printk_skb: 122 callbacks suppressed
	[Jul 3 04:29] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.598418] systemd-fstab-generator[2690]: Ignoring "noauto" option for root device
	[  +4.573552] kauditd_printk_skb: 36 callbacks suppressed
	[ +15.140776] systemd-fstab-generator[3003]: Ignoring "noauto" option for root device
	[ +12.483347] systemd-fstab-generator[3301]: Ignoring "noauto" option for root device
	[  +0.086376] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.070087] systemd-fstab-generator[3313]: Ignoring "noauto" option for root device
	[  +0.161981] systemd-fstab-generator[3327]: Ignoring "noauto" option for root device
	[  +0.137117] systemd-fstab-generator[3339]: Ignoring "noauto" option for root device
	[  +0.298061] systemd-fstab-generator[3368]: Ignoring "noauto" option for root device
	[  +1.943183] systemd-fstab-generator[3529]: Ignoring "noauto" option for root device
	[ +11.091048] kauditd_printk_skb: 126 callbacks suppressed
	[  +6.368397] systemd-fstab-generator[3952]: Ignoring "noauto" option for root device
	[Jul 3 04:30] kauditd_printk_skb: 39 callbacks suppressed
	[ +14.969257] systemd-fstab-generator[4407]: Ignoring "noauto" option for root device
	[  +0.082234] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.123777] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.381685] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.478942] kauditd_printk_skb: 37 callbacks suppressed
	[  +7.629937] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.473772] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [09bfde035a6322616aa99ea4e7d3e6737f116467c03f7e48d7e9fe84a2ca512b] <==
	{"level":"info","ts":"2024-07-03T04:29:58.608978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bb39151d8411994b elected leader bb39151d8411994b at term 4"}
	{"level":"info","ts":"2024-07-03T04:29:58.614048Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"bb39151d8411994b","local-member-attributes":"{Name:functional-502505 ClientURLs:[https://192.168.39.7:2379]}","request-path":"/0/members/bb39151d8411994b/attributes","cluster-id":"3202df3d6e5aadcb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-03T04:29:58.614223Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-03T04:29:58.616172Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-03T04:29:58.61856Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-03T04:29:58.618834Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-03T04:29:58.618864Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-03T04:29:58.622195Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.7:2379"}
	{"level":"warn","ts":"2024-07-03T04:30:40.358589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.919616ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/hello-node-connect\" ","response":"range_response_count:1 size:655"}
	{"level":"info","ts":"2024-07-03T04:30:40.358996Z","caller":"traceutil/trace.go:171","msg":"trace[453369321] range","detail":"{range_begin:/registry/services/endpoints/default/hello-node-connect; range_end:; response_count:1; response_revision:737; }","duration":"116.377532ms","start":"2024-07-03T04:30:40.242599Z","end":"2024-07-03T04:30:40.358976Z","steps":["trace[453369321] 'agreement among raft nodes before linearized reading'  (duration: 115.694157ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T04:30:40.359296Z","caller":"traceutil/trace.go:171","msg":"trace[1858571883] transaction","detail":"{read_only:false; response_revision:737; number_of_response:1; }","duration":"119.227809ms","start":"2024-07-03T04:30:40.240055Z","end":"2024-07-03T04:30:40.359283Z","steps":["trace[1858571883] 'process raft request'  (duration: 108.665914ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T04:30:40.3597Z","caller":"traceutil/trace.go:171","msg":"trace[520494723] linearizableReadLoop","detail":"{readStateIndex:801; appliedIndex:800; }","duration":"111.728249ms","start":"2024-07-03T04:30:40.242625Z","end":"2024-07-03T04:30:40.354353Z","steps":["trace[520494723] 'read index received'  (duration: 105.980669ms)","trace[520494723] 'applied index is now lower than readState.Index'  (duration: 5.74522ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-03T04:30:45.059988Z","caller":"traceutil/trace.go:171","msg":"trace[34802432] transaction","detail":"{read_only:false; response_revision:780; number_of_response:1; }","duration":"106.467116ms","start":"2024-07-03T04:30:44.953503Z","end":"2024-07-03T04:30:45.059971Z","steps":["trace[34802432] 'process raft request'  (duration: 85.951244ms)","trace[34802432] 'compare'  (duration: 20.438802ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-03T04:30:45.060662Z","caller":"traceutil/trace.go:171","msg":"trace[257287578] transaction","detail":"{read_only:false; response_revision:781; number_of_response:1; }","duration":"100.568337ms","start":"2024-07-03T04:30:44.960085Z","end":"2024-07-03T04:30:45.060653Z","steps":["trace[257287578] 'process raft request'  (duration: 100.313228ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T04:30:45.064979Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.297327ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:1 size:4584"}
	{"level":"info","ts":"2024-07-03T04:30:45.065046Z","caller":"traceutil/trace.go:171","msg":"trace[1604445107] range","detail":"{range_begin:/registry/deployments/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:783; }","duration":"101.396098ms","start":"2024-07-03T04:30:44.963637Z","end":"2024-07-03T04:30:45.065033Z","steps":["trace[1604445107] 'agreement among raft nodes before linearized reading'  (duration: 101.240555ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T04:30:45.065158Z","caller":"traceutil/trace.go:171","msg":"trace[633190272] transaction","detail":"{read_only:false; response_revision:782; number_of_response:1; }","duration":"105.019503ms","start":"2024-07-03T04:30:44.960133Z","end":"2024-07-03T04:30:45.065152Z","steps":["trace[633190272] 'process raft request'  (duration: 100.364237ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-03T04:30:45.090032Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.091343ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-03T04:30:45.091945Z","caller":"traceutil/trace.go:171","msg":"trace[1414394451] range","detail":"{range_begin:/registry/rolebindings/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:0; response_revision:783; }","duration":"122.036209ms","start":"2024-07-03T04:30:44.969893Z","end":"2024-07-03T04:30:45.09193Z","steps":["trace[1414394451] 'agreement among raft nodes before linearized reading'  (duration: 119.47874ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T04:30:45.339947Z","caller":"traceutil/trace.go:171","msg":"trace[333853410] transaction","detail":"{read_only:false; response_revision:792; number_of_response:1; }","duration":"114.409227ms","start":"2024-07-03T04:30:45.225521Z","end":"2024-07-03T04:30:45.33993Z","steps":["trace[333853410] 'process raft request'  (duration: 107.04566ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T04:30:45.339935Z","caller":"traceutil/trace.go:171","msg":"trace[654767916] linearizableReadLoop","detail":"{readStateIndex:857; appliedIndex:856; }","duration":"102.913998ms","start":"2024-07-03T04:30:45.237Z","end":"2024-07-03T04:30:45.339914Z","steps":["trace[654767916] 'read index received'  (duration: 95.529893ms)","trace[654767916] 'applied index is now lower than readState.Index'  (duration: 7.383229ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-03T04:30:45.340147Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.127806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kubernetes-dashboard\" ","response":"range_response_count:1 size:897"}
	{"level":"info","ts":"2024-07-03T04:30:45.340182Z","caller":"traceutil/trace.go:171","msg":"trace[148790479] range","detail":"{range_begin:/registry/namespaces/kubernetes-dashboard; range_end:; response_count:1; response_revision:792; }","duration":"103.202894ms","start":"2024-07-03T04:30:45.23697Z","end":"2024-07-03T04:30:45.340173Z","steps":["trace[148790479] 'agreement among raft nodes before linearized reading'  (duration: 103.003527ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T04:30:45.341658Z","caller":"traceutil/trace.go:171","msg":"trace[199208681] transaction","detail":"{read_only:false; response_revision:794; number_of_response:1; }","duration":"103.462782ms","start":"2024-07-03T04:30:45.238186Z","end":"2024-07-03T04:30:45.341649Z","steps":["trace[199208681] 'process raft request'  (duration: 103.426321ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-03T04:30:45.341952Z","caller":"traceutil/trace.go:171","msg":"trace[372107416] transaction","detail":"{read_only:false; response_revision:793; number_of_response:1; }","duration":"104.799118ms","start":"2024-07-03T04:30:45.237135Z","end":"2024-07-03T04:30:45.341934Z","steps":["trace[372107416] 'process raft request'  (duration: 104.415622ms)"],"step_count":1}
	
	
	==> etcd [8c920df2e33c1a890b3c38828cc235ecd658e4df72447433c5e4733ba69c3c67] <==
	{"level":"info","ts":"2024-07-03T04:29:05.234011Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.7:2380"}
	{"level":"info","ts":"2024-07-03T04:29:06.748596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-03T04:29:06.748838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-03T04:29:06.749078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b received MsgPreVoteResp from bb39151d8411994b at term 2"}
	{"level":"info","ts":"2024-07-03T04:29:06.749281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b became candidate at term 3"}
	{"level":"info","ts":"2024-07-03T04:29:06.749485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b received MsgVoteResp from bb39151d8411994b at term 3"}
	{"level":"info","ts":"2024-07-03T04:29:06.749589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b became leader at term 3"}
	{"level":"info","ts":"2024-07-03T04:29:06.74971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bb39151d8411994b elected leader bb39151d8411994b at term 3"}
	{"level":"info","ts":"2024-07-03T04:29:06.756057Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"bb39151d8411994b","local-member-attributes":"{Name:functional-502505 ClientURLs:[https://192.168.39.7:2379]}","request-path":"/0/members/bb39151d8411994b/attributes","cluster-id":"3202df3d6e5aadcb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-03T04:29:06.756075Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-03T04:29:06.756515Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-03T04:29:06.756548Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-03T04:29:06.756106Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-03T04:29:06.759482Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.7:2379"}
	{"level":"info","ts":"2024-07-03T04:29:06.760668Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-03T04:29:39.878455Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-03T04:29:39.878573Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-502505","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.7:2380"],"advertise-client-urls":["https://192.168.39.7:2379"]}
	{"level":"warn","ts":"2024-07-03T04:29:39.878657Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-03T04:29:39.878736Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-03T04:29:39.896249Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.7:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-03T04:29:39.896296Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.7:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-03T04:29:39.896527Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"bb39151d8411994b","current-leader-member-id":"bb39151d8411994b"}
	{"level":"info","ts":"2024-07-03T04:29:39.900187Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.7:2380"}
	{"level":"info","ts":"2024-07-03T04:29:39.900365Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.7:2380"}
	{"level":"info","ts":"2024-07-03T04:29:39.900455Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-502505","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.7:2380"],"advertise-client-urls":["https://192.168.39.7:2379"]}
	
	
	==> kernel <==
	 04:30:47 up 3 min,  0 users,  load average: 2.08, 0.72, 0.26
	Linux functional-502505 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6ffa7f8d09cb67a67f310c0d98c2f76308ceb96177f628f863713b7f9761a577] <==
	I0703 04:29:59.968836       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0703 04:29:59.969045       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0703 04:29:59.969521       1 aggregator.go:165] initial CRD sync complete...
	I0703 04:29:59.969662       1 autoregister_controller.go:141] Starting autoregister controller
	I0703 04:29:59.969775       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0703 04:29:59.969811       1 cache.go:39] Caches are synced for autoregister controller
	I0703 04:29:59.975094       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0703 04:29:59.999650       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0703 04:30:00.048446       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0703 04:30:00.861228       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0703 04:30:01.912367       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0703 04:30:01.928728       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0703 04:30:01.994769       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0703 04:30:02.034885       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0703 04:30:02.049073       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0703 04:30:18.675192       1 controller.go:615] quota admission added evaluator for: endpoints
	I0703 04:30:21.900719       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.80.80"}
	I0703 04:30:21.914628       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0703 04:30:26.899120       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0703 04:30:26.993843       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.138.228"}
	I0703 04:30:28.050899       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.185.11"}
	I0703 04:30:40.373347       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.207.19"}
	I0703 04:30:44.814188       1 controller.go:615] quota admission added evaluator for: namespaces
	I0703 04:30:45.657648       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.128.190"}
	I0703 04:30:45.787065       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.37.201"}
	
	
	==> kube-controller-manager [a6b795c693f2efdbeff597fe344cacc689f8ef8214a1e30f4d22237ef34105ff] <==
	I0703 04:29:20.567323       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0703 04:29:20.567566       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0703 04:29:20.568216       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0703 04:29:20.570687       1 shared_informer.go:320] Caches are synced for ephemeral
	I0703 04:29:20.572884       1 shared_informer.go:320] Caches are synced for HPA
	I0703 04:29:20.575125       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0703 04:29:20.578135       1 shared_informer.go:320] Caches are synced for expand
	I0703 04:29:20.579401       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0703 04:29:20.580702       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0703 04:29:20.606491       1 shared_informer.go:320] Caches are synced for taint
	I0703 04:29:20.606625       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0703 04:29:20.607319       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-502505"
	I0703 04:29:20.607648       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0703 04:29:20.639498       1 shared_informer.go:320] Caches are synced for persistent volume
	I0703 04:29:20.684524       1 shared_informer.go:320] Caches are synced for PV protection
	I0703 04:29:20.686110       1 shared_informer.go:320] Caches are synced for attach detach
	I0703 04:29:20.723557       1 shared_informer.go:320] Caches are synced for service account
	I0703 04:29:20.750973       1 shared_informer.go:320] Caches are synced for namespace
	I0703 04:29:20.773114       1 shared_informer.go:320] Caches are synced for resource quota
	I0703 04:29:20.777540       1 shared_informer.go:320] Caches are synced for endpoint
	I0703 04:29:20.792142       1 shared_informer.go:320] Caches are synced for resource quota
	I0703 04:29:20.820203       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0703 04:29:21.189195       1 shared_informer.go:320] Caches are synced for garbage collector
	I0703 04:29:21.260747       1 shared_informer.go:320] Caches are synced for garbage collector
	I0703 04:29:21.260926       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [b97699681e7066784af8d12bee3b5135464edf4557bf197ad6639d50ebcca6bb] <==
	I0703 04:30:45.063491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="174.137026ms"
	E0703 04:30:45.063539       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0703 04:30:45.087174       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="164.352318ms"
	E0703 04:30:45.087385       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0703 04:30:45.118462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="54.125867ms"
	E0703 04:30:45.118508       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0703 04:30:45.156709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="69.003525ms"
	E0703 04:30:45.156780       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0703 04:30:45.160329       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="41.793015ms"
	E0703 04:30:45.160376       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0703 04:30:45.188459       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="28.001816ms"
	E0703 04:30:45.188720       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0703 04:30:45.197485       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="40.673502ms"
	E0703 04:30:45.197526       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0703 04:30:45.200699       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="11.835631ms"
	E0703 04:30:45.200741       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0703 04:30:45.205167       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="7.617178ms"
	E0703 04:30:45.205188       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0703 04:30:45.378062       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="158.99929ms"
	I0703 04:30:45.396031       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="164.108076ms"
	I0703 04:30:45.467604       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="71.401883ms"
	I0703 04:30:45.467680       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="31.869µs"
	I0703 04:30:45.501891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="123.768047ms"
	I0703 04:30:45.501966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="26.705µs"
	I0703 04:30:45.534496       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="91.19µs"
	
	
	==> kube-proxy [2d75fb5db730d72e85ad104eadb239da997af1a3483c2b533c8bf3b7f954ec3f] <==
	I0703 04:30:01.327080       1 server_linux.go:69] "Using iptables proxy"
	I0703 04:30:01.336490       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.7"]
	I0703 04:30:01.375629       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0703 04:30:01.375678       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0703 04:30:01.375695       1 server_linux.go:165] "Using iptables Proxier"
	I0703 04:30:01.378698       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0703 04:30:01.379130       1 server.go:872] "Version info" version="v1.30.2"
	I0703 04:30:01.379161       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 04:30:01.380338       1 config.go:192] "Starting service config controller"
	I0703 04:30:01.380384       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0703 04:30:01.380452       1 config.go:101] "Starting endpoint slice config controller"
	I0703 04:30:01.380457       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0703 04:30:01.381487       1 config.go:319] "Starting node config controller"
	I0703 04:30:01.381513       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0703 04:30:01.481230       1 shared_informer.go:320] Caches are synced for service config
	I0703 04:30:01.481299       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0703 04:30:01.481673       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [8ffcd519e31304c6e70748464e3fd58095226f9a7d59ee4f64c59119c83aadb7] <==
	I0703 04:28:53.252670       1 server_linux.go:69] "Using iptables proxy"
	E0703 04:28:53.260324       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-502505\": dial tcp 192.168.39.7:8441: connect: connection refused"
	E0703 04:28:54.274633       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-502505\": dial tcp 192.168.39.7:8441: connect: connection refused"
	E0703 04:28:56.283899       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-502505\": dial tcp 192.168.39.7:8441: connect: connection refused"
	E0703 04:29:00.960148       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-502505\": dial tcp 192.168.39.7:8441: connect: connection refused"
	I0703 04:29:09.700955       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.7"]
	I0703 04:29:09.735290       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0703 04:29:09.735341       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0703 04:29:09.735358       1 server_linux.go:165] "Using iptables Proxier"
	I0703 04:29:09.737976       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0703 04:29:09.738375       1 server.go:872] "Version info" version="v1.30.2"
	I0703 04:29:09.738723       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 04:29:09.740151       1 config.go:192] "Starting service config controller"
	I0703 04:29:09.740188       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0703 04:29:09.740215       1 config.go:101] "Starting endpoint slice config controller"
	I0703 04:29:09.740240       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0703 04:29:09.740877       1 config.go:319] "Starting node config controller"
	I0703 04:29:09.740907       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0703 04:29:09.840379       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0703 04:29:09.840519       1 shared_informer.go:320] Caches are synced for service config
	I0703 04:29:09.840994       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c8e41c6173772c406e61d6ff4ae97f8c22aba3e1de1f9439658992549b987208] <==
	I0703 04:29:05.992588       1 serving.go:380] Generated self-signed cert in-memory
	I0703 04:29:08.104138       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0703 04:29:08.104290       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 04:29:08.110008       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0703 04:29:08.110266       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0703 04:29:08.110355       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0703 04:29:08.110530       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0703 04:29:08.112500       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0703 04:29:08.112600       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0703 04:29:08.112624       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0703 04:29:08.112772       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0703 04:29:08.210569       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0703 04:29:08.213166       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0703 04:29:08.213556       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0703 04:29:39.960944       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0703 04:29:39.961081       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0703 04:29:39.961264       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0703 04:29:39.961326       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0703 04:29:39.961348       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
	E0703 04:29:39.961343       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [df86e2c18a48bbc09e7082b9546dc32b019922e5eab63a3c4d24ad60adcbeca4] <==
	I0703 04:29:57.985808       1 serving.go:380] Generated self-signed cert in-memory
	W0703 04:29:59.926879       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0703 04:29:59.926920       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0703 04:29:59.927006       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0703 04:29:59.927013       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0703 04:29:59.983862       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0703 04:29:59.985881       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 04:29:59.989177       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0703 04:29:59.992029       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0703 04:30:00.000251       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0703 04:29:59.992129       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0703 04:30:00.100954       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 03 04:30:30 functional-502505 kubelet[3959]: I0703 04:30:30.856292    3959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-connect-57b4589c47-7njdl" podStartSLOduration=2.183319736 podStartE2EDuration="4.856273103s" podCreationTimestamp="2024-07-03 04:30:26 +0000 UTC" firstStartedPulling="2024-07-03 04:30:27.499671808 +0000 UTC m=+30.960368057" lastFinishedPulling="2024-07-03 04:30:30.172625164 +0000 UTC m=+33.633321424" observedRunningTime="2024-07-03 04:30:30.855470986 +0000 UTC m=+34.316167253" watchObservedRunningTime="2024-07-03 04:30:30.856273103 +0000 UTC m=+34.316969360"
	Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.001936    3959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-6d85cfcfd8-cwf9k" podStartSLOduration=4.024547563 podStartE2EDuration="8.001918572s" podCreationTimestamp="2024-07-03 04:30:27 +0000 UTC" firstStartedPulling="2024-07-03 04:30:28.770833641 +0000 UTC m=+32.231529894" lastFinishedPulling="2024-07-03 04:30:32.748204644 +0000 UTC m=+36.208900903" observedRunningTime="2024-07-03 04:30:33.894551076 +0000 UTC m=+37.355247343" watchObservedRunningTime="2024-07-03 04:30:35.001918572 +0000 UTC m=+38.462614840"
	Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.118114    3959 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/45cbb956-6396-45ea-be8b-0b0b06dcc5f8-test-volume\") pod \"45cbb956-6396-45ea-be8b-0b0b06dcc5f8\" (UID: \"45cbb956-6396-45ea-be8b-0b0b06dcc5f8\") "
	Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.118163    3959 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlgnt\" (UniqueName: \"kubernetes.io/projected/45cbb956-6396-45ea-be8b-0b0b06dcc5f8-kube-api-access-qlgnt\") pod \"45cbb956-6396-45ea-be8b-0b0b06dcc5f8\" (UID: \"45cbb956-6396-45ea-be8b-0b0b06dcc5f8\") "
	Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.118359    3959 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45cbb956-6396-45ea-be8b-0b0b06dcc5f8-test-volume" (OuterVolumeSpecName: "test-volume") pod "45cbb956-6396-45ea-be8b-0b0b06dcc5f8" (UID: "45cbb956-6396-45ea-be8b-0b0b06dcc5f8"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.120675    3959 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45cbb956-6396-45ea-be8b-0b0b06dcc5f8-kube-api-access-qlgnt" (OuterVolumeSpecName: "kube-api-access-qlgnt") pod "45cbb956-6396-45ea-be8b-0b0b06dcc5f8" (UID: "45cbb956-6396-45ea-be8b-0b0b06dcc5f8"). InnerVolumeSpecName "kube-api-access-qlgnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.219273    3959 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qlgnt\" (UniqueName: \"kubernetes.io/projected/45cbb956-6396-45ea-be8b-0b0b06dcc5f8-kube-api-access-qlgnt\") on node \"functional-502505\" DevicePath \"\""
	Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.219605    3959 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/45cbb956-6396-45ea-be8b-0b0b06dcc5f8-test-volume\") on node \"functional-502505\" DevicePath \"\""
	Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.385557    3959 topology_manager.go:215] "Topology Admit Handler" podUID="6364f737-43b9-4e1f-a857-b6edb68c8b98" podNamespace="default" podName="sp-pod"
	Jul 03 04:30:35 functional-502505 kubelet[3959]: E0703 04:30:35.385740    3959 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45cbb956-6396-45ea-be8b-0b0b06dcc5f8" containerName="mount-munger"
	Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.385796    3959 memory_manager.go:354] "RemoveStaleState removing state" podUID="45cbb956-6396-45ea-be8b-0b0b06dcc5f8" containerName="mount-munger"
	Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.522379    3959 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b0108cbd-122c-40dd-9f09-62f07633b3cd\" (UniqueName: \"kubernetes.io/host-path/6364f737-43b9-4e1f-a857-b6edb68c8b98-pvc-b0108cbd-122c-40dd-9f09-62f07633b3cd\") pod \"sp-pod\" (UID: \"6364f737-43b9-4e1f-a857-b6edb68c8b98\") " pod="default/sp-pod"
	Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.522814    3959 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mln4c\" (UniqueName: \"kubernetes.io/projected/6364f737-43b9-4e1f-a857-b6edb68c8b98-kube-api-access-mln4c\") pod \"sp-pod\" (UID: \"6364f737-43b9-4e1f-a857-b6edb68c8b98\") " pod="default/sp-pod"
	Jul 03 04:30:35 functional-502505 kubelet[3959]: I0703 04:30:35.875937    3959 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3a607b98ddc1d5d56db7846a3f538166a3d22ed1baffb81bdbdf98628e7ff8d"
	Jul 03 04:30:40 functional-502505 kubelet[3959]: I0703 04:30:40.505881    3959 topology_manager.go:215] "Topology Admit Handler" podUID="9677e95a-f370-4c57-8eb2-e7e44dd91562" podNamespace="default" podName="mysql-64454c8b5c-x9ns2"
	Jul 03 04:30:40 functional-502505 kubelet[3959]: I0703 04:30:40.663918    3959 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6kkg\" (UniqueName: \"kubernetes.io/projected/9677e95a-f370-4c57-8eb2-e7e44dd91562-kube-api-access-z6kkg\") pod \"mysql-64454c8b5c-x9ns2\" (UID: \"9677e95a-f370-4c57-8eb2-e7e44dd91562\") " pod="default/mysql-64454c8b5c-x9ns2"
	Jul 03 04:30:45 functional-502505 kubelet[3959]: I0703 04:30:45.389571    3959 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=4.141531456 podStartE2EDuration="10.389553893s" podCreationTimestamp="2024-07-03 04:30:35 +0000 UTC" firstStartedPulling="2024-07-03 04:30:35.89260554 +0000 UTC m=+39.353301789" lastFinishedPulling="2024-07-03 04:30:42.140627965 +0000 UTC m=+45.601324226" observedRunningTime="2024-07-03 04:30:42.916535786 +0000 UTC m=+46.377232055" watchObservedRunningTime="2024-07-03 04:30:45.389553893 +0000 UTC m=+48.850250161"
	Jul 03 04:30:45 functional-502505 kubelet[3959]: I0703 04:30:45.389933    3959 topology_manager.go:215] "Topology Admit Handler" podUID="e6761bd4-0a9c-40f0-bc6e-b6455a5a7b9c" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-mnh76"
	Jul 03 04:30:45 functional-502505 kubelet[3959]: I0703 04:30:45.392577    3959 topology_manager.go:215] "Topology Admit Handler" podUID="ceb7d87e-e07a-4c85-b378-65b5ef7814a9" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-vktg7"
	Jul 03 04:30:45 functional-502505 kubelet[3959]: W0703 04:30:45.422592    3959 reflector.go:547] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-502505" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'functional-502505' and this object
	Jul 03 04:30:45 functional-502505 kubelet[3959]: E0703 04:30:45.422699    3959 reflector.go:150] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-502505" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'functional-502505' and this object
	Jul 03 04:30:45 functional-502505 kubelet[3959]: I0703 04:30:45.498637    3959 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpsgl\" (UniqueName: \"kubernetes.io/projected/e6761bd4-0a9c-40f0-bc6e-b6455a5a7b9c-kube-api-access-bpsgl\") pod \"kubernetes-dashboard-779776cb65-mnh76\" (UID: \"e6761bd4-0a9c-40f0-bc6e-b6455a5a7b9c\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-mnh76"
	Jul 03 04:30:45 functional-502505 kubelet[3959]: I0703 04:30:45.498697    3959 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ceb7d87e-e07a-4c85-b378-65b5ef7814a9-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-vktg7\" (UID: \"ceb7d87e-e07a-4c85-b378-65b5ef7814a9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-vktg7"
	Jul 03 04:30:45 functional-502505 kubelet[3959]: I0703 04:30:45.498722    3959 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbjkc\" (UniqueName: \"kubernetes.io/projected/ceb7d87e-e07a-4c85-b378-65b5ef7814a9-kube-api-access-kbjkc\") pod \"dashboard-metrics-scraper-b5fc48f67-vktg7\" (UID: \"ceb7d87e-e07a-4c85-b378-65b5ef7814a9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-vktg7"
	Jul 03 04:30:45 functional-502505 kubelet[3959]: I0703 04:30:45.498753    3959 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e6761bd4-0a9c-40f0-bc6e-b6455a5a7b9c-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-mnh76\" (UID: \"e6761bd4-0a9c-40f0-bc6e-b6455a5a7b9c\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-mnh76"
	
	
	==> storage-provisioner [4fa84086025c891289da45d161c96d31e29204a01d8499ff745db7ecf20b92aa] <==
	I0703 04:30:01.264214       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0703 04:30:01.276687       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0703 04:30:01.276753       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0703 04:30:18.682191       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0703 04:30:18.682585       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-502505_49d754df-956c-4afb-a7e7-b102534e84bb!
	I0703 04:30:18.683492       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a84bf370-0499-476c-9405-a83d581135e6", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-502505_49d754df-956c-4afb-a7e7-b102534e84bb became leader
	I0703 04:30:18.783841       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-502505_49d754df-956c-4afb-a7e7-b102534e84bb!
	I0703 04:30:32.535550       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0703 04:30:32.536594       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    fc4694f2-480d-42f7-95de-3178fadbbf36 383 0 2024-07-03 04:28:38 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-03 04:28:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-b0108cbd-122c-40dd-9f09-62f07633b3cd &PersistentVolumeClaim{ObjectMeta:{myclaim  default  b0108cbd-122c-40dd-9f09-62f07633b3cd 705 0 2024-07-03 04:30:32 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-03 04:30:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-03 04:30:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0703 04:30:32.537663       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"b0108cbd-122c-40dd-9f09-62f07633b3cd", APIVersion:"v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0703 04:30:32.537967       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-b0108cbd-122c-40dd-9f09-62f07633b3cd" provisioned
	I0703 04:30:32.538012       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0703 04:30:32.538024       1 volume_store.go:212] Trying to save persistentvolume "pvc-b0108cbd-122c-40dd-9f09-62f07633b3cd"
	I0703 04:30:32.601273       1 volume_store.go:219] persistentvolume "pvc-b0108cbd-122c-40dd-9f09-62f07633b3cd" saved
	I0703 04:30:32.604596       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"b0108cbd-122c-40dd-9f09-62f07633b3cd", APIVersion:"v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-b0108cbd-122c-40dd-9f09-62f07633b3cd
	
	
	==> storage-provisioner [e10f91f31df27081e9585ebfaaa185dd7c123f94fa5fd567a5c3e4fb6e0253bb] <==
	I0703 04:29:54.444222       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0703 04:29:54.445969       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-502505 -n functional-502505
helpers_test.go:261: (dbg) Run:  kubectl --context functional-502505 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-64454c8b5c-x9ns2 dashboard-metrics-scraper-b5fc48f67-vktg7 kubernetes-dashboard-779776cb65-mnh76
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-502505 describe pod busybox-mount mysql-64454c8b5c-x9ns2 dashboard-metrics-scraper-b5fc48f67-vktg7 kubernetes-dashboard-779776cb65-mnh76
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-502505 describe pod busybox-mount mysql-64454c8b5c-x9ns2 dashboard-metrics-scraper-b5fc48f67-vktg7 kubernetes-dashboard-779776cb65-mnh76: exit status 1 (77.874083ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-502505/192.168.39.7
	Start Time:       Wed, 03 Jul 2024 04:30:27 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  containerd://3f7350c99caad76816d94685e180db1d2ceaaade213649297229c537f5f1937b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 03 Jul 2024 04:30:32 +0000
	      Finished:     Wed, 03 Jul 2024 04:30:32 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qlgnt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-qlgnt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  21s   default-scheduler  Successfully assigned default/busybox-mount to functional-502505
	  Normal  Pulling    20s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     16s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.485s (3.906s including waiting). Image size: 2395207 bytes.
	  Normal  Created    16s   kubelet            Created container mount-munger
	  Normal  Started    16s   kubelet            Started container mount-munger
	
	
	Name:             mysql-64454c8b5c-x9ns2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-502505/192.168.39.7
	Start Time:       Wed, 03 Jul 2024 04:30:40 +0000
	Labels:           app=mysql
	                  pod-template-hash=64454c8b5c
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-64454c8b5c
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z6kkg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-z6kkg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  8s    default-scheduler  Successfully assigned default/mysql-64454c8b5c-x9ns2 to functional-502505
	  Normal  Pulling    7s    kubelet            Pulling image "docker.io/mysql:5.7"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-b5fc48f67-vktg7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-mnh76" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-502505 describe pod busybox-mount mysql-64454c8b5c-x9ns2 dashboard-metrics-scraper-b5fc48f67-vktg7 kubernetes-dashboard-779776cb65-mnh76: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (5.34s)


Test pass (289/326)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 32.15
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.2/json-events 14.14
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.6
18 TestDownloadOnly/v1.30.2/DeleteAll 0.12
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.55
22 TestOffline 121.69
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 218.52
29 TestAddons/parallel/Registry 16.54
30 TestAddons/parallel/Ingress 21.17
31 TestAddons/parallel/InspektorGadget 10.78
32 TestAddons/parallel/MetricsServer 6.76
33 TestAddons/parallel/HelmTiller 11.49
35 TestAddons/parallel/CSI 40.68
36 TestAddons/parallel/Headlamp 12.14
37 TestAddons/parallel/CloudSpanner 5.7
38 TestAddons/parallel/LocalPath 13.31
39 TestAddons/parallel/NvidiaDevicePlugin 6.53
40 TestAddons/parallel/Yakd 5.01
41 TestAddons/parallel/Volcano 34.4
44 TestAddons/serial/GCPAuth/Namespaces 0.13
45 TestAddons/StoppedEnableDisable 92.67
46 TestCertOptions 71.47
47 TestCertExpiration 323.91
49 TestForceSystemdFlag 97.23
50 TestForceSystemdEnv 45.74
52 TestKVMDriverInstallOrUpdate 5.08
56 TestErrorSpam/setup 43.95
57 TestErrorSpam/start 0.32
58 TestErrorSpam/status 0.7
59 TestErrorSpam/pause 1.5
60 TestErrorSpam/unpause 1.52
61 TestErrorSpam/stop 4.88
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 58.96
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 44.45
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.4
73 TestFunctional/serial/CacheCmd/cache/add_local 2.25
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 43.9
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.39
84 TestFunctional/serial/LogsFileCmd 1.42
85 TestFunctional/serial/InvalidService 4.14
87 TestFunctional/parallel/ConfigCmd 0.3
89 TestFunctional/parallel/DryRun 0.25
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.82
95 TestFunctional/parallel/ServiceCmdConnect 10.44
96 TestFunctional/parallel/AddonsCmd 0.11
97 TestFunctional/parallel/PersistentVolumeClaim 45.97
99 TestFunctional/parallel/SSHCmd 0.41
100 TestFunctional/parallel/CpCmd 1.32
101 TestFunctional/parallel/MySQL 28.01
102 TestFunctional/parallel/FileSync 0.21
103 TestFunctional/parallel/CertSync 1.23
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
111 TestFunctional/parallel/License 0.69
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
113 TestFunctional/parallel/ProfileCmd/profile_list 0.27
120 TestFunctional/parallel/MountCmd/any-port 10.52
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
125 TestFunctional/parallel/Version/short 0.18
126 TestFunctional/parallel/Version/components 0.71
127 TestFunctional/parallel/ServiceCmd/DeployApp 11.25
128 TestFunctional/parallel/MountCmd/specific-port 1.57
129 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
130 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
131 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
132 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
133 TestFunctional/parallel/ImageCommands/ImageBuild 4.44
134 TestFunctional/parallel/ImageCommands/Setup 2.4
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.44
136 TestFunctional/parallel/ServiceCmd/List 0.28
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.84
140 TestFunctional/parallel/ServiceCmd/Format 0.49
141 TestFunctional/parallel/ServiceCmd/URL 0.33
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.37
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.35
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.95
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.06
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.97
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.22
151 TestFunctional/delete_addon-resizer_images 0.06
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 220.59
158 TestMultiControlPlane/serial/DeployApp 5.72
159 TestMultiControlPlane/serial/PingHostFromPods 1.15
160 TestMultiControlPlane/serial/AddWorkerNode 47.54
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
163 TestMultiControlPlane/serial/CopyFile 12.27
164 TestMultiControlPlane/serial/StopSecondaryNode 92.21
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.39
166 TestMultiControlPlane/serial/RestartSecondaryNode 41.17
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.52
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 429.11
169 TestMultiControlPlane/serial/DeleteSecondaryNode 7.72
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.35
171 TestMultiControlPlane/serial/StopCluster 274.57
172 TestMultiControlPlane/serial/RestartCluster 148.21
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
174 TestMultiControlPlane/serial/AddSecondaryNode 67.98
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
179 TestJSONOutput/start/Command 55.23
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.65
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.59
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 6.57
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.18
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 96.58
211 TestMountStart/serial/StartWithMountFirst 28.35
212 TestMountStart/serial/VerifyMountFirst 0.35
213 TestMountStart/serial/StartWithMountSecond 28.33
214 TestMountStart/serial/VerifyMountSecond 0.34
215 TestMountStart/serial/DeleteFirst 0.87
216 TestMountStart/serial/VerifyMountPostDelete 0.35
217 TestMountStart/serial/Stop 1.27
218 TestMountStart/serial/RestartStopped 24.08
219 TestMountStart/serial/VerifyMountPostStop 0.36
222 TestMultiNode/serial/FreshStart2Nodes 102.73
223 TestMultiNode/serial/DeployApp2Nodes 4.85
224 TestMultiNode/serial/PingHostFrom2Pods 0.76
225 TestMultiNode/serial/AddNode 43.16
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.2
228 TestMultiNode/serial/CopyFile 6.86
229 TestMultiNode/serial/StopNode 2.11
230 TestMultiNode/serial/StartAfterStop 24.83
231 TestMultiNode/serial/RestartKeepsNodes 291.78
232 TestMultiNode/serial/DeleteNode 2.27
233 TestMultiNode/serial/StopMultiNode 183.15
234 TestMultiNode/serial/RestartMultiNode 82.13
235 TestMultiNode/serial/ValidateNameConflict 42.61
240 TestPreload 313.57
242 TestScheduledStopUnix 114.2
246 TestRunningBinaryUpgrade 181.54
248 TestKubernetesUpgrade 178.25
257 TestNetworkPlugins/group/false 2.73
268 TestStoppedBinaryUpgrade/Setup 2.66
269 TestStoppedBinaryUpgrade/Upgrade 164.64
271 TestPause/serial/Start 126.02
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.87
274 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
275 TestNoKubernetes/serial/StartWithK8s 48.14
276 TestPause/serial/SecondStartNoReconfiguration 57.05
277 TestNoKubernetes/serial/StartWithStopK8s 39.88
278 TestNetworkPlugins/group/auto/Start 100.49
279 TestPause/serial/Pause 0.81
280 TestPause/serial/VerifyStatus 0.29
281 TestPause/serial/Unpause 1.02
282 TestPause/serial/PauseAgain 1.55
283 TestPause/serial/DeletePaused 0.89
284 TestPause/serial/VerifyDeletedResources 17.97
285 TestNetworkPlugins/group/kindnet/Start 65.65
286 TestNoKubernetes/serial/Start 51.49
287 TestNetworkPlugins/group/calico/Start 129.69
288 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
289 TestNoKubernetes/serial/ProfileList 1.08
290 TestNoKubernetes/serial/Stop 1.34
291 TestNoKubernetes/serial/StartNoArgs 43.99
292 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
293 TestNetworkPlugins/group/kindnet/KubeletFlags 0.18
294 TestNetworkPlugins/group/kindnet/NetCatPod 10.2
295 TestNetworkPlugins/group/auto/KubeletFlags 0.23
296 TestNetworkPlugins/group/auto/NetCatPod 12.34
297 TestNetworkPlugins/group/kindnet/DNS 0.2
298 TestNetworkPlugins/group/kindnet/Localhost 0.34
299 TestNetworkPlugins/group/kindnet/HairPin 0.19
300 TestNetworkPlugins/group/auto/DNS 0.18
301 TestNetworkPlugins/group/auto/Localhost 0.18
302 TestNetworkPlugins/group/auto/HairPin 0.15
303 TestNetworkPlugins/group/custom-flannel/Start 84.09
304 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
305 TestNetworkPlugins/group/enable-default-cni/Start 96.06
306 TestNetworkPlugins/group/flannel/Start 134.02
307 TestNetworkPlugins/group/calico/ControllerPod 6.01
308 TestNetworkPlugins/group/calico/KubeletFlags 0.2
309 TestNetworkPlugins/group/calico/NetCatPod 10.21
310 TestNetworkPlugins/group/calico/DNS 0.16
311 TestNetworkPlugins/group/calico/Localhost 0.16
312 TestNetworkPlugins/group/calico/HairPin 0.14
313 TestNetworkPlugins/group/bridge/Start 77.05
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.42
316 TestNetworkPlugins/group/custom-flannel/DNS 0.17
317 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
318 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
319 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
320 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.22
321 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
322 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
323 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
325 TestStartStop/group/old-k8s-version/serial/FirstStart 170.49
327 TestStartStop/group/no-preload/serial/FirstStart 131
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
330 TestNetworkPlugins/group/flannel/NetCatPod 11.91
331 TestNetworkPlugins/group/bridge/KubeletFlags 0.39
332 TestNetworkPlugins/group/bridge/NetCatPod 10.36
333 TestNetworkPlugins/group/bridge/DNS 0.29
334 TestNetworkPlugins/group/bridge/Localhost 0.12
335 TestNetworkPlugins/group/bridge/HairPin 0.13
336 TestNetworkPlugins/group/flannel/DNS 0.16
337 TestNetworkPlugins/group/flannel/Localhost 0.15
338 TestNetworkPlugins/group/flannel/HairPin 0.15
340 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 105.88
342 TestStartStop/group/newest-cni/serial/FirstStart 85.88
343 TestStartStop/group/no-preload/serial/DeployApp 9.31
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
346 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
347 TestStartStop/group/newest-cni/serial/Stop 2.33
348 TestStartStop/group/no-preload/serial/Stop 91.75
349 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
350 TestStartStop/group/newest-cni/serial/SecondStart 31.75
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
352 TestStartStop/group/old-k8s-version/serial/DeployApp 9.47
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.04
354 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.15
355 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.46
356 TestStartStop/group/old-k8s-version/serial/Stop 91.78
357 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
358 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
359 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.2
360 TestStartStop/group/newest-cni/serial/Pause 2.21
362 TestStartStop/group/embed-certs/serial/FirstStart 58.53
363 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
364 TestStartStop/group/no-preload/serial/SecondStart 317.56
365 TestStartStop/group/embed-certs/serial/DeployApp 10.28
366 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
367 TestStartStop/group/embed-certs/serial/Stop 91.63
368 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
369 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 319.62
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
371 TestStartStop/group/old-k8s-version/serial/SecondStart 450.28
372 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
373 TestStartStop/group/embed-certs/serial/SecondStart 297.55
374 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
375 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
376 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
377 TestStartStop/group/no-preload/serial/Pause 2.6
378 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
380 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
381 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.55
382 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
384 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
385 TestStartStop/group/embed-certs/serial/Pause 2.42
386 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
388 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
389 TestStartStop/group/old-k8s-version/serial/Pause 2.33
TestDownloadOnly/v1.20.0/json-events (32.15s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-472178 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-472178 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (32.14710864s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (32.15s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-472178
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-472178: exit status 85 (54.540681ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-472178 | jenkins | v1.33.1 | 03 Jul 24 04:19 UTC |          |
	|         | -p download-only-472178        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 04:19:37
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 04:19:37.692474   10856 out.go:291] Setting OutFile to fd 1 ...
	I0703 04:19:37.692575   10856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 04:19:37.692583   10856 out.go:304] Setting ErrFile to fd 2...
	I0703 04:19:37.692587   10856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 04:19:37.692782   10856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
	W0703 04:19:37.692898   10856 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19184-3680/.minikube/config/config.json: open /home/jenkins/minikube-integration/19184-3680/.minikube/config/config.json: no such file or directory
	I0703 04:19:37.693422   10856 out.go:298] Setting JSON to true
	I0703 04:19:37.694234   10856 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":122,"bootTime":1719980256,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 04:19:37.694287   10856 start.go:139] virtualization: kvm guest
	I0703 04:19:37.696708   10856 out.go:97] [download-only-472178] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0703 04:19:37.696798   10856 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/19184-3680/.minikube/cache/preloaded-tarball: no such file or directory
	I0703 04:19:37.696834   10856 notify.go:220] Checking for updates...
	I0703 04:19:37.698100   10856 out.go:169] MINIKUBE_LOCATION=19184
	I0703 04:19:37.699669   10856 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 04:19:37.700888   10856 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19184-3680/kubeconfig
	I0703 04:19:37.702162   10856 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19184-3680/.minikube
	I0703 04:19:37.703400   10856 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0703 04:19:37.705795   10856 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0703 04:19:37.706009   10856 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 04:19:37.804962   10856 out.go:97] Using the kvm2 driver based on user configuration
	I0703 04:19:37.804999   10856 start.go:297] selected driver: kvm2
	I0703 04:19:37.805009   10856 start.go:901] validating driver "kvm2" against <nil>
	I0703 04:19:37.805359   10856 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 04:19:37.805472   10856 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19184-3680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 04:19:37.819611   10856 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 04:19:37.819650   10856 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 04:19:37.820139   10856 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0703 04:19:37.820311   10856 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0703 04:19:37.820384   10856 cni.go:84] Creating CNI manager for ""
	I0703 04:19:37.820398   10856 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0703 04:19:37.820407   10856 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0703 04:19:37.820471   10856 start.go:340] cluster config:
	{Name:download-only-472178 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-472178 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 04:19:37.820691   10856 iso.go:125] acquiring lock: {Name:mkf0d872d521e896a15b41926cb00e9c68bb4018 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 04:19:37.822402   10856 out.go:97] Downloading VM boot image ...
	I0703 04:19:37.822437   10856 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19184-3680/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 04:19:48.239742   10856 out.go:97] Starting "download-only-472178" primary control-plane node in "download-only-472178" cluster
	I0703 04:19:48.239767   10856 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0703 04:19:48.354297   10856 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0703 04:19:48.354330   10856 cache.go:56] Caching tarball of preloaded images
	I0703 04:19:48.354480   10856 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0703 04:19:48.356247   10856 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0703 04:19:48.356264   10856 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0703 04:19:48.468890   10856 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/19184-3680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0703 04:20:03.233212   10856 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0703 04:20:03.234091   10856 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/19184-3680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0703 04:20:04.136047   10856 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0703 04:20:04.136365   10856 profile.go:143] Saving config to /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/download-only-472178/config.json ...
	I0703 04:20:04.136396   10856 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/download-only-472178/config.json: {Name:mkf9c3a5b1a386f39bcaad05946cf183433572d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 04:20:04.136568   10856 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0703 04:20:04.136765   10856 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19184-3680/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-472178 host does not exist
	  To start a cluster, run: "minikube start -p download-only-472178"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-472178
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.2/json-events (14.14s)

=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-409563 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-409563 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (14.137038018s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (14.14s)

TestDownloadOnly/v1.30.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.2/LogsDuration (0.6s)

=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-409563
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-409563: exit status 85 (598.57878ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-472178 | jenkins | v1.33.1 | 03 Jul 24 04:19 UTC |                     |
	|         | -p download-only-472178        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 03 Jul 24 04:20 UTC | 03 Jul 24 04:20 UTC |
	| delete  | -p download-only-472178        | download-only-472178 | jenkins | v1.33.1 | 03 Jul 24 04:20 UTC | 03 Jul 24 04:20 UTC |
	| start   | -o=json --download-only        | download-only-409563 | jenkins | v1.33.1 | 03 Jul 24 04:20 UTC |                     |
	|         | -p download-only-409563        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 04:20:10
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 04:20:10.151310   11144 out.go:291] Setting OutFile to fd 1 ...
	I0703 04:20:10.151418   11144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 04:20:10.151427   11144 out.go:304] Setting ErrFile to fd 2...
	I0703 04:20:10.151432   11144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 04:20:10.151658   11144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
	I0703 04:20:10.152243   11144 out.go:298] Setting JSON to true
	I0703 04:20:10.153064   11144 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":154,"bootTime":1719980256,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 04:20:10.153123   11144 start.go:139] virtualization: kvm guest
	I0703 04:20:10.155182   11144 out.go:97] [download-only-409563] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 04:20:10.155303   11144 notify.go:220] Checking for updates...
	I0703 04:20:10.156962   11144 out.go:169] MINIKUBE_LOCATION=19184
	I0703 04:20:10.158295   11144 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 04:20:10.159534   11144 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19184-3680/kubeconfig
	I0703 04:20:10.160768   11144 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19184-3680/.minikube
	I0703 04:20:10.162129   11144 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0703 04:20:10.164500   11144 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0703 04:20:10.164720   11144 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 04:20:10.195467   11144 out.go:97] Using the kvm2 driver based on user configuration
	I0703 04:20:10.195510   11144 start.go:297] selected driver: kvm2
	I0703 04:20:10.195529   11144 start.go:901] validating driver "kvm2" against <nil>
	I0703 04:20:10.195870   11144 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 04:20:10.195954   11144 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19184-3680/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 04:20:10.210193   11144 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 04:20:10.210241   11144 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 04:20:10.210722   11144 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0703 04:20:10.210862   11144 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0703 04:20:10.210937   11144 cni.go:84] Creating CNI manager for ""
	I0703 04:20:10.210952   11144 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0703 04:20:10.210960   11144 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0703 04:20:10.211013   11144 start.go:340] cluster config:
	{Name:download-only-409563 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-409563 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 04:20:10.211104   11144 iso.go:125] acquiring lock: {Name:mkf0d872d521e896a15b41926cb00e9c68bb4018 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 04:20:10.212712   11144 out.go:97] Starting "download-only-409563" primary control-plane node in "download-only-409563" cluster
	I0703 04:20:10.212751   11144 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime containerd
	I0703 04:20:10.319996   11144 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4
	I0703 04:20:10.320031   11144 cache.go:56] Caching tarball of preloaded images
	I0703 04:20:10.320174   11144 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime containerd
	I0703 04:20:10.322170   11144 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0703 04:20:10.322200   11144 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4 ...
	I0703 04:20:10.439575   11144 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:a69e65264a76d4a498a2c6efe8e151d6 -> /home/jenkins/minikube-integration/19184-3680/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-409563 host does not exist
	  To start a cluster, run: "minikube start -p download-only-409563"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.60s)

TestDownloadOnly/v1.30.2/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.12s)

TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-409563
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-626701 --alsologtostderr --binary-mirror http://127.0.0.1:43261 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-626701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-626701
--- PASS: TestBinaryMirror (0.55s)

TestOffline (121.69s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-993646 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-993646 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (2m0.648295904s)
helpers_test.go:175: Cleaning up "offline-containerd-993646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-993646
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-993646: (1.04006387s)
--- PASS: TestOffline (121.69s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-832832
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-832832: exit status 85 (50.234503ms)

-- stdout --
	* Profile "addons-832832" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-832832"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-832832
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-832832: exit status 85 (50.111975ms)

-- stdout --
	* Profile "addons-832832" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-832832"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (218.52s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-832832 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-832832 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m38.5210357s)
--- PASS: TestAddons/Setup (218.52s)

TestAddons/parallel/Registry (16.54s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 34.182262ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9v594" [03007da1-09b8-467e-b211-8a01f0044006] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010143099s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-skxc6" [56a403f7-b6a9-44f6-8b04-b563db031223] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005862608s
addons_test.go:342: (dbg) Run:  kubectl --context addons-832832 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-832832 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-832832 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.710093807s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-832832 ip
2024/07/03 04:24:20 [DEBUG] GET http://192.168.39.179:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-832832 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.54s)

TestAddons/parallel/Ingress (21.17s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-832832 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-832832 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-832832 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ac883948-46e3-4623-852b-2d166a236f84] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ac883948-46e3-4623-852b-2d166a236f84] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003334744s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-832832 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-832832 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-832832 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.179
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-832832 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-832832 addons disable ingress-dns --alsologtostderr -v=1: (2.089626282s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-832832 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-832832 addons disable ingress --alsologtostderr -v=1: (7.897835771s)
--- PASS: TestAddons/parallel/Ingress (21.17s)

TestAddons/parallel/InspektorGadget (10.78s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nmvs7" [1db136de-e9c8-4b21-b3bb-9f35d399a21b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004256329s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-832832
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-832832: (5.776491908s)
--- PASS: TestAddons/parallel/InspektorGadget (10.78s)

TestAddons/parallel/MetricsServer (6.76s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.107301ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-fpmwr" [c4ef1a69-f60b-4a64-9931-b9feb07ec22b] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004826817s
addons_test.go:417: (dbg) Run:  kubectl --context addons-832832 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-832832 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.76s)

TestAddons/parallel/HelmTiller (11.49s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 34.126813ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-mtmpx" [8fd80c12-0515-40b4-aa60-2363e63d97c0] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.007309345s
addons_test.go:475: (dbg) Run:  kubectl --context addons-832832 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-832832 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.819967391s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-832832 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.49s)

TestAddons/parallel/CSI (40.68s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 4.579013ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-832832 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-832832 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5e5f5eeb-e53f-42c7-aa1b-20544bbfca94] Pending
helpers_test.go:344: "task-pv-pod" [5e5f5eeb-e53f-42c7-aa1b-20544bbfca94] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5e5f5eeb-e53f-42c7-aa1b-20544bbfca94] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003560252s
addons_test.go:586: (dbg) Run:  kubectl --context addons-832832 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-832832 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-832832 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-832832 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-832832 delete pod task-pv-pod: (1.255707535s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-832832 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-832832 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-832832 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [041a3d36-0b76-4220-8e25-28dc8dadc512] Pending
helpers_test.go:344: "task-pv-pod-restore" [041a3d36-0b76-4220-8e25-28dc8dadc512] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [041a3d36-0b76-4220-8e25-28dc8dadc512] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.006957871s
addons_test.go:628: (dbg) Run:  kubectl --context addons-832832 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-832832 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-832832 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-832832 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-832832 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.800218574s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-832832 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.68s)

TestAddons/parallel/Headlamp (12.14s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-832832 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-832832 --alsologtostderr -v=1: (1.133225581s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-g4lzc" [4ddec8f5-d1d2-4ca4-9450-e06d87d10494] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-g4lzc" [4ddec8f5-d1d2-4ca4-9450-e06d87d10494] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004669838s
--- PASS: TestAddons/parallel/Headlamp (12.14s)

TestAddons/parallel/CloudSpanner (5.7s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-fnbfg" [7c885675-0904-4a55-bd9d-6765935fc8d5] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003947618s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-832832
--- PASS: TestAddons/parallel/CloudSpanner (5.70s)

TestAddons/parallel/LocalPath (13.31s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-832832 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-832832 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-832832 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [27f114ab-0109-4f01-85ed-018d49aa6e7e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [27f114ab-0109-4f01-85ed-018d49aa6e7e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [27f114ab-0109-4f01-85ed-018d49aa6e7e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004275794s
addons_test.go:992: (dbg) Run:  kubectl --context addons-832832 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-832832 ssh "cat /opt/local-path-provisioner/pvc-11aecce5-b76f-4609-8cba-5edee9eadaba_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-832832 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-832832 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-832832 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (13.31s)

TestAddons/parallel/NvidiaDevicePlugin (6.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xmktx" [735a0d97-26ce-4ec5-9baf-0a290dd61a43] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00598689s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-832832
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-65ftg" [c56a74f4-dfc2-49a6-a540-99a847f83ad9] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.009146003s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/parallel/Volcano (34.4s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 4.740257ms
addons_test.go:889: volcano-scheduler stabilized in 5.487793ms
addons_test.go:897: volcano-admission stabilized in 8.657946ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-gg665" [74119b51-2b09-493f-8c69-0b603123e6e2] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.005840183s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-mlgqf" [c154ffff-ad61-48e9-8afc-13708a7675eb] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.004303283s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-krx79" [d2210126-eefc-49ed-9feb-f1f08e76fe82] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.004880757s
addons_test.go:924: (dbg) Run:  kubectl --context addons-832832 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-832832 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-832832 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a813b2b3-3ff2-4e95-ba3e-ba06ef84bfe3] Pending
helpers_test.go:344: "test-job-nginx-0" [a813b2b3-3ff2-4e95-ba3e-ba06ef84bfe3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a813b2b3-3ff2-4e95-ba3e-ba06ef84bfe3] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 8.004130079s
addons_test.go:960: (dbg) Run:  out/minikube-linux-amd64 -p addons-832832 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-linux-amd64 -p addons-832832 addons disable volcano --alsologtostderr -v=1: (11.057239226s)
--- PASS: TestAddons/parallel/Volcano (34.40s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-832832 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-832832 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (92.67s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-832832
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-832832: (1m32.405222915s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-832832
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-832832
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-832832
--- PASS: TestAddons/StoppedEnableDisable (92.67s)

TestCertOptions (71.47s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-235210 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-235210 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m10.171568816s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-235210 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-235210 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-235210 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-235210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-235210
--- PASS: TestCertOptions (71.47s)

TestCertExpiration (323.91s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-754097 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
E0703 05:18:47.491460   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 05:19:04.445067   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-754097 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m53.606500179s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-754097 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-754097 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (29.528002028s)
helpers_test.go:175: Cleaning up "cert-expiration-754097" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-754097
--- PASS: TestCertExpiration (323.91s)

TestForceSystemdFlag (97.23s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-099672 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-099672 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m36.242798069s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-099672 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-099672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-099672
--- PASS: TestForceSystemdFlag (97.23s)

TestForceSystemdEnv (45.74s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-401578 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-401578 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (44.579249919s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-401578 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-401578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-401578
--- PASS: TestForceSystemdEnv (45.74s)

TestKVMDriverInstallOrUpdate (5.08s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
E0703 05:24:04.445605   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (5.08s)

TestErrorSpam/setup (43.95s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-095937 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-095937 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-095937 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-095937 --driver=kvm2  --container-runtime=containerd: (43.953755518s)
--- PASS: TestErrorSpam/setup (43.95s)

TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.7s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 status
--- PASS: TestErrorSpam/status (0.70s)

TestErrorSpam/pause (1.5s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 pause
--- PASS: TestErrorSpam/pause (1.50s)

TestErrorSpam/unpause (1.52s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 unpause
--- PASS: TestErrorSpam/unpause (1.52s)

TestErrorSpam/stop (4.88s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 stop: (1.429707256s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 stop: (1.908417202s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-095937 --log_dir /tmp/nospam-095937 stop: (1.537396849s)
--- PASS: TestErrorSpam/stop (4.88s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19184-3680/.minikube/files/etc/test/nested/copy/10844/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (58.96s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-502505 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-502505 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (58.964190529s)
--- PASS: TestFunctional/serial/StartWithProxy (58.96s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (44.45s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-502505 --alsologtostderr -v=8
E0703 04:29:04.444880   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 04:29:04.450636   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 04:29:04.460930   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 04:29:04.481196   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 04:29:04.521524   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 04:29:04.601867   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 04:29:04.762296   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 04:29:05.083099   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 04:29:05.723635   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 04:29:07.004148   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 04:29:09.564676   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 04:29:14.685781   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 04:29:24.926485   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-502505 --alsologtostderr -v=8: (44.44870516s)
functional_test.go:659: soft start took 44.449392021s for "functional-502505" cluster.
--- PASS: TestFunctional/serial/SoftStart (44.45s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-502505 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-502505 cache add registry.k8s.io/pause:3.1: (1.082325539s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-502505 cache add registry.k8s.io/pause:3.3: (1.2596134s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-502505 cache add registry.k8s.io/pause:latest: (1.057890858s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.40s)

TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-502505 /tmp/TestFunctionalserialCacheCmdcacheadd_local1273385098/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 cache add minikube-local-cache-test:functional-502505
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-502505 cache add minikube-local-cache-test:functional-502505: (1.943991029s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 cache delete minikube-local-cache-test:functional-502505
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-502505
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-502505 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (200.606617ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-502505 cache reload: (1.015814832s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 kubectl -- --context functional-502505 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-502505 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (43.9s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-502505 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0703 04:29:45.407389   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-502505 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.899477837s)
functional_test.go:757: restart took 43.899569416s for "functional-502505" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.90s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-502505 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.39s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-502505 logs: (1.389066922s)
--- PASS: TestFunctional/serial/LogsCmd (1.39s)

TestFunctional/serial/LogsFileCmd (1.42s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 logs --file /tmp/TestFunctionalserialLogsFileCmd3463923085/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-502505 logs --file /tmp/TestFunctionalserialLogsFileCmd3463923085/001/logs.txt: (1.415037684s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

TestFunctional/serial/InvalidService (4.14s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-502505 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-502505
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-502505: exit status 115 (271.709573ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.7:32523 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-502505 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.14s)

TestFunctional/parallel/ConfigCmd (0.3s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-502505 config get cpus: exit status 14 (44.617126ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-502505 config get cpus: exit status 14 (46.085626ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)

TestFunctional/parallel/DryRun (0.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-502505 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-502505 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (129.018544ms)
-- stdout --
	* [functional-502505] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19184
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19184-3680/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19184-3680/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
-- /stdout --
** stderr ** 
	I0703 04:30:43.383701   19236 out.go:291] Setting OutFile to fd 1 ...
	I0703 04:30:43.384157   19236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 04:30:43.384204   19236 out.go:304] Setting ErrFile to fd 2...
	I0703 04:30:43.384221   19236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 04:30:43.384732   19236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
	I0703 04:30:43.385678   19236 out.go:298] Setting JSON to false
	I0703 04:30:43.386723   19236 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":787,"bootTime":1719980256,"procs":283,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 04:30:43.386785   19236 start.go:139] virtualization: kvm guest
	I0703 04:30:43.388492   19236 out.go:177] * [functional-502505] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 04:30:43.390267   19236 notify.go:220] Checking for updates...
	I0703 04:30:43.390285   19236 out.go:177]   - MINIKUBE_LOCATION=19184
	I0703 04:30:43.391589   19236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 04:30:43.392894   19236 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19184-3680/kubeconfig
	I0703 04:30:43.394060   19236 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19184-3680/.minikube
	I0703 04:30:43.395225   19236 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 04:30:43.396663   19236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 04:30:43.398657   19236 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0703 04:30:43.399120   19236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:30:43.399188   19236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:30:43.414858   19236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41289
	I0703 04:30:43.415240   19236 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:30:43.415789   19236 main.go:141] libmachine: Using API Version  1
	I0703 04:30:43.415804   19236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:30:43.416138   19236 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:30:43.416334   19236 main.go:141] libmachine: (functional-502505) Calling .DriverName
	I0703 04:30:43.416574   19236 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 04:30:43.416918   19236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:30:43.416960   19236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:30:43.431200   19236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0703 04:30:43.431602   19236 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:30:43.432049   19236 main.go:141] libmachine: Using API Version  1
	I0703 04:30:43.432072   19236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:30:43.432429   19236 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:30:43.432618   19236 main.go:141] libmachine: (functional-502505) Calling .DriverName
	I0703 04:30:43.465470   19236 out.go:177] * Using the kvm2 driver based on existing profile
	I0703 04:30:43.466935   19236 start.go:297] selected driver: kvm2
	I0703 04:30:43.466958   19236 start.go:901] validating driver "kvm2" against &{Name:functional-502505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-502505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 04:30:43.467080   19236 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 04:30:43.469274   19236 out.go:177] 
	W0703 04:30:43.470608   19236 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0703 04:30:43.472113   19236 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-502505 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.25s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-502505 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-502505 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (142.049027ms)
-- stdout --
	* [functional-502505] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19184
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19184-3680/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19184-3680/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
-- /stdout --
** stderr ** 
	I0703 04:30:26.210032   17525 out.go:291] Setting OutFile to fd 1 ...
	I0703 04:30:26.210304   17525 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 04:30:26.210313   17525 out.go:304] Setting ErrFile to fd 2...
	I0703 04:30:26.210317   17525 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 04:30:26.210611   17525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
	I0703 04:30:26.211156   17525 out.go:298] Setting JSON to false
	I0703 04:30:26.212141   17525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":770,"bootTime":1719980256,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 04:30:26.212201   17525 start.go:139] virtualization: kvm guest
	I0703 04:30:26.214606   17525 out.go:177] * [functional-502505] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0703 04:30:26.216235   17525 notify.go:220] Checking for updates...
	I0703 04:30:26.216254   17525 out.go:177]   - MINIKUBE_LOCATION=19184
	I0703 04:30:26.218175   17525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 04:30:26.220149   17525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19184-3680/kubeconfig
	I0703 04:30:26.221539   17525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19184-3680/.minikube
	I0703 04:30:26.222943   17525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 04:30:26.224331   17525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 04:30:26.225849   17525 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0703 04:30:26.226216   17525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:30:26.226295   17525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:30:26.242723   17525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33209
	I0703 04:30:26.243388   17525 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:30:26.243972   17525 main.go:141] libmachine: Using API Version  1
	I0703 04:30:26.243991   17525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:30:26.244337   17525 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:30:26.244520   17525 main.go:141] libmachine: (functional-502505) Calling .DriverName
	I0703 04:30:26.244770   17525 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 04:30:26.245183   17525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:30:26.245219   17525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:30:26.259687   17525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38205
	I0703 04:30:26.260095   17525 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:30:26.260656   17525 main.go:141] libmachine: Using API Version  1
	I0703 04:30:26.260681   17525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:30:26.261072   17525 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:30:26.261300   17525 main.go:141] libmachine: (functional-502505) Calling .DriverName
	I0703 04:30:26.298649   17525 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0703 04:30:26.300028   17525 start.go:297] selected driver: kvm2
	I0703 04:30:26.300042   17525 start.go:901] validating driver "kvm2" against &{Name:functional-502505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-502505 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 04:30:26.300161   17525 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 04:30:26.302405   17525 out.go:177] 
	W0703 04:30:26.303872   17525 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0703 04:30:26.305227   17525 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.82s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.82s)

TestFunctional/parallel/ServiceCmdConnect (10.44s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-502505 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-502505 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-7njdl" [87fb29fd-4e5f-482e-a9db-23faa5fe7e79] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-7njdl" [87fb29fd-4e5f-482e-a9db-23faa5fe7e79] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004647714s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.7:32687
functional_test.go:1671: http://192.168.39.7:32687: success! body:

Hostname: hello-node-connect-57b4589c47-7njdl

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.7:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.7:32687
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.44s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (45.97s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e2da6074-f4ce-4e61-8092-e734bae60c0f] Running
E0703 04:30:26.368584   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004366041s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-502505 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-502505 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-502505 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-502505 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-502505 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6364f737-43b9-4e1f-a857-b6edb68c8b98] Pending
helpers_test.go:344: "sp-pod" [6364f737-43b9-4e1f-a857-b6edb68c8b98] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6364f737-43b9-4e1f-a857-b6edb68c8b98] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.005515125s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-502505 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-502505 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-502505 delete -f testdata/storage-provisioner/pod.yaml: (1.461180478s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-502505 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dd3ebb9d-adc9-4533-a382-861b5e87efe6] Pending
helpers_test.go:344: "sp-pod" [dd3ebb9d-adc9-4533-a382-861b5e87efe6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dd3ebb9d-adc9-4533-a382-861b5e87efe6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.003966229s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-502505 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.97s)

TestFunctional/parallel/SSHCmd (0.41s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

TestFunctional/parallel/CpCmd (1.32s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh -n functional-502505 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 cp functional-502505:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd55629243/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh -n functional-502505 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh -n functional-502505 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.32s)

                                                
                                    
TestFunctional/parallel/MySQL (28.01s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-502505 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-x9ns2" [9677e95a-f370-4c57-8eb2-e7e44dd91562] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-x9ns2" [9677e95a-f370-4c57-8eb2-e7e44dd91562] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.251551844s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-502505 exec mysql-64454c8b5c-x9ns2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-502505 exec mysql-64454c8b5c-x9ns2 -- mysql -ppassword -e "show databases;": exit status 1 (356.017111ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-502505 exec mysql-64454c8b5c-x9ns2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-502505 exec mysql-64454c8b5c-x9ns2 -- mysql -ppassword -e "show databases;": exit status 1 (177.203911ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-502505 exec mysql-64454c8b5c-x9ns2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-502505 exec mysql-64454c8b5c-x9ns2 -- mysql -ppassword -e "show databases;": exit status 1 (202.522421ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-502505 exec mysql-64454c8b5c-x9ns2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.01s)
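The three non-zero exits above are the normal startup sequence for the mysql image: first the server socket does not exist yet (ERROR 2002), then the entrypoint's temporary init server rejects the real credentials (ERROR 1045), and finally the query succeeds. The test simply retries the exec until one attempt exits 0. A minimal sketch of that retry loop, with hypothetical names and timings (this is not minikube's actual implementation):

```python
import time

def wait_for_query(run_query, timeout=600.0, interval=2.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Retry run_query() until one attempt exits 0 or the deadline passes.

    run_query returns (exit_code, output). Transient non-zero exits, like
    MySQL's 2002 (socket not up) and 1045 (init-time auth) in the log
    above, are swallowed and retried.
    """
    deadline = clock() + timeout
    while True:
        code, output = run_query()
        if code == 0:
            return output
        if clock() >= deadline:
            raise TimeoutError(f"query never succeeded; last exit {code}: {output}")
        sleep(interval)

# Simulated startup: fail the way mysqld does while initializing, then succeed.
attempts = iter([(1, "ERROR 2002 (HY000)"), (1, "ERROR 1045 (28000)"),
                 (0, "information_schema")])
print(wait_for_query(lambda: next(attempts), interval=0))  # information_schema
```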

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/10844/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "sudo cat /etc/test/nested/copy/10844/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
TestFunctional/parallel/CertSync (1.23s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/10844.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "sudo cat /etc/ssl/certs/10844.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/10844.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "sudo cat /usr/share/ca-certificates/10844.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/108442.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "sudo cat /etc/ssl/certs/108442.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/108442.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "sudo cat /usr/share/ca-certificates/108442.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.23s)
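The `/etc/ssl/certs/51391683.0` and `/etc/ssl/certs/3ec20f2e.0` paths checked above follow OpenSSL's hash-directory convention (`c_rehash` / `openssl rehash`): each trusted certificate is linked under its 8-hex-digit subject-name hash plus a dot and a collision counter, which is how the synced test certs become trusted system-wide. The naming rule itself is simple; the helper name below is hypothetical:

```python
def rehash_link_name(subject_hash: int, n: int = 0) -> str:
    """OpenSSL hash-directory file name: 8 lowercase hex digits, a dot,
    and a counter that disambiguates hash collisions (almost always 0)."""
    return f"{subject_hash:08x}.{n}"

# The two hash links probed by the CertSync test above.
print(rehash_link_name(0x51391683))  # 51391683.0
print(rehash_link_name(0x3ec20f2e))  # 3ec20f2e.0
```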

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-502505 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-502505 ssh "sudo systemctl is-active docker": exit status 1 (227.263212ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-502505 ssh "sudo systemctl is-active crio": exit status 1 (225.077195ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
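The non-zero exits above are the expected result, not failures: `systemctl is-active` prints the unit state and exits non-zero for anything but `active` (`inactive` exits with status 3, surfaced here as ssh status 3), and with containerd as the cluster runtime both docker and crio must be off. A hypothetical checker capturing that interpretation (not minikube's actual assertion):

```python
def runtime_disabled(stdout: str, exit_code: int) -> bool:
    """Interpret `systemctl is-active <unit>` for a runtime that should be off.

    Exit 0 means the unit is active; a non-zero exit with a non-active
    state on stdout (systemd prints states like "inactive" or "failed")
    means the runtime is not running.
    """
    return exit_code != 0 and stdout.strip() in {"inactive", "failed", "unknown"}

# The docker and crio probes above: stdout "inactive", ssh exit status 3.
print(runtime_disabled("inactive\n", 3))  # True
```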

                                                
                                    
TestFunctional/parallel/License (0.69s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.69s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "227.409169ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "44.29041ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.52s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-502505 /tmp/TestFunctionalparallelMountCmdany-port4244740807/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1719981026310021569" to /tmp/TestFunctionalparallelMountCmdany-port4244740807/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1719981026310021569" to /tmp/TestFunctionalparallelMountCmdany-port4244740807/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1719981026310021569" to /tmp/TestFunctionalparallelMountCmdany-port4244740807/001/test-1719981026310021569
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-502505 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (203.701669ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul  3 04:30 created-by-test
-rw-r--r-- 1 docker docker 24 Jul  3 04:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul  3 04:30 test-1719981026310021569
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh cat /mount-9p/test-1719981026310021569
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-502505 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [45cbb956-6396-45ea-be8b-0b0b06dcc5f8] Pending
helpers_test.go:344: "busybox-mount" [45cbb956-6396-45ea-be8b-0b0b06dcc5f8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [45cbb956-6396-45ea-be8b-0b0b06dcc5f8] Running
helpers_test.go:344: "busybox-mount" [45cbb956-6396-45ea-be8b-0b0b06dcc5f8] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [45cbb956-6396-45ea-be8b-0b0b06dcc5f8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.003682323s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-502505 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-502505 /tmp/TestFunctionalparallelMountCmdany-port4244740807/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.52s)
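The first `findmnt` probe above fails (ssh exit 1) because the 9p mount is not established yet when the mount daemon starts; the test retries the probe rather than failing. The pattern is a plain poll-until-true loop with a deadline, sketched here with hypothetical names:

```python
import time

def wait_for(probe, timeout=30.0, interval=0.5,
             clock=time.monotonic, sleep=time.sleep):
    """Poll probe() until it returns True or the deadline passes.

    This is the shape of the retried `findmnt -T /mount-9p | grep 9p`
    check above: a failed probe while the mount comes up is tolerated.
    """
    deadline = clock() + timeout
    while not probe():
        if clock() >= deadline:
            return False
        sleep(interval)
    return True

probes = iter([False, True])  # first findmnt exits 1, second sees the 9p mount
print(wait_for(lambda: next(probes), interval=0))  # True
```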

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "228.401567ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "42.710873ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

                                                
                                    
TestFunctional/parallel/Version/short (0.18s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 version --short
--- PASS: TestFunctional/parallel/Version/short (0.18s)

                                                
                                    
TestFunctional/parallel/Version/components (0.71s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.71s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-502505 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-502505 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-cwf9k" [3ffb8900-92f6-4b58-8255-fda23defa617] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-cwf9k" [3ffb8900-92f6-4b58-8255-fda23defa617] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.006734451s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.25s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.57s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-502505 /tmp/TestFunctionalparallelMountCmdspecific-port4031701413/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-502505 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (186.467557ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-502505 /tmp/TestFunctionalparallelMountCmdspecific-port4031701413/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-502505 ssh "sudo umount -f /mount-9p": exit status 1 (218.010209ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-502505 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-502505 /tmp/TestFunctionalparallelMountCmdspecific-port4031701413/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.57s)
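The failed `umount -f` at the end is expected: stopping the mount daemon first already removed the mount, and util-linux `umount` then reports "not mounted." with exit status 32 (visible above as ssh status 32). Idempotent cleanup can treat that case as success; a hypothetical sketch of the tolerance the test shows:

```python
def unmount_cleanup_ok(exit_code: int, output: str) -> bool:
    """Idempotent unmount check: success, or the target was already gone.

    util-linux umount exits 32 on failure; when the message says the
    target is not mounted, the desired end state is already reached.
    """
    return exit_code == 0 or (exit_code == 32 and "not mounted" in output)

# The cleanup case above: already unmounted by stopping the daemon first.
print(unmount_cleanup_ok(32, "umount: /mount-9p: not mounted."))  # True
```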

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-502505 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-502505
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-502505
docker.io/kindest/kindnetd:v20240513-cd2ac642
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-502505 image ls --format short --alsologtostderr:
I0703 04:30:59.630415   19955 out.go:291] Setting OutFile to fd 1 ...
I0703 04:30:59.630526   19955 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 04:30:59.630537   19955 out.go:304] Setting ErrFile to fd 2...
I0703 04:30:59.630543   19955 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 04:30:59.630841   19955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
I0703 04:30:59.631590   19955 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 04:30:59.631744   19955 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 04:30:59.632322   19955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:59.632368   19955 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:30:59.646832   19955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46253
I0703 04:30:59.647242   19955 main.go:141] libmachine: () Calling .GetVersion
I0703 04:30:59.647820   19955 main.go:141] libmachine: Using API Version  1
I0703 04:30:59.647843   19955 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:30:59.648210   19955 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:30:59.648396   19955 main.go:141] libmachine: (functional-502505) Calling .GetState
I0703 04:30:59.650432   19955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:59.650465   19955 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:30:59.664447   19955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46875
I0703 04:30:59.664857   19955 main.go:141] libmachine: () Calling .GetVersion
I0703 04:30:59.665586   19955 main.go:141] libmachine: Using API Version  1
I0703 04:30:59.665610   19955 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:30:59.665952   19955 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:30:59.666170   19955 main.go:141] libmachine: (functional-502505) Calling .DriverName
I0703 04:30:59.666366   19955 ssh_runner.go:195] Run: systemctl --version
I0703 04:30:59.666387   19955 main.go:141] libmachine: (functional-502505) Calling .GetSSHHostname
I0703 04:30:59.669862   19955 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:30:59.670378   19955 main.go:141] libmachine: (functional-502505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:3d:1d", ip: ""} in network mk-functional-502505: {Iface:virbr1 ExpiryTime:2024-07-03 05:27:57 +0000 UTC Type:0 Mac:52:54:00:5b:3d:1d Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-502505 Clientid:01:52:54:00:5b:3d:1d}
I0703 04:30:59.670411   19955 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined IP address 192.168.39.7 and MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:30:59.670647   19955 main.go:141] libmachine: (functional-502505) Calling .GetSSHPort
I0703 04:30:59.670834   19955 main.go:141] libmachine: (functional-502505) Calling .GetSSHKeyPath
I0703 04:30:59.670960   19955 main.go:141] libmachine: (functional-502505) Calling .GetSSHUsername
I0703 04:30:59.671095   19955 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19184-3680/.minikube/machines/functional-502505/id_rsa Username:docker}
I0703 04:30:59.784977   19955 ssh_runner.go:195] Run: sudo crictl images --output json
I0703 04:30:59.918475   19955 main.go:141] libmachine: Making call to close driver server
I0703 04:30:59.918487   19955 main.go:141] libmachine: (functional-502505) Calling .Close
I0703 04:30:59.918766   19955 main.go:141] libmachine: (functional-502505) DBG | Closing plugin on server side
I0703 04:30:59.918813   19955 main.go:141] libmachine: Successfully made call to close driver server
I0703 04:30:59.918822   19955 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 04:30:59.918835   19955 main.go:141] libmachine: Making call to close driver server
I0703 04:30:59.918842   19955 main.go:141] libmachine: (functional-502505) Calling .Close
I0703 04:30:59.919087   19955 main.go:141] libmachine: (functional-502505) DBG | Closing plugin on server side
I0703 04:30:59.919138   19955 main.go:141] libmachine: Successfully made call to close driver server
I0703 04:30:59.919158   19955 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-502505 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| docker.io/library/nginx                     | latest             | sha256:fffffc | 71MB   |
| gcr.io/google-containers/addon-resizer      | functional-502505  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| registry.k8s.io/kube-proxy                  | v1.30.2            | sha256:53c535 | 29MB   |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20240513-cd2ac642 | sha256:ac1c61 | 28.2MB |
| docker.io/library/minikube-local-cache-test | functional-502505  | sha256:1f88eb | 989B   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/kube-apiserver              | v1.30.2            | sha256:56ce0f | 32.8MB |
| registry.k8s.io/kube-controller-manager     | v1.30.2            | sha256:e87481 | 31.1MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-scheduler              | v1.30.2            | sha256:7820c8 | 19.3MB |
|---------------------------------------------|--------------------|---------------|--------|
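As an aside for anyone scripting against this report: the table format above is meant for humans, but its columns can still be pulled apart with standard awk. The sketch below is illustrative only (two sample rows are inlined rather than re-running `minikube image ls --format table`):

```shell
# Sketch: extract the image and size columns from the ASCII table that
# `minikube image ls --format table` prints. Sample rows inlined above;
# a real script would pipe the minikube command into awk instead.
table='| registry.k8s.io/pause | 3.1 | sha256:da86e6 | 315kB |
| registry.k8s.io/etcd | 3.5.12-0 | sha256:3861cf | 57.2MB |'
# Split on "|"; field 2 is the image name, field 5 is the size.
printf '%s\n' "$table" | awk -F'|' '{gsub(/ /,"",$2); gsub(/ /,"",$5); print $2, $5}'
```

For anything beyond a quick glance, the `--format json` variant exercised later in this report is the safer interface to script against.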
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-502505 image ls --format table --alsologtostderr:
I0703 04:30:59.971360   20006 out.go:291] Setting OutFile to fd 1 ...
I0703 04:30:59.971647   20006 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 04:30:59.971658   20006 out.go:304] Setting ErrFile to fd 2...
I0703 04:30:59.971664   20006 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 04:30:59.971991   20006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
I0703 04:30:59.972622   20006 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 04:30:59.972751   20006 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 04:30:59.973178   20006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:59.973220   20006 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:30:59.987588   20006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
I0703 04:30:59.988010   20006 main.go:141] libmachine: () Calling .GetVersion
I0703 04:30:59.988809   20006 main.go:141] libmachine: Using API Version  1
I0703 04:30:59.988839   20006 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:30:59.989179   20006 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:30:59.989349   20006 main.go:141] libmachine: (functional-502505) Calling .GetState
I0703 04:30:59.991260   20006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:59.991302   20006 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:31:00.005155   20006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39997
I0703 04:31:00.005663   20006 main.go:141] libmachine: () Calling .GetVersion
I0703 04:31:00.006101   20006 main.go:141] libmachine: Using API Version  1
I0703 04:31:00.006120   20006 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:31:00.006556   20006 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:31:00.006759   20006 main.go:141] libmachine: (functional-502505) Calling .DriverName
I0703 04:31:00.006997   20006 ssh_runner.go:195] Run: systemctl --version
I0703 04:31:00.007024   20006 main.go:141] libmachine: (functional-502505) Calling .GetSSHHostname
I0703 04:31:00.010356   20006 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:31:00.010652   20006 main.go:141] libmachine: (functional-502505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:3d:1d", ip: ""} in network mk-functional-502505: {Iface:virbr1 ExpiryTime:2024-07-03 05:27:57 +0000 UTC Type:0 Mac:52:54:00:5b:3d:1d Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-502505 Clientid:01:52:54:00:5b:3d:1d}
I0703 04:31:00.010683   20006 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined IP address 192.168.39.7 and MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:31:00.010825   20006 main.go:141] libmachine: (functional-502505) Calling .GetSSHPort
I0703 04:31:00.011021   20006 main.go:141] libmachine: (functional-502505) Calling .GetSSHKeyPath
I0703 04:31:00.011190   20006 main.go:141] libmachine: (functional-502505) Calling .GetSSHUsername
I0703 04:31:00.011337   20006 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19184-3680/.minikube/machines/functional-502505/id_rsa Username:docker}
I0703 04:31:00.125497   20006 ssh_runner.go:195] Run: sudo crictl images --output json
I0703 04:31:00.241355   20006 main.go:141] libmachine: Making call to close driver server
I0703 04:31:00.241376   20006 main.go:141] libmachine: (functional-502505) Calling .Close
I0703 04:31:00.241661   20006 main.go:141] libmachine: Successfully made call to close driver server
I0703 04:31:00.241676   20006 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 04:31:00.241685   20006 main.go:141] libmachine: Making call to close driver server
I0703 04:31:00.241692   20006 main.go:141] libmachine: (functional-502505) Calling .Close
I0703 04:31:00.241916   20006 main.go:141] libmachine: Successfully made call to close driver server
I0703 04:31:00.241950   20006 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-502505 image ls --format json --alsologtostderr:
[{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},{"id":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"31138657"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-502505"],"size":"10823156"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:1f88ebc65254dae909abe4612d4f9028ca35c67e118a828bdb55e3ceccfacdde","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-502505"],"size":"989"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":["registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"29034457"},{"id":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"19328121"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f","repoDigests":["docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"28194900"},{"id":"sha256:fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":["docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df"],"repoTags":["docker.io/library/nginx:latest"],"size":"70984068"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":["registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"32768601"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"}]
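The JSON form above is the one intended for machine consumption. Assuming a `jq` binary is available on the host, a filter like the following sketch tabulates tag and size from the same structure (two abbreviated entries are inlined as sample data rather than re-running the command):

```shell
# Sketch: pull repoTags and size out of `minikube image ls --format json`
# output. The "json" variable inlines two abbreviated sample entries;
# a real script would pipe the minikube command into jq instead.
json='[{"id":"sha256:cbb01a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},
       {"id":"sha256:3861cf","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"}]'
printf '%s' "$json" | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'
```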
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-502505 image ls --format json --alsologtostderr:
I0703 04:30:59.971360   20000 out.go:291] Setting OutFile to fd 1 ...
I0703 04:30:59.971611   20000 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 04:30:59.971699   20000 out.go:304] Setting ErrFile to fd 2...
I0703 04:30:59.971733   20000 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 04:30:59.972253   20000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
I0703 04:30:59.972784   20000 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 04:30:59.972880   20000 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 04:30:59.973228   20000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:59.973267   20000 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:30:59.987640   20000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39215
I0703 04:30:59.988036   20000 main.go:141] libmachine: () Calling .GetVersion
I0703 04:30:59.988570   20000 main.go:141] libmachine: Using API Version  1
I0703 04:30:59.988596   20000 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:30:59.988918   20000 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:30:59.989096   20000 main.go:141] libmachine: (functional-502505) Calling .GetState
I0703 04:30:59.991077   20000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:59.991120   20000 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:31:00.006209   20000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42575
I0703 04:31:00.006594   20000 main.go:141] libmachine: () Calling .GetVersion
I0703 04:31:00.007042   20000 main.go:141] libmachine: Using API Version  1
I0703 04:31:00.007067   20000 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:31:00.007436   20000 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:31:00.007665   20000 main.go:141] libmachine: (functional-502505) Calling .DriverName
I0703 04:31:00.007885   20000 ssh_runner.go:195] Run: systemctl --version
I0703 04:31:00.007925   20000 main.go:141] libmachine: (functional-502505) Calling .GetSSHHostname
I0703 04:31:00.010636   20000 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:31:00.011175   20000 main.go:141] libmachine: (functional-502505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:3d:1d", ip: ""} in network mk-functional-502505: {Iface:virbr1 ExpiryTime:2024-07-03 05:27:57 +0000 UTC Type:0 Mac:52:54:00:5b:3d:1d Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-502505 Clientid:01:52:54:00:5b:3d:1d}
I0703 04:31:00.011200   20000 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined IP address 192.168.39.7 and MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:31:00.011470   20000 main.go:141] libmachine: (functional-502505) Calling .GetSSHPort
I0703 04:31:00.011631   20000 main.go:141] libmachine: (functional-502505) Calling .GetSSHKeyPath
I0703 04:31:00.011788   20000 main.go:141] libmachine: (functional-502505) Calling .GetSSHUsername
I0703 04:31:00.011893   20000 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19184-3680/.minikube/machines/functional-502505/id_rsa Username:docker}
I0703 04:31:00.099380   20000 ssh_runner.go:195] Run: sudo crictl images --output json
I0703 04:31:00.151412   20000 main.go:141] libmachine: Making call to close driver server
I0703 04:31:00.151426   20000 main.go:141] libmachine: (functional-502505) Calling .Close
I0703 04:31:00.151697   20000 main.go:141] libmachine: Successfully made call to close driver server
I0703 04:31:00.151723   20000 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 04:31:00.151732   20000 main.go:141] libmachine: Making call to close driver server
I0703 04:31:00.151741   20000 main.go:141] libmachine: (functional-502505) Calling .Close
I0703 04:31:00.151765   20000 main.go:141] libmachine: (functional-502505) DBG | Closing plugin on server side
I0703 04:31:00.152050   20000 main.go:141] libmachine: (functional-502505) DBG | Closing plugin on server side
I0703 04:31:00.152054   20000 main.go:141] libmachine: Successfully made call to close driver server
I0703 04:31:00.152081   20000 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-502505 image ls --format yaml --alsologtostderr:
- id: sha256:fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests:
- docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
repoTags:
- docker.io/library/nginx:latest
size: "70984068"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-502505
size: "10823156"
- id: sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "19328121"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f
repoDigests:
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "28194900"
- id: sha256:1f88ebc65254dae909abe4612d4f9028ca35c67e118a828bdb55e3ceccfacdde
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-502505
size: "989"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "31138657"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "32768601"
- id: sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "29034457"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
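Even without a YAML parser, the listing above supports quick sanity checks from a shell, since each image entry starts with a top-level `- id:` key. A minimal sketch (two sample entries inlined, assuming POSIX grep):

```shell
# Sketch: count the images in `minikube image ls --format yaml` output by
# counting top-level "- id:" entries. Sample entries inlined from the log;
# a real script would pipe the minikube command into grep instead.
yaml='- id: sha256:fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoTags:
- docker.io/library/nginx:latest
size: "70984068"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"'
printf '%s\n' "$yaml" | grep -c '^- id:'   # prints 2
```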
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-502505 image ls --format yaml --alsologtostderr:
I0703 04:30:59.626758   19956 out.go:291] Setting OutFile to fd 1 ...
I0703 04:30:59.626877   19956 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 04:30:59.626887   19956 out.go:304] Setting ErrFile to fd 2...
I0703 04:30:59.626893   19956 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 04:30:59.627134   19956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
I0703 04:30:59.627715   19956 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 04:30:59.627839   19956 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 04:30:59.628330   19956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:59.628381   19956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:30:59.643433   19956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44605
I0703 04:30:59.644013   19956 main.go:141] libmachine: () Calling .GetVersion
I0703 04:30:59.644647   19956 main.go:141] libmachine: Using API Version  1
I0703 04:30:59.644671   19956 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:30:59.645072   19956 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:30:59.645240   19956 main.go:141] libmachine: (functional-502505) Calling .GetState
I0703 04:30:59.647213   19956 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:30:59.647246   19956 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:30:59.663988   19956 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
I0703 04:30:59.664405   19956 main.go:141] libmachine: () Calling .GetVersion
I0703 04:30:59.664975   19956 main.go:141] libmachine: Using API Version  1
I0703 04:30:59.665000   19956 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:30:59.665350   19956 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:30:59.665530   19956 main.go:141] libmachine: (functional-502505) Calling .DriverName
I0703 04:30:59.665703   19956 ssh_runner.go:195] Run: systemctl --version
I0703 04:30:59.665739   19956 main.go:141] libmachine: (functional-502505) Calling .GetSSHHostname
I0703 04:30:59.669429   19956 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:30:59.670006   19956 main.go:141] libmachine: (functional-502505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:3d:1d", ip: ""} in network mk-functional-502505: {Iface:virbr1 ExpiryTime:2024-07-03 05:27:57 +0000 UTC Type:0 Mac:52:54:00:5b:3d:1d Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-502505 Clientid:01:52:54:00:5b:3d:1d}
I0703 04:30:59.670047   19956 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined IP address 192.168.39.7 and MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:30:59.670097   19956 main.go:141] libmachine: (functional-502505) Calling .GetSSHPort
I0703 04:30:59.670270   19956 main.go:141] libmachine: (functional-502505) Calling .GetSSHKeyPath
I0703 04:30:59.670431   19956 main.go:141] libmachine: (functional-502505) Calling .GetSSHUsername
I0703 04:30:59.670562   19956 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19184-3680/.minikube/machines/functional-502505/id_rsa Username:docker}
I0703 04:30:59.786482   19956 ssh_runner.go:195] Run: sudo crictl images --output json
I0703 04:30:59.922476   19956 main.go:141] libmachine: Making call to close driver server
I0703 04:30:59.922501   19956 main.go:141] libmachine: (functional-502505) Calling .Close
I0703 04:30:59.922745   19956 main.go:141] libmachine: (functional-502505) DBG | Closing plugin on server side
I0703 04:30:59.922745   19956 main.go:141] libmachine: Successfully made call to close driver server
I0703 04:30:59.922773   19956 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 04:30:59.922786   19956 main.go:141] libmachine: Making call to close driver server
I0703 04:30:59.922797   19956 main.go:141] libmachine: (functional-502505) Calling .Close
I0703 04:30:59.922996   19956 main.go:141] libmachine: Successfully made call to close driver server
I0703 04:30:59.923010   19956 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 04:30:59.923083   19956 main.go:141] libmachine: (functional-502505) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-502505 ssh pgrep buildkitd: exit status 1 (195.716892ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image build -t localhost/my-image:functional-502505 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-502505 image build -t localhost/my-image:functional-502505 testdata/build --alsologtostderr: (4.017950911s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-502505 image build -t localhost/my-image:functional-502505 testdata/build --alsologtostderr:
I0703 04:31:00.396827   20074 out.go:291] Setting OutFile to fd 1 ...
I0703 04:31:00.396972   20074 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 04:31:00.396983   20074 out.go:304] Setting ErrFile to fd 2...
I0703 04:31:00.396989   20074 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 04:31:00.397281   20074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
I0703 04:31:00.398033   20074 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 04:31:00.398619   20074 config.go:182] Loaded profile config "functional-502505": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 04:31:00.399002   20074 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:31:00.399040   20074 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:31:00.413552   20074 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44275
I0703 04:31:00.414029   20074 main.go:141] libmachine: () Calling .GetVersion
I0703 04:31:00.414652   20074 main.go:141] libmachine: Using API Version  1
I0703 04:31:00.414674   20074 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:31:00.415054   20074 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:31:00.415270   20074 main.go:141] libmachine: (functional-502505) Calling .GetState
I0703 04:31:00.417419   20074 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0703 04:31:00.417469   20074 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 04:31:00.432349   20074 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41473
I0703 04:31:00.432742   20074 main.go:141] libmachine: () Calling .GetVersion
I0703 04:31:00.433278   20074 main.go:141] libmachine: Using API Version  1
I0703 04:31:00.433304   20074 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 04:31:00.433665   20074 main.go:141] libmachine: () Calling .GetMachineName
I0703 04:31:00.433878   20074 main.go:141] libmachine: (functional-502505) Calling .DriverName
I0703 04:31:00.434105   20074 ssh_runner.go:195] Run: systemctl --version
I0703 04:31:00.434131   20074 main.go:141] libmachine: (functional-502505) Calling .GetSSHHostname
I0703 04:31:00.436785   20074 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:31:00.437245   20074 main.go:141] libmachine: (functional-502505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:3d:1d", ip: ""} in network mk-functional-502505: {Iface:virbr1 ExpiryTime:2024-07-03 05:27:57 +0000 UTC Type:0 Mac:52:54:00:5b:3d:1d Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:functional-502505 Clientid:01:52:54:00:5b:3d:1d}
I0703 04:31:00.437283   20074 main.go:141] libmachine: (functional-502505) DBG | domain functional-502505 has defined IP address 192.168.39.7 and MAC address 52:54:00:5b:3d:1d in network mk-functional-502505
I0703 04:31:00.437429   20074 main.go:141] libmachine: (functional-502505) Calling .GetSSHPort
I0703 04:31:00.437590   20074 main.go:141] libmachine: (functional-502505) Calling .GetSSHKeyPath
I0703 04:31:00.437755   20074 main.go:141] libmachine: (functional-502505) Calling .GetSSHUsername
I0703 04:31:00.437920   20074 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19184-3680/.minikube/machines/functional-502505/id_rsa Username:docker}
I0703 04:31:00.519040   20074 build_images.go:161] Building image from path: /tmp/build.2375020667.tar
I0703 04:31:00.519118   20074 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0703 04:31:00.536342   20074 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2375020667.tar
I0703 04:31:00.544769   20074 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2375020667.tar: stat -c "%s %y" /var/lib/minikube/build/build.2375020667.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2375020667.tar': No such file or directory
I0703 04:31:00.544804   20074 ssh_runner.go:362] scp /tmp/build.2375020667.tar --> /var/lib/minikube/build/build.2375020667.tar (3072 bytes)
I0703 04:31:00.574711   20074 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2375020667
I0703 04:31:00.589506   20074 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2375020667 -xf /var/lib/minikube/build/build.2375020667.tar
I0703 04:31:00.607538   20074 containerd.go:394] Building image: /var/lib/minikube/build/build.2375020667
I0703 04:31:00.607612   20074 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2375020667 --local dockerfile=/var/lib/minikube/build/build.2375020667 --output type=image,name=localhost/my-image:functional-502505
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context:
#3 transferring context: 2B done
#3 DONE 0.2s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.7s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:c705f885486eb4070c8ea4dbbad92f97fec5187d38624ecf292401296c99507d
#8 exporting manifest sha256:c705f885486eb4070c8ea4dbbad92f97fec5187d38624ecf292401296c99507d 0.0s done
#8 exporting config sha256:a5454f1a0ba30ec16dd7a7c311a80919f5ba107f82903b6831790984e7e94356 0.0s done
#8 naming to localhost/my-image:functional-502505 done
#8 DONE 0.2s
I0703 04:31:04.335659   20074 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2375020667 --local dockerfile=/var/lib/minikube/build/build.2375020667 --output type=image,name=localhost/my-image:functional-502505: (3.728016675s)
I0703 04:31:04.335752   20074 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2375020667
I0703 04:31:04.348517   20074 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2375020667.tar
I0703 04:31:04.366886   20074 build_images.go:217] Built localhost/my-image:functional-502505 from /tmp/build.2375020667.tar
I0703 04:31:04.366922   20074 build_images.go:133] succeeded building to: functional-502505
I0703 04:31:04.366929   20074 build_images.go:134] failed building to: 
I0703 04:31:04.366954   20074 main.go:141] libmachine: Making call to close driver server
I0703 04:31:04.366971   20074 main.go:141] libmachine: (functional-502505) Calling .Close
I0703 04:31:04.367263   20074 main.go:141] libmachine: Successfully made call to close driver server
I0703 04:31:04.367282   20074 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 04:31:04.367292   20074 main.go:141] libmachine: Making call to close driver server
I0703 04:31:04.367300   20074 main.go:141] libmachine: (functional-502505) Calling .Close
I0703 04:31:04.367548   20074 main.go:141] libmachine: Successfully made call to close driver server
I0703 04:31:04.367565   20074 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.44s)

TestFunctional/parallel/ImageCommands/Setup (2.4s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.376951359s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-502505
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.40s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-502505 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2072258018/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-502505 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2072258018/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-502505 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2072258018/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-502505 ssh "findmnt -T" /mount1: exit status 1 (255.941328ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-502505 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-502505 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2072258018/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-502505 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2072258018/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-502505 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2072258018/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

TestFunctional/parallel/ServiceCmd/List (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 service list -o json
functional_test.go:1490: Took "289.661704ms" to run "out/minikube-linux-amd64 -p functional-502505 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.7:30848
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image load --daemon gcr.io/google-containers/addon-resizer:functional-502505 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-502505 image load --daemon gcr.io/google-containers/addon-resizer:functional-502505 --alsologtostderr: (4.604270672s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.84s)

TestFunctional/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

TestFunctional/parallel/ServiceCmd/URL (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.7:30848
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.37s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.37s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image load --daemon gcr.io/google-containers/addon-resizer:functional-502505 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-502505 image load --daemon gcr.io/google-containers/addon-resizer:functional-502505 --alsologtostderr: (3.074109285s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.35s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.357137219s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-502505
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image load --daemon gcr.io/google-containers/addon-resizer:functional-502505 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-502505 image load --daemon gcr.io/google-containers/addon-resizer:functional-502505 --alsologtostderr: (4.365305085s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.95s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image save gcr.io/google-containers/addon-resizer:functional-502505 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-502505 image save gcr.io/google-containers/addon-resizer:functional-502505 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.062432708s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.06s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image rm gcr.io/google-containers/addon-resizer:functional-502505 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-502505 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.685277998s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.97s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-502505
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-502505 image save --daemon gcr.io/google-containers/addon-resizer:functional-502505 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-502505 image save --daemon gcr.io/google-containers/addon-resizer:functional-502505 --alsologtostderr: (1.113750147s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-502505
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.22s)

TestFunctional/delete_addon-resizer_images (0.06s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-502505
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-502505
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-502505
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (220.59s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-141516 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0703 04:31:48.288961   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 04:34:04.448220   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 04:34:32.129444   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-141516 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m39.938060206s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (220.59s)

TestMultiControlPlane/serial/DeployApp (5.72s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-141516 -- rollout status deployment/busybox: (3.599515842s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- exec busybox-fc5497c4f-bzbxr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- exec busybox-fc5497c4f-ktw7s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- exec busybox-fc5497c4f-mj55n -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- exec busybox-fc5497c4f-bzbxr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- exec busybox-fc5497c4f-ktw7s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- exec busybox-fc5497c4f-mj55n -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- exec busybox-fc5497c4f-bzbxr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- exec busybox-fc5497c4f-ktw7s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- exec busybox-fc5497c4f-mj55n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.72s)

TestMultiControlPlane/serial/PingHostFromPods (1.15s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- exec busybox-fc5497c4f-bzbxr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- exec busybox-fc5497c4f-bzbxr -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- exec busybox-fc5497c4f-ktw7s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- exec busybox-fc5497c4f-ktw7s -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- exec busybox-fc5497c4f-mj55n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-141516 -- exec busybox-fc5497c4f-mj55n -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.15s)

TestMultiControlPlane/serial/AddWorkerNode (47.54s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-141516 -v=7 --alsologtostderr
E0703 04:35:26.340806   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 04:35:26.346104   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 04:35:26.356350   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 04:35:26.376608   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 04:35:26.416884   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 04:35:26.497587   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 04:35:26.657699   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 04:35:26.978865   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 04:35:27.619408   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 04:35:28.899591   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 04:35:31.460659   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 04:35:36.581341   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 04:35:46.821793   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-141516 -v=7 --alsologtostderr: (46.750738292s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.54s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-141516 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

TestMultiControlPlane/serial/CopyFile (12.27s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp testdata/cp-test.txt ha-141516:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp ha-141516:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1856660599/001/cp-test_ha-141516.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp ha-141516:/home/docker/cp-test.txt ha-141516-m02:/home/docker/cp-test_ha-141516_ha-141516-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m02 "sudo cat /home/docker/cp-test_ha-141516_ha-141516-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp ha-141516:/home/docker/cp-test.txt ha-141516-m03:/home/docker/cp-test_ha-141516_ha-141516-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m03 "sudo cat /home/docker/cp-test_ha-141516_ha-141516-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp ha-141516:/home/docker/cp-test.txt ha-141516-m04:/home/docker/cp-test_ha-141516_ha-141516-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m04 "sudo cat /home/docker/cp-test_ha-141516_ha-141516-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp testdata/cp-test.txt ha-141516-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp ha-141516-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1856660599/001/cp-test_ha-141516-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp ha-141516-m02:/home/docker/cp-test.txt ha-141516:/home/docker/cp-test_ha-141516-m02_ha-141516.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516 "sudo cat /home/docker/cp-test_ha-141516-m02_ha-141516.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp ha-141516-m02:/home/docker/cp-test.txt ha-141516-m03:/home/docker/cp-test_ha-141516-m02_ha-141516-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m03 "sudo cat /home/docker/cp-test_ha-141516-m02_ha-141516-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp ha-141516-m02:/home/docker/cp-test.txt ha-141516-m04:/home/docker/cp-test_ha-141516-m02_ha-141516-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m04 "sudo cat /home/docker/cp-test_ha-141516-m02_ha-141516-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp testdata/cp-test.txt ha-141516-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp ha-141516-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1856660599/001/cp-test_ha-141516-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp ha-141516-m03:/home/docker/cp-test.txt ha-141516:/home/docker/cp-test_ha-141516-m03_ha-141516.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516 "sudo cat /home/docker/cp-test_ha-141516-m03_ha-141516.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp ha-141516-m03:/home/docker/cp-test.txt ha-141516-m02:/home/docker/cp-test_ha-141516-m03_ha-141516-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m02 "sudo cat /home/docker/cp-test_ha-141516-m03_ha-141516-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp ha-141516-m03:/home/docker/cp-test.txt ha-141516-m04:/home/docker/cp-test_ha-141516-m03_ha-141516-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m04 "sudo cat /home/docker/cp-test_ha-141516-m03_ha-141516-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp testdata/cp-test.txt ha-141516-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp ha-141516-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1856660599/001/cp-test_ha-141516-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp ha-141516-m04:/home/docker/cp-test.txt ha-141516:/home/docker/cp-test_ha-141516-m04_ha-141516.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516 "sudo cat /home/docker/cp-test_ha-141516-m04_ha-141516.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp ha-141516-m04:/home/docker/cp-test.txt ha-141516-m02:/home/docker/cp-test_ha-141516-m04_ha-141516-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m02 "sudo cat /home/docker/cp-test_ha-141516-m04_ha-141516-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 cp ha-141516-m04:/home/docker/cp-test.txt ha-141516-m03:/home/docker/cp-test_ha-141516-m04_ha-141516-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 ssh -n ha-141516-m03 "sudo cat /home/docker/cp-test_ha-141516-m04_ha-141516-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.27s)
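The copy matrix above follows a fixed pattern: seed each node with testdata/cp-test.txt, then copy it from every source node to every other node and read the result back over ssh on the destination. A minimal sketch of that pattern follows; the profile and node names are taken from this run, the helper function name is ours, and it only prints the commands (the real test also mirrors each file to a local /tmp directory and re-reads the source after every copy).

```shell
#!/bin/sh
# Sketch of the command matrix TestMultiControlPlane/serial/CopyFile walks.
# Nothing here talks to a real cluster; the commands are only printed.
PROFILE=ha-141516
NODES="ha-141516 ha-141516-m02 ha-141516-m03 ha-141516-m04"

print_copy_matrix() {
  for src in $NODES; do
    # seed the source node, then verify the file landed
    echo "out/minikube-linux-amd64 -p $PROFILE cp testdata/cp-test.txt $src:/home/docker/cp-test.txt"
    echo "out/minikube-linux-amd64 -p $PROFILE ssh -n $src \"sudo cat /home/docker/cp-test.txt\""
    for dst in $NODES; do
      [ "$src" = "$dst" ] && continue
      # cross-copy to every other node and read it back there
      echo "out/minikube-linux-amd64 -p $PROFILE cp $src:/home/docker/cp-test.txt $dst:/home/docker/cp-test_${src}_${dst}.txt"
      echo "out/minikube-linux-amd64 -p $PROFILE ssh -n $dst \"sudo cat /home/docker/cp-test_${src}_${dst}.txt\""
    done
  done
}

print_copy_matrix
```

With 4 nodes this expands to 4 seed-and-verify pairs plus 4×3 cross-copy pairs, which is why the log above repeats the same two helpers so many times.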
TestMultiControlPlane/serial/StopSecondaryNode (92.21s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 node stop m02 -v=7 --alsologtostderr
E0703 04:36:07.302190   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 04:36:48.262488   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-141516 node stop m02 -v=7 --alsologtostderr: (1m31.596044939s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-141516 status -v=7 --alsologtostderr: exit status 7 (611.911572ms)
-- stdout --
	ha-141516
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-141516-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-141516-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-141516-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0703 04:37:33.062617   24723 out.go:291] Setting OutFile to fd 1 ...
	I0703 04:37:33.062718   24723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 04:37:33.062729   24723 out.go:304] Setting ErrFile to fd 2...
	I0703 04:37:33.062734   24723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 04:37:33.062950   24723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
	I0703 04:37:33.063154   24723 out.go:298] Setting JSON to false
	I0703 04:37:33.063181   24723 mustload.go:65] Loading cluster: ha-141516
	I0703 04:37:33.063289   24723 notify.go:220] Checking for updates...
	I0703 04:37:33.063620   24723 config.go:182] Loaded profile config "ha-141516": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0703 04:37:33.063639   24723 status.go:255] checking status of ha-141516 ...
	I0703 04:37:33.064077   24723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:37:33.064124   24723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:37:33.079621   24723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37089
	I0703 04:37:33.080037   24723 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:37:33.080640   24723 main.go:141] libmachine: Using API Version  1
	I0703 04:37:33.080659   24723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:37:33.081094   24723 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:37:33.081323   24723 main.go:141] libmachine: (ha-141516) Calling .GetState
	I0703 04:37:33.082901   24723 status.go:330] ha-141516 host status = "Running" (err=<nil>)
	I0703 04:37:33.082914   24723 host.go:66] Checking if "ha-141516" exists ...
	I0703 04:37:33.083209   24723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:37:33.083257   24723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:37:33.097457   24723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41155
	I0703 04:37:33.097794   24723 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:37:33.098247   24723 main.go:141] libmachine: Using API Version  1
	I0703 04:37:33.098274   24723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:37:33.098551   24723 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:37:33.098738   24723 main.go:141] libmachine: (ha-141516) Calling .GetIP
	I0703 04:37:33.101286   24723 main.go:141] libmachine: (ha-141516) DBG | domain ha-141516 has defined MAC address 52:54:00:47:fb:d7 in network mk-ha-141516
	I0703 04:37:33.101700   24723 main.go:141] libmachine: (ha-141516) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:fb:d7", ip: ""} in network mk-ha-141516: {Iface:virbr1 ExpiryTime:2024-07-03 05:31:27 +0000 UTC Type:0 Mac:52:54:00:47:fb:d7 Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-141516 Clientid:01:52:54:00:47:fb:d7}
	I0703 04:37:33.101729   24723 main.go:141] libmachine: (ha-141516) DBG | domain ha-141516 has defined IP address 192.168.39.83 and MAC address 52:54:00:47:fb:d7 in network mk-ha-141516
	I0703 04:37:33.101868   24723 host.go:66] Checking if "ha-141516" exists ...
	I0703 04:37:33.102150   24723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:37:33.102182   24723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:37:33.117086   24723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
	I0703 04:37:33.117467   24723 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:37:33.117895   24723 main.go:141] libmachine: Using API Version  1
	I0703 04:37:33.117917   24723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:37:33.118238   24723 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:37:33.118425   24723 main.go:141] libmachine: (ha-141516) Calling .DriverName
	I0703 04:37:33.118609   24723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 04:37:33.118649   24723 main.go:141] libmachine: (ha-141516) Calling .GetSSHHostname
	I0703 04:37:33.121256   24723 main.go:141] libmachine: (ha-141516) DBG | domain ha-141516 has defined MAC address 52:54:00:47:fb:d7 in network mk-ha-141516
	I0703 04:37:33.121720   24723 main.go:141] libmachine: (ha-141516) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:fb:d7", ip: ""} in network mk-ha-141516: {Iface:virbr1 ExpiryTime:2024-07-03 05:31:27 +0000 UTC Type:0 Mac:52:54:00:47:fb:d7 Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-141516 Clientid:01:52:54:00:47:fb:d7}
	I0703 04:37:33.121743   24723 main.go:141] libmachine: (ha-141516) DBG | domain ha-141516 has defined IP address 192.168.39.83 and MAC address 52:54:00:47:fb:d7 in network mk-ha-141516
	I0703 04:37:33.121896   24723 main.go:141] libmachine: (ha-141516) Calling .GetSSHPort
	I0703 04:37:33.122060   24723 main.go:141] libmachine: (ha-141516) Calling .GetSSHKeyPath
	I0703 04:37:33.122230   24723 main.go:141] libmachine: (ha-141516) Calling .GetSSHUsername
	I0703 04:37:33.122361   24723 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19184-3680/.minikube/machines/ha-141516/id_rsa Username:docker}
	I0703 04:37:33.205172   24723 ssh_runner.go:195] Run: systemctl --version
	I0703 04:37:33.213003   24723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 04:37:33.229141   24723 kubeconfig.go:125] found "ha-141516" server: "https://192.168.39.254:8443"
	I0703 04:37:33.229172   24723 api_server.go:166] Checking apiserver status ...
	I0703 04:37:33.229211   24723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 04:37:33.244521   24723 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1228/cgroup
	W0703 04:37:33.254127   24723 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1228/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0703 04:37:33.254175   24723 ssh_runner.go:195] Run: ls
	I0703 04:37:33.258850   24723 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0703 04:37:33.262963   24723 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0703 04:37:33.262984   24723 status.go:422] ha-141516 apiserver status = Running (err=<nil>)
	I0703 04:37:33.262994   24723 status.go:257] ha-141516 status: &{Name:ha-141516 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 04:37:33.263016   24723 status.go:255] checking status of ha-141516-m02 ...
	I0703 04:37:33.263309   24723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:37:33.263340   24723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:37:33.277969   24723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40951
	I0703 04:37:33.278368   24723 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:37:33.278813   24723 main.go:141] libmachine: Using API Version  1
	I0703 04:37:33.278834   24723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:37:33.279084   24723 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:37:33.279266   24723 main.go:141] libmachine: (ha-141516-m02) Calling .GetState
	I0703 04:37:33.280716   24723 status.go:330] ha-141516-m02 host status = "Stopped" (err=<nil>)
	I0703 04:37:33.280733   24723 status.go:343] host is not running, skipping remaining checks
	I0703 04:37:33.280741   24723 status.go:257] ha-141516-m02 status: &{Name:ha-141516-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 04:37:33.280760   24723 status.go:255] checking status of ha-141516-m03 ...
	I0703 04:37:33.281048   24723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:37:33.281081   24723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:37:33.296133   24723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
	I0703 04:37:33.296514   24723 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:37:33.296940   24723 main.go:141] libmachine: Using API Version  1
	I0703 04:37:33.296959   24723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:37:33.297237   24723 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:37:33.297379   24723 main.go:141] libmachine: (ha-141516-m03) Calling .GetState
	I0703 04:37:33.298618   24723 status.go:330] ha-141516-m03 host status = "Running" (err=<nil>)
	I0703 04:37:33.298635   24723 host.go:66] Checking if "ha-141516-m03" exists ...
	I0703 04:37:33.298915   24723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:37:33.298944   24723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:37:33.313076   24723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37959
	I0703 04:37:33.313454   24723 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:37:33.313914   24723 main.go:141] libmachine: Using API Version  1
	I0703 04:37:33.313928   24723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:37:33.314260   24723 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:37:33.314442   24723 main.go:141] libmachine: (ha-141516-m03) Calling .GetIP
	I0703 04:37:33.317824   24723 main.go:141] libmachine: (ha-141516-m03) DBG | domain ha-141516-m03 has defined MAC address 52:54:00:ab:d1:86 in network mk-ha-141516
	I0703 04:37:33.318240   24723 main.go:141] libmachine: (ha-141516-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:d1:86", ip: ""} in network mk-ha-141516: {Iface:virbr1 ExpiryTime:2024-07-03 05:33:55 +0000 UTC Type:0 Mac:52:54:00:ab:d1:86 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-141516-m03 Clientid:01:52:54:00:ab:d1:86}
	I0703 04:37:33.318275   24723 main.go:141] libmachine: (ha-141516-m03) DBG | domain ha-141516-m03 has defined IP address 192.168.39.212 and MAC address 52:54:00:ab:d1:86 in network mk-ha-141516
	I0703 04:37:33.318377   24723 host.go:66] Checking if "ha-141516-m03" exists ...
	I0703 04:37:33.318768   24723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:37:33.318819   24723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:37:33.334725   24723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34233
	I0703 04:37:33.335208   24723 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:37:33.335796   24723 main.go:141] libmachine: Using API Version  1
	I0703 04:37:33.335825   24723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:37:33.336174   24723 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:37:33.336349   24723 main.go:141] libmachine: (ha-141516-m03) Calling .DriverName
	I0703 04:37:33.336534   24723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 04:37:33.336555   24723 main.go:141] libmachine: (ha-141516-m03) Calling .GetSSHHostname
	I0703 04:37:33.339046   24723 main.go:141] libmachine: (ha-141516-m03) DBG | domain ha-141516-m03 has defined MAC address 52:54:00:ab:d1:86 in network mk-ha-141516
	I0703 04:37:33.339495   24723 main.go:141] libmachine: (ha-141516-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:d1:86", ip: ""} in network mk-ha-141516: {Iface:virbr1 ExpiryTime:2024-07-03 05:33:55 +0000 UTC Type:0 Mac:52:54:00:ab:d1:86 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:ha-141516-m03 Clientid:01:52:54:00:ab:d1:86}
	I0703 04:37:33.339522   24723 main.go:141] libmachine: (ha-141516-m03) DBG | domain ha-141516-m03 has defined IP address 192.168.39.212 and MAC address 52:54:00:ab:d1:86 in network mk-ha-141516
	I0703 04:37:33.339623   24723 main.go:141] libmachine: (ha-141516-m03) Calling .GetSSHPort
	I0703 04:37:33.339799   24723 main.go:141] libmachine: (ha-141516-m03) Calling .GetSSHKeyPath
	I0703 04:37:33.339987   24723 main.go:141] libmachine: (ha-141516-m03) Calling .GetSSHUsername
	I0703 04:37:33.340111   24723 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19184-3680/.minikube/machines/ha-141516-m03/id_rsa Username:docker}
	I0703 04:37:33.421033   24723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 04:37:33.437920   24723 kubeconfig.go:125] found "ha-141516" server: "https://192.168.39.254:8443"
	I0703 04:37:33.437951   24723 api_server.go:166] Checking apiserver status ...
	I0703 04:37:33.437982   24723 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 04:37:33.453763   24723 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1263/cgroup
	W0703 04:37:33.464229   24723 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1263/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0703 04:37:33.464294   24723 ssh_runner.go:195] Run: ls
	I0703 04:37:33.468540   24723 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0703 04:37:33.472773   24723 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0703 04:37:33.472790   24723 status.go:422] ha-141516-m03 apiserver status = Running (err=<nil>)
	I0703 04:37:33.472797   24723 status.go:257] ha-141516-m03 status: &{Name:ha-141516-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 04:37:33.472812   24723 status.go:255] checking status of ha-141516-m04 ...
	I0703 04:37:33.473163   24723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:37:33.473202   24723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:37:33.487726   24723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44397
	I0703 04:37:33.488187   24723 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:37:33.488668   24723 main.go:141] libmachine: Using API Version  1
	I0703 04:37:33.488688   24723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:37:33.488970   24723 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:37:33.489146   24723 main.go:141] libmachine: (ha-141516-m04) Calling .GetState
	I0703 04:37:33.490758   24723 status.go:330] ha-141516-m04 host status = "Running" (err=<nil>)
	I0703 04:37:33.490779   24723 host.go:66] Checking if "ha-141516-m04" exists ...
	I0703 04:37:33.491091   24723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:37:33.491131   24723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:37:33.508163   24723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35061
	I0703 04:37:33.508617   24723 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:37:33.509183   24723 main.go:141] libmachine: Using API Version  1
	I0703 04:37:33.509204   24723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:37:33.509553   24723 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:37:33.509792   24723 main.go:141] libmachine: (ha-141516-m04) Calling .GetIP
	I0703 04:37:33.512747   24723 main.go:141] libmachine: (ha-141516-m04) DBG | domain ha-141516-m04 has defined MAC address 52:54:00:dd:89:c6 in network mk-ha-141516
	I0703 04:37:33.513136   24723 main.go:141] libmachine: (ha-141516-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:89:c6", ip: ""} in network mk-ha-141516: {Iface:virbr1 ExpiryTime:2024-07-03 05:35:16 +0000 UTC Type:0 Mac:52:54:00:dd:89:c6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-141516-m04 Clientid:01:52:54:00:dd:89:c6}
	I0703 04:37:33.513159   24723 main.go:141] libmachine: (ha-141516-m04) DBG | domain ha-141516-m04 has defined IP address 192.168.39.6 and MAC address 52:54:00:dd:89:c6 in network mk-ha-141516
	I0703 04:37:33.513326   24723 host.go:66] Checking if "ha-141516-m04" exists ...
	I0703 04:37:33.513743   24723 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:37:33.513786   24723 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:37:33.528906   24723 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36127
	I0703 04:37:33.529293   24723 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:37:33.529760   24723 main.go:141] libmachine: Using API Version  1
	I0703 04:37:33.529779   24723 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:37:33.530055   24723 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:37:33.530262   24723 main.go:141] libmachine: (ha-141516-m04) Calling .DriverName
	I0703 04:37:33.530430   24723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 04:37:33.530446   24723 main.go:141] libmachine: (ha-141516-m04) Calling .GetSSHHostname
	I0703 04:37:33.533144   24723 main.go:141] libmachine: (ha-141516-m04) DBG | domain ha-141516-m04 has defined MAC address 52:54:00:dd:89:c6 in network mk-ha-141516
	I0703 04:37:33.533568   24723 main.go:141] libmachine: (ha-141516-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:89:c6", ip: ""} in network mk-ha-141516: {Iface:virbr1 ExpiryTime:2024-07-03 05:35:16 +0000 UTC Type:0 Mac:52:54:00:dd:89:c6 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-141516-m04 Clientid:01:52:54:00:dd:89:c6}
	I0703 04:37:33.533594   24723 main.go:141] libmachine: (ha-141516-m04) DBG | domain ha-141516-m04 has defined IP address 192.168.39.6 and MAC address 52:54:00:dd:89:c6 in network mk-ha-141516
	I0703 04:37:33.533757   24723 main.go:141] libmachine: (ha-141516-m04) Calling .GetSSHPort
	I0703 04:37:33.533927   24723 main.go:141] libmachine: (ha-141516-m04) Calling .GetSSHKeyPath
	I0703 04:37:33.534072   24723 main.go:141] libmachine: (ha-141516-m04) Calling .GetSSHUsername
	I0703 04:37:33.534196   24723 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19184-3680/.minikube/machines/ha-141516-m04/id_rsa Username:docker}
	I0703 04:37:33.616659   24723 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 04:37:33.632095   24723 status.go:257] ha-141516-m04 status: &{Name:ha-141516-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (92.21s)
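The `exit status 7` above is how `minikube status` reports a cluster with some hosts down, rather than a command failure. What the test asserts on is the per-node `host:` lines in that stdout; the snippet below replays an abridged copy of the status text captured above through a simple count (purely illustrative, no cluster needed).

```shell
#!/bin/sh
# Count stopped vs. running hosts in a `minikube status` dump. The
# here-doc is an abridged copy of the stdout captured above; only the
# fields the grep looks at are kept.
status_text=$(cat <<'EOF'
ha-141516
type: Control Plane
host: Running
ha-141516-m02
type: Control Plane
host: Stopped
ha-141516-m03
type: Control Plane
host: Running
ha-141516-m04
type: Worker
host: Running
EOF
)

stopped=$(printf '%s\n' "$status_text" | grep -c '^host: Stopped')
running=$(printf '%s\n' "$status_text" | grep -c '^host: Running')
echo "stopped=$stopped running=$running"
```

For this run that yields one stopped host (m02) and three running ones, matching the expected state after `node stop m02`.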
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)
TestMultiControlPlane/serial/RestartSecondaryNode (41.17s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 node start m02 -v=7 --alsologtostderr
E0703 04:38:10.183413   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-141516 node start m02 -v=7 --alsologtostderr: (40.304887297s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (41.17s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.52s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.52s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (429.11s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-141516 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-141516 -v=7 --alsologtostderr
E0703 04:39:04.445695   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 04:40:26.340779   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 04:40:54.024194   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-141516 -v=7 --alsologtostderr: (4m35.943415948s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-141516 --wait=true -v=7 --alsologtostderr
E0703 04:44:04.444843   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-141516 --wait=true -v=7 --alsologtostderr: (2m33.079859436s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-141516
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (429.11s)
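The test above records `node list` before the full stop and runs it again after the restart; the pass condition is that the two lists agree. A trivial sketch of that comparison (node names hard-coded from this run; in the real test both lists come from `out/minikube-linux-amd64 node list -p ha-141516`):

```shell
#!/bin/sh
# Sketch of the RestartClusterKeepsNodes check: the node list captured
# before `stop` must match the list after `start --wait=true`.
before="ha-141516
ha-141516-m02
ha-141516-m03
ha-141516-m04"
# A successful restart reproduces the same list; here we stand in for
# the second `node list` call with a copy.
after="$before"

if [ "$before" = "$after" ]; then
  echo "nodes preserved"
else
  echo "node list changed" >&2
  exit 1
fi
```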
TestMultiControlPlane/serial/DeleteSecondaryNode (7.72s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 node delete m03 -v=7 --alsologtostderr
E0703 04:45:26.340761   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 04:45:27.490245   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-141516 node delete m03 -v=7 --alsologtostderr: (6.99707331s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.72s)
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.35s)

TestMultiControlPlane/serial/StopCluster (274.57s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 stop -v=7 --alsologtostderr
E0703 04:49:04.444654   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-141516 stop -v=7 --alsologtostderr: (4m34.467489628s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-141516 status -v=7 --alsologtostderr: exit status 7 (99.722912ms)

-- stdout --
	ha-141516
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-141516-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-141516-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0703 04:50:07.407679   28538 out.go:291] Setting OutFile to fd 1 ...
	I0703 04:50:07.408174   28538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 04:50:07.408227   28538 out.go:304] Setting ErrFile to fd 2...
	I0703 04:50:07.408244   28538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 04:50:07.408672   28538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
	I0703 04:50:07.409102   28538 out.go:298] Setting JSON to false
	I0703 04:50:07.409129   28538 mustload.go:65] Loading cluster: ha-141516
	I0703 04:50:07.409173   28538 notify.go:220] Checking for updates...
	I0703 04:50:07.409472   28538 config.go:182] Loaded profile config "ha-141516": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0703 04:50:07.409489   28538 status.go:255] checking status of ha-141516 ...
	I0703 04:50:07.409826   28538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:50:07.409864   28538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:50:07.427546   28538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44523
	I0703 04:50:07.427941   28538 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:50:07.428599   28538 main.go:141] libmachine: Using API Version  1
	I0703 04:50:07.428627   28538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:50:07.429051   28538 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:50:07.429221   28538 main.go:141] libmachine: (ha-141516) Calling .GetState
	I0703 04:50:07.430735   28538 status.go:330] ha-141516 host status = "Stopped" (err=<nil>)
	I0703 04:50:07.430748   28538 status.go:343] host is not running, skipping remaining checks
	I0703 04:50:07.430754   28538 status.go:257] ha-141516 status: &{Name:ha-141516 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 04:50:07.430779   28538 status.go:255] checking status of ha-141516-m02 ...
	I0703 04:50:07.431034   28538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:50:07.431068   28538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:50:07.445198   28538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38851
	I0703 04:50:07.445563   28538 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:50:07.445969   28538 main.go:141] libmachine: Using API Version  1
	I0703 04:50:07.445990   28538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:50:07.446277   28538 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:50:07.446443   28538 main.go:141] libmachine: (ha-141516-m02) Calling .GetState
	I0703 04:50:07.448005   28538 status.go:330] ha-141516-m02 host status = "Stopped" (err=<nil>)
	I0703 04:50:07.448023   28538 status.go:343] host is not running, skipping remaining checks
	I0703 04:50:07.448031   28538 status.go:257] ha-141516-m02 status: &{Name:ha-141516-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 04:50:07.448068   28538 status.go:255] checking status of ha-141516-m04 ...
	I0703 04:50:07.448356   28538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 04:50:07.448391   28538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 04:50:07.462168   28538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42039
	I0703 04:50:07.462541   28538 main.go:141] libmachine: () Calling .GetVersion
	I0703 04:50:07.462982   28538 main.go:141] libmachine: Using API Version  1
	I0703 04:50:07.462999   28538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 04:50:07.463241   28538 main.go:141] libmachine: () Calling .GetMachineName
	I0703 04:50:07.463447   28538 main.go:141] libmachine: (ha-141516-m04) Calling .GetState
	I0703 04:50:07.464842   28538 status.go:330] ha-141516-m04 host status = "Stopped" (err=<nil>)
	I0703 04:50:07.464855   28538 status.go:343] host is not running, skipping remaining checks
	I0703 04:50:07.464862   28538 status.go:257] ha-141516-m04 status: &{Name:ha-141516-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (274.57s)

TestMultiControlPlane/serial/RestartCluster (148.21s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-141516 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0703 04:50:26.340531   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 04:51:49.384442   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-141516 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m27.479288288s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (148.21s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

TestMultiControlPlane/serial/AddSecondaryNode (67.98s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-141516 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-141516 --control-plane -v=7 --alsologtostderr: (1m7.165795998s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-141516 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (67.98s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

TestJSONOutput/start/Command (55.23s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-718405 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0703 04:54:04.448664   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-718405 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (55.230802479s)
--- PASS: TestJSONOutput/start/Command (55.23s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.65s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-718405 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-718405 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.57s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-718405 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-718405 --output=json --user=testUser: (6.573213324s)
--- PASS: TestJSONOutput/stop/Command (6.57s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-850010 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-850010 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.458434ms)

-- stdout --
	{"specversion":"1.0","id":"4d16785b-b33b-4bb4-826e-06a7a2ac36c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-850010] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5a38d23e-a109-43df-85d8-b79ad5d63d3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19184"}}
	{"specversion":"1.0","id":"12df0e34-1419-4438-8b09-abad663b8871","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b532a2ba-1f40-4141-a6d0-7fa103d7f670","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19184-3680/kubeconfig"}}
	{"specversion":"1.0","id":"a30f7208-046e-47b7-a152-c2f99d6c4961","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19184-3680/.minikube"}}
	{"specversion":"1.0","id":"a9a58322-68af-42c9-a949-e4b2e2b8cf1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d6e6417a-4afa-40e0-9a2e-585c1e5867dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4e7c6cd4-759a-464f-92a1-3acd30cb0d9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-850010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-850010
--- PASS: TestErrorJSONOutput (0.18s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (96.58s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-758667 --driver=kvm2  --container-runtime=containerd
E0703 04:55:26.340481   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-758667 --driver=kvm2  --container-runtime=containerd: (45.504645761s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-761022 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-761022 --driver=kvm2  --container-runtime=containerd: (48.531945756s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-758667
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-761022
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-761022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-761022
helpers_test.go:175: Cleaning up "first-758667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-758667
--- PASS: TestMinikubeProfile (96.58s)

TestMountStart/serial/StartWithMountFirst (28.35s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-792926 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-792926 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.34556527s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.35s)

TestMountStart/serial/VerifyMountFirst (0.35s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-792926 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-792926 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)

TestMountStart/serial/StartWithMountSecond (28.33s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-809073 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-809073 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.328188041s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.33s)

TestMountStart/serial/VerifyMountSecond (0.34s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-809073 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-809073 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.34s)

TestMountStart/serial/DeleteFirst (0.87s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-792926 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.87s)

TestMountStart/serial/VerifyMountPostDelete (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-809073 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-809073 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-809073
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-809073: (1.267754598s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (24.08s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-809073
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-809073: (23.074839768s)
--- PASS: TestMountStart/serial/RestartStopped (24.08s)

TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-809073 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-809073 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

TestMultiNode/serial/FreshStart2Nodes (102.73s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-718545 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0703 04:59:04.445703   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-718545 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m42.323364977s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (102.73s)

TestMultiNode/serial/DeployApp2Nodes (4.85s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-718545 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-718545 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-718545 -- rollout status deployment/busybox: (3.461392492s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-718545 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-718545 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-718545 -- exec busybox-fc5497c4f-8v54t -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-718545 -- exec busybox-fc5497c4f-ggvcd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-718545 -- exec busybox-fc5497c4f-8v54t -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-718545 -- exec busybox-fc5497c4f-ggvcd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-718545 -- exec busybox-fc5497c4f-8v54t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-718545 -- exec busybox-fc5497c4f-ggvcd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.85s)
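The lines above show the test reading `{.items[*].status.podIP}` from kubectl, which yields a space-separated list of pod IPs, and then exec-ing `nslookup` in each replica. A minimal sketch of the IP-list check, using hypothetical canned output in place of a live cluster (the addresses and the "two replicas" assumption are illustrative, not taken from this run):

```shell
# Sketch: '{.items[*].status.podIP}' returns pod IPs separated by spaces.
# These two addresses are hypothetical stand-ins for the busybox replicas.
pod_ips="10.244.0.3 10.244.1.2"

set -- $pod_ips   # word-split the list, one positional parameter per IP
if [ "$#" -eq 2 ] && [ "$1" != "$2" ]; then
  verdict="two distinct pod IPs"
else
  verdict="unexpected pod IP list"
fi
echo "$verdict"
```

Distinct pod IPs are what confirms the deployment actually spread across both nodes before the DNS lookups run.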

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-718545 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-718545 -- exec busybox-fc5497c4f-8v54t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-718545 -- exec busybox-fc5497c4f-8v54t -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-718545 -- exec busybox-fc5497c4f-ggvcd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-718545 -- exec busybox-fc5497c4f-ggvcd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)
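The pipeline the test runs in each pod, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, picks the IP out of line 5 of busybox-style `nslookup` output. A runnable sketch against canned output (the resolver output below is a hypothetical sample; busybox `nslookup` formatting varies by version, which is why the test pins a specific line number):

```shell
# Canned busybox-style nslookup output stands in for a pod's resolver.
nslookup_output='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1'

# Line 5 is "Address 1: <ip>"; field 3 (space-delimited) is the IP itself.
host_ip=$(printf '%s\n' "$nslookup_output" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The extracted address is the host gateway the test then reaches with `ping -c 1`, as in the log lines above.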

                                                
                                    
TestMultiNode/serial/AddNode (43.16s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-718545 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-718545 -v 3 --alsologtostderr: (42.608543856s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.16s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-718545 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.2s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.86s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 cp testdata/cp-test.txt multinode-718545:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 cp multinode-718545:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3558214802/001/cp-test_multinode-718545.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 cp multinode-718545:/home/docker/cp-test.txt multinode-718545-m02:/home/docker/cp-test_multinode-718545_multinode-718545-m02.txt
E0703 05:00:26.340599   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545-m02 "sudo cat /home/docker/cp-test_multinode-718545_multinode-718545-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 cp multinode-718545:/home/docker/cp-test.txt multinode-718545-m03:/home/docker/cp-test_multinode-718545_multinode-718545-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545-m03 "sudo cat /home/docker/cp-test_multinode-718545_multinode-718545-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 cp testdata/cp-test.txt multinode-718545-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 cp multinode-718545-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3558214802/001/cp-test_multinode-718545-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 cp multinode-718545-m02:/home/docker/cp-test.txt multinode-718545:/home/docker/cp-test_multinode-718545-m02_multinode-718545.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545 "sudo cat /home/docker/cp-test_multinode-718545-m02_multinode-718545.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 cp multinode-718545-m02:/home/docker/cp-test.txt multinode-718545-m03:/home/docker/cp-test_multinode-718545-m02_multinode-718545-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545-m03 "sudo cat /home/docker/cp-test_multinode-718545-m02_multinode-718545-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 cp testdata/cp-test.txt multinode-718545-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 cp multinode-718545-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3558214802/001/cp-test_multinode-718545-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 cp multinode-718545-m03:/home/docker/cp-test.txt multinode-718545:/home/docker/cp-test_multinode-718545-m03_multinode-718545.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545 "sudo cat /home/docker/cp-test_multinode-718545-m03_multinode-718545.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 cp multinode-718545-m03:/home/docker/cp-test.txt multinode-718545-m02:/home/docker/cp-test_multinode-718545-m03_multinode-718545-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 ssh -n multinode-718545-m02 "sudo cat /home/docker/cp-test_multinode-718545-m03_multinode-718545-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.86s)
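Each `cp` / `ssh "sudo cat …"` pair above is a round-trip check: copy a file to or between nodes, read it back, and confirm the content survived. A local sketch of that pattern, with plain `cp` standing in for `minikube cp` and a scratch directory standing in for a node's `/home/docker` (all paths here are hypothetical):

```shell
# Round-trip sketch: write a file, "copy it to another node", read it back, compare.
workdir=$(mktemp -d)
printf 'Test file for minikube cp\n' > "$workdir/cp-test.txt"

# Stand-in for: minikube cp <src-node>:/home/docker/cp-test.txt <dst-node>:...
cp "$workdir/cp-test.txt" "$workdir/cp-test_roundtrip.txt"

# Stand-in for: minikube ssh -n <node> "sudo cat ..." followed by a comparison.
if cmp -s "$workdir/cp-test.txt" "$workdir/cp-test_roundtrip.txt"; then
  verdict=match
else
  verdict=mismatch
fi
echo "$verdict"
rm -rf "$workdir"
```

The real test repeats this for every source/destination node pair, which is why the log shows the same `cp`-then-`cat` shape nine times over.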

                                                
                                    
TestMultiNode/serial/StopNode (2.11s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-718545 node stop m03: (1.305252387s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-718545 status: exit status 7 (410.398854ms)

                                                
                                                
-- stdout --
	multinode-718545
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-718545-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-718545-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-718545 status --alsologtostderr: exit status 7 (396.816845ms)

                                                
                                                
-- stdout --
	multinode-718545
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-718545-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-718545-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 05:00:33.592615   36018 out.go:291] Setting OutFile to fd 1 ...
	I0703 05:00:33.592720   36018 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 05:00:33.592728   36018 out.go:304] Setting ErrFile to fd 2...
	I0703 05:00:33.592732   36018 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 05:00:33.592915   36018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
	I0703 05:00:33.593058   36018 out.go:298] Setting JSON to false
	I0703 05:00:33.593080   36018 mustload.go:65] Loading cluster: multinode-718545
	I0703 05:00:33.593124   36018 notify.go:220] Checking for updates...
	I0703 05:00:33.593471   36018 config.go:182] Loaded profile config "multinode-718545": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0703 05:00:33.593488   36018 status.go:255] checking status of multinode-718545 ...
	I0703 05:00:33.593927   36018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 05:00:33.594001   36018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 05:00:33.612393   36018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37887
	I0703 05:00:33.612841   36018 main.go:141] libmachine: () Calling .GetVersion
	I0703 05:00:33.613429   36018 main.go:141] libmachine: Using API Version  1
	I0703 05:00:33.613455   36018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 05:00:33.613791   36018 main.go:141] libmachine: () Calling .GetMachineName
	I0703 05:00:33.613983   36018 main.go:141] libmachine: (multinode-718545) Calling .GetState
	I0703 05:00:33.615546   36018 status.go:330] multinode-718545 host status = "Running" (err=<nil>)
	I0703 05:00:33.615562   36018 host.go:66] Checking if "multinode-718545" exists ...
	I0703 05:00:33.615890   36018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 05:00:33.615921   36018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 05:00:33.630680   36018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40063
	I0703 05:00:33.631101   36018 main.go:141] libmachine: () Calling .GetVersion
	I0703 05:00:33.631545   36018 main.go:141] libmachine: Using API Version  1
	I0703 05:00:33.631564   36018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 05:00:33.631835   36018 main.go:141] libmachine: () Calling .GetMachineName
	I0703 05:00:33.632025   36018 main.go:141] libmachine: (multinode-718545) Calling .GetIP
	I0703 05:00:33.634213   36018 main.go:141] libmachine: (multinode-718545) DBG | domain multinode-718545 has defined MAC address 52:54:00:46:b4:17 in network mk-multinode-718545
	I0703 05:00:33.634517   36018 main.go:141] libmachine: (multinode-718545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b4:17", ip: ""} in network mk-multinode-718545: {Iface:virbr1 ExpiryTime:2024-07-03 05:58:07 +0000 UTC Type:0 Mac:52:54:00:46:b4:17 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-718545 Clientid:01:52:54:00:46:b4:17}
	I0703 05:00:33.634549   36018 main.go:141] libmachine: (multinode-718545) DBG | domain multinode-718545 has defined IP address 192.168.39.141 and MAC address 52:54:00:46:b4:17 in network mk-multinode-718545
	I0703 05:00:33.634640   36018 host.go:66] Checking if "multinode-718545" exists ...
	I0703 05:00:33.634915   36018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 05:00:33.634947   36018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 05:00:33.649285   36018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32925
	I0703 05:00:33.649648   36018 main.go:141] libmachine: () Calling .GetVersion
	I0703 05:00:33.650117   36018 main.go:141] libmachine: Using API Version  1
	I0703 05:00:33.650135   36018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 05:00:33.650445   36018 main.go:141] libmachine: () Calling .GetMachineName
	I0703 05:00:33.650591   36018 main.go:141] libmachine: (multinode-718545) Calling .DriverName
	I0703 05:00:33.650760   36018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 05:00:33.650788   36018 main.go:141] libmachine: (multinode-718545) Calling .GetSSHHostname
	I0703 05:00:33.653038   36018 main.go:141] libmachine: (multinode-718545) DBG | domain multinode-718545 has defined MAC address 52:54:00:46:b4:17 in network mk-multinode-718545
	I0703 05:00:33.653418   36018 main.go:141] libmachine: (multinode-718545) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:b4:17", ip: ""} in network mk-multinode-718545: {Iface:virbr1 ExpiryTime:2024-07-03 05:58:07 +0000 UTC Type:0 Mac:52:54:00:46:b4:17 Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:multinode-718545 Clientid:01:52:54:00:46:b4:17}
	I0703 05:00:33.653448   36018 main.go:141] libmachine: (multinode-718545) DBG | domain multinode-718545 has defined IP address 192.168.39.141 and MAC address 52:54:00:46:b4:17 in network mk-multinode-718545
	I0703 05:00:33.653532   36018 main.go:141] libmachine: (multinode-718545) Calling .GetSSHPort
	I0703 05:00:33.653688   36018 main.go:141] libmachine: (multinode-718545) Calling .GetSSHKeyPath
	I0703 05:00:33.653824   36018 main.go:141] libmachine: (multinode-718545) Calling .GetSSHUsername
	I0703 05:00:33.653929   36018 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19184-3680/.minikube/machines/multinode-718545/id_rsa Username:docker}
	I0703 05:00:33.730812   36018 ssh_runner.go:195] Run: systemctl --version
	I0703 05:00:33.736701   36018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 05:00:33.750264   36018 kubeconfig.go:125] found "multinode-718545" server: "https://192.168.39.141:8443"
	I0703 05:00:33.750286   36018 api_server.go:166] Checking apiserver status ...
	I0703 05:00:33.750317   36018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 05:00:33.763229   36018 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup
	W0703 05:00:33.772664   36018 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0703 05:00:33.772719   36018 ssh_runner.go:195] Run: ls
	I0703 05:00:33.776994   36018 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I0703 05:00:33.781283   36018 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I0703 05:00:33.781299   36018 status.go:422] multinode-718545 apiserver status = Running (err=<nil>)
	I0703 05:00:33.781309   36018 status.go:257] multinode-718545 status: &{Name:multinode-718545 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 05:00:33.781328   36018 status.go:255] checking status of multinode-718545-m02 ...
	I0703 05:00:33.781602   36018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 05:00:33.781631   36018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 05:00:33.796689   36018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38373
	I0703 05:00:33.797107   36018 main.go:141] libmachine: () Calling .GetVersion
	I0703 05:00:33.797544   36018 main.go:141] libmachine: Using API Version  1
	I0703 05:00:33.797565   36018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 05:00:33.797841   36018 main.go:141] libmachine: () Calling .GetMachineName
	I0703 05:00:33.798017   36018 main.go:141] libmachine: (multinode-718545-m02) Calling .GetState
	I0703 05:00:33.799436   36018 status.go:330] multinode-718545-m02 host status = "Running" (err=<nil>)
	I0703 05:00:33.799452   36018 host.go:66] Checking if "multinode-718545-m02" exists ...
	I0703 05:00:33.799749   36018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 05:00:33.799791   36018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 05:00:33.814154   36018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44345
	I0703 05:00:33.814543   36018 main.go:141] libmachine: () Calling .GetVersion
	I0703 05:00:33.814978   36018 main.go:141] libmachine: Using API Version  1
	I0703 05:00:33.814997   36018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 05:00:33.815280   36018 main.go:141] libmachine: () Calling .GetMachineName
	I0703 05:00:33.815614   36018 main.go:141] libmachine: (multinode-718545-m02) Calling .GetIP
	I0703 05:00:33.818222   36018 main.go:141] libmachine: (multinode-718545-m02) DBG | domain multinode-718545-m02 has defined MAC address 52:54:00:94:36:00 in network mk-multinode-718545
	I0703 05:00:33.818595   36018 main.go:141] libmachine: (multinode-718545-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:36:00", ip: ""} in network mk-multinode-718545: {Iface:virbr1 ExpiryTime:2024-07-03 05:59:09 +0000 UTC Type:0 Mac:52:54:00:94:36:00 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-718545-m02 Clientid:01:52:54:00:94:36:00}
	I0703 05:00:33.818631   36018 main.go:141] libmachine: (multinode-718545-m02) DBG | domain multinode-718545-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:36:00 in network mk-multinode-718545
	I0703 05:00:33.818781   36018 host.go:66] Checking if "multinode-718545-m02" exists ...
	I0703 05:00:33.819067   36018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 05:00:33.819123   36018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 05:00:33.833359   36018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37905
	I0703 05:00:33.833689   36018 main.go:141] libmachine: () Calling .GetVersion
	I0703 05:00:33.834106   36018 main.go:141] libmachine: Using API Version  1
	I0703 05:00:33.834126   36018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 05:00:33.834459   36018 main.go:141] libmachine: () Calling .GetMachineName
	I0703 05:00:33.834620   36018 main.go:141] libmachine: (multinode-718545-m02) Calling .DriverName
	I0703 05:00:33.834772   36018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 05:00:33.834791   36018 main.go:141] libmachine: (multinode-718545-m02) Calling .GetSSHHostname
	I0703 05:00:33.837563   36018 main.go:141] libmachine: (multinode-718545-m02) DBG | domain multinode-718545-m02 has defined MAC address 52:54:00:94:36:00 in network mk-multinode-718545
	I0703 05:00:33.837957   36018 main.go:141] libmachine: (multinode-718545-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:36:00", ip: ""} in network mk-multinode-718545: {Iface:virbr1 ExpiryTime:2024-07-03 05:59:09 +0000 UTC Type:0 Mac:52:54:00:94:36:00 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-718545-m02 Clientid:01:52:54:00:94:36:00}
	I0703 05:00:33.837985   36018 main.go:141] libmachine: (multinode-718545-m02) DBG | domain multinode-718545-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:94:36:00 in network mk-multinode-718545
	I0703 05:00:33.838134   36018 main.go:141] libmachine: (multinode-718545-m02) Calling .GetSSHPort
	I0703 05:00:33.838315   36018 main.go:141] libmachine: (multinode-718545-m02) Calling .GetSSHKeyPath
	I0703 05:00:33.838474   36018 main.go:141] libmachine: (multinode-718545-m02) Calling .GetSSHUsername
	I0703 05:00:33.838573   36018 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19184-3680/.minikube/machines/multinode-718545-m02/id_rsa Username:docker}
	I0703 05:00:33.918642   36018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 05:00:33.932292   36018 status.go:257] multinode-718545-m02 status: &{Name:multinode-718545-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0703 05:00:33.932332   36018 status.go:255] checking status of multinode-718545-m03 ...
	I0703 05:00:33.932650   36018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 05:00:33.932702   36018 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 05:00:33.947520   36018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36781
	I0703 05:00:33.947957   36018 main.go:141] libmachine: () Calling .GetVersion
	I0703 05:00:33.948432   36018 main.go:141] libmachine: Using API Version  1
	I0703 05:00:33.948453   36018 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 05:00:33.948755   36018 main.go:141] libmachine: () Calling .GetMachineName
	I0703 05:00:33.948929   36018 main.go:141] libmachine: (multinode-718545-m03) Calling .GetState
	I0703 05:00:33.950394   36018 status.go:330] multinode-718545-m03 host status = "Stopped" (err=<nil>)
	I0703 05:00:33.950407   36018 status.go:343] host is not running, skipping remaining checks
	I0703 05:00:33.950413   36018 status.go:257] multinode-718545-m03 status: &{Name:multinode-718545-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.11s)
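Note that the `Non-zero exit … exit status 7` lines above are expected here: after stopping m03, `minikube status` still prints per-node status but signals the degraded cluster through a nonzero exit code. A sketch of a caller that distinguishes "all running" from "degraded", using a stub function in place of the real binary so it runs anywhere (the stub and its hard-coded return code are illustrative; only "nonzero means something is stopped" is assumed):

```shell
# Stub standing in for `minikube -p <profile> status`: prints status, exits 7.
fake_status() {
  printf 'multinode-718545-m03\nhost: Stopped\n'
  return 7
}

if fake_status >/dev/null; then
  state=all-running
else
  rc=$?            # capture the exit code of the status command
  state="degraded (exit $rc)"
fi
echo "$state"
```

This is the same shape the test relies on: it asserts exit status 7 while still parsing the stdout block for the per-node `host:` / `kubelet:` fields.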

                                                
                                    
TestMultiNode/serial/StartAfterStop (24.83s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-718545 node start m03 -v=7 --alsologtostderr: (24.22956257s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (24.83s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (291.78s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-718545
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-718545
E0703 05:02:07.490838   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-718545: (3m4.19708822s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-718545 --wait=true -v=8 --alsologtostderr
E0703 05:04:04.445567   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 05:05:26.340717   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-718545 --wait=true -v=8 --alsologtostderr: (1m47.50483107s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-718545
--- PASS: TestMultiNode/serial/RestartKeepsNodes (291.78s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.27s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-718545 node delete m03: (1.766757449s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.27s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (183.15s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 stop
E0703 05:08:29.386180   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-718545 stop: (3m2.984970338s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-718545 status: exit status 7 (81.963497ms)

                                                
                                                
-- stdout --
	multinode-718545
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-718545-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-718545 status --alsologtostderr: exit status 7 (81.755468ms)

                                                
                                                
-- stdout --
	multinode-718545
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-718545-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 05:08:55.935352   38592 out.go:291] Setting OutFile to fd 1 ...
	I0703 05:08:55.935596   38592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 05:08:55.935605   38592 out.go:304] Setting ErrFile to fd 2...
	I0703 05:08:55.935609   38592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 05:08:55.935780   38592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
	I0703 05:08:55.935927   38592 out.go:298] Setting JSON to false
	I0703 05:08:55.935943   38592 mustload.go:65] Loading cluster: multinode-718545
	I0703 05:08:55.936049   38592 notify.go:220] Checking for updates...
	I0703 05:08:55.936274   38592 config.go:182] Loaded profile config "multinode-718545": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0703 05:08:55.936286   38592 status.go:255] checking status of multinode-718545 ...
	I0703 05:08:55.936650   38592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 05:08:55.936716   38592 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 05:08:55.956756   38592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44387
	I0703 05:08:55.957221   38592 main.go:141] libmachine: () Calling .GetVersion
	I0703 05:08:55.957746   38592 main.go:141] libmachine: Using API Version  1
	I0703 05:08:55.957766   38592 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 05:08:55.958096   38592 main.go:141] libmachine: () Calling .GetMachineName
	I0703 05:08:55.958288   38592 main.go:141] libmachine: (multinode-718545) Calling .GetState
	I0703 05:08:55.959843   38592 status.go:330] multinode-718545 host status = "Stopped" (err=<nil>)
	I0703 05:08:55.959857   38592 status.go:343] host is not running, skipping remaining checks
	I0703 05:08:55.959875   38592 status.go:257] multinode-718545 status: &{Name:multinode-718545 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 05:08:55.959901   38592 status.go:255] checking status of multinode-718545-m02 ...
	I0703 05:08:55.960181   38592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0703 05:08:55.960219   38592 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 05:08:55.974760   38592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35441
	I0703 05:08:55.975099   38592 main.go:141] libmachine: () Calling .GetVersion
	I0703 05:08:55.975604   38592 main.go:141] libmachine: Using API Version  1
	I0703 05:08:55.975638   38592 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 05:08:55.975968   38592 main.go:141] libmachine: () Calling .GetMachineName
	I0703 05:08:55.976147   38592 main.go:141] libmachine: (multinode-718545-m02) Calling .GetState
	I0703 05:08:55.977518   38592 status.go:330] multinode-718545-m02 host status = "Stopped" (err=<nil>)
	I0703 05:08:55.977531   38592 status.go:343] host is not running, skipping remaining checks
	I0703 05:08:55.977537   38592 status.go:257] multinode-718545-m02 status: &{Name:multinode-718545-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.15s)

TestMultiNode/serial/RestartMultiNode (82.13s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-718545 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0703 05:09:04.445133   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-718545 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m21.620495919s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-718545 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (82.13s)
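The `kubectl get nodes -o "go-template=…"` check above extracts each node's Ready condition status. For illustration only, the equivalent check can be sketched in Python against `kubectl get nodes -o json`-shaped output (the payload below is fabricated for the example, not taken from this run):

```python
import json

def ready_statuses(nodes_json):
    """Return the status of the Ready condition for every node, mirroring
    the go-template: {{range .items}}{{range .status.conditions}}
    {{if eq .type "Ready"}}{{.status}}{{end}}{{end}}{{end}}"""
    doc = json.loads(nodes_json)
    statuses = []
    for node in doc.get("items", []):
        for cond in node.get("status", {}).get("conditions", []):
            if cond.get("type") == "Ready":
                statuses.append(cond.get("status"))
    return statuses

# Fabricated two-node payload in the shape `kubectl get nodes -o json` emits.
sample = json.dumps({
    "items": [
        {"status": {"conditions": [
            {"type": "MemoryPressure", "status": "False"},
            {"type": "Ready", "status": "True"},
        ]}},
        {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    ]
})

print(ready_statuses(sample))  # a healthy two-node run yields ['True', 'True']
```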

TestMultiNode/serial/ValidateNameConflict (42.61s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-718545
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-718545-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-718545-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (57.882063ms)

-- stdout --
	* [multinode-718545-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19184
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19184-3680/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19184-3680/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-718545-m02' is duplicated with machine name 'multinode-718545-m02' in profile 'multinode-718545'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-718545-m03 --driver=kvm2  --container-runtime=containerd
E0703 05:10:26.341287   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-718545-m03 --driver=kvm2  --container-runtime=containerd: (41.30952056s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-718545
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-718545: exit status 80 (201.951345ms)

-- stdout --
	* Adding node m03 to cluster multinode-718545 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-718545-m03 already exists in multinode-718545-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-718545-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-718545-m03: (1.00105282s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.61s)
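The exit-14 earlier in this test comes from minikube's profile-name validation: a new profile may not reuse a machine name already owned by an existing multi-node profile. A minimal sketch of that rule (hypothetical helper, not minikube's actual implementation):

```python
def conflicting_profile(new_profile, existing):
    """Return the name of the existing profile that clashes with
    new_profile, or None if the name is free. `existing` maps a profile
    name to its machine names; e.g. the two-node cluster 'multinode-718545'
    owns machines ['multinode-718545', 'multinode-718545-m02']."""
    for profile, machines in existing.items():
        if new_profile == profile or new_profile in machines:
            return profile
    return None

profiles = {"multinode-718545": ["multinode-718545", "multinode-718545-m02"]}

print(conflicting_profile("multinode-718545-m02", profiles))  # clashes
print(conflicting_profile("fresh-profile", profiles))         # free
```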

TestPreload (313.57s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-407301 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-407301 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m36.190644933s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-407301 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-407301 image pull gcr.io/k8s-minikube/busybox: (2.407522234s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-407301
E0703 05:14:04.448596   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-407301: (1m31.451080188s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-407301 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0703 05:15:26.341918   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-407301 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m2.253843327s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-407301 image list
helpers_test.go:175: Cleaning up "test-preload-407301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-407301
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-407301: (1.045362608s)
--- PASS: TestPreload (313.57s)

TestScheduledStopUnix (114.2s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-970266 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-970266 --memory=2048 --driver=kvm2  --container-runtime=containerd: (42.617758953s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-970266 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-970266 -n scheduled-stop-970266
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-970266 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-970266 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-970266 -n scheduled-stop-970266
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-970266
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-970266 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-970266
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-970266: exit status 7 (64.23806ms)

-- stdout --
	scheduled-stop-970266
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-970266 -n scheduled-stop-970266
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-970266 -n scheduled-stop-970266: exit status 7 (63.617458ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-970266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-970266
--- PASS: TestScheduledStopUnix (114.20s)

TestRunningBinaryUpgrade (181.54s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2983926100 start -p running-upgrade-541478 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2983926100 start -p running-upgrade-541478 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m2.634191129s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-541478 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-541478 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m55.083704087s)
helpers_test.go:175: Cleaning up "running-upgrade-541478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-541478
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-541478: (1.17785513s)
--- PASS: TestRunningBinaryUpgrade (181.54s)

TestKubernetesUpgrade (178.25s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-002074 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-002074 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (59.860782965s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-002074
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-002074: (1.533227764s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-002074 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-002074 status --format={{.Host}}: exit status 7 (71.965934ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-002074 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-002074 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m15.243288072s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-002074 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-002074 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-002074 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (82.980004ms)

-- stdout --
	* [kubernetes-upgrade-002074] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19184
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19184-3680/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19184-3680/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-002074
	    minikube start -p kubernetes-upgrade-002074 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0020742 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.2, by running:
	    
	    minikube start -p kubernetes-upgrade-002074 --kubernetes-version=v1.30.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-002074 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-002074 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (40.237140481s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-002074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-002074
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-002074: (1.163478558s)
--- PASS: TestKubernetesUpgrade (178.25s)
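The exit-106 in this test reflects minikube's rule that an existing cluster's Kubernetes version may only move forward. A minimal sketch of such a version gate (illustrative only; minikube's real check lives in its start validation, and the function names here are invented):

```python
def version_tuple(v):
    """Parse a version string like 'v1.30.2' into (1, 30, 2) so versions
    compare in the expected order."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def check_requested_version(current, requested):
    """Allow same-version restarts and upgrades; refuse downgrades,
    mirroring the K8S_DOWNGRADE_UNSUPPORTED exit seen above."""
    if version_tuple(requested) < version_tuple(current):
        return (f"K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade "
                f"{current} cluster to {requested}")
    return "ok"

print(check_requested_version("v1.30.2", "v1.20.0"))  # refused
print(check_requested_version("v1.20.0", "v1.30.2"))  # upgrade allowed
```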

TestNetworkPlugins/group/false (2.73s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-240988 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-240988 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (97.432067ms)

-- stdout --
	* [false-240988] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19184
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19184-3680/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19184-3680/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0703 05:18:12.833847   43621 out.go:291] Setting OutFile to fd 1 ...
	I0703 05:18:12.834204   43621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 05:18:12.834220   43621 out.go:304] Setting ErrFile to fd 2...
	I0703 05:18:12.834227   43621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 05:18:12.834658   43621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19184-3680/.minikube/bin
	I0703 05:18:12.835497   43621 out.go:298] Setting JSON to false
	I0703 05:18:12.836370   43621 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3637,"bootTime":1719980256,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 05:18:12.836426   43621 start.go:139] virtualization: kvm guest
	I0703 05:18:12.838230   43621 out.go:177] * [false-240988] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 05:18:12.839957   43621 out.go:177]   - MINIKUBE_LOCATION=19184
	I0703 05:18:12.839961   43621 notify.go:220] Checking for updates...
	I0703 05:18:12.841425   43621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 05:18:12.842776   43621 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19184-3680/kubeconfig
	I0703 05:18:12.843992   43621 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19184-3680/.minikube
	I0703 05:18:12.845205   43621 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 05:18:12.846447   43621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 05:18:12.848008   43621 config.go:182] Loaded profile config "force-systemd-flag-099672": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0703 05:18:12.848113   43621 config.go:182] Loaded profile config "kubernetes-upgrade-002074": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0703 05:18:12.848215   43621 config.go:182] Loaded profile config "offline-containerd-993646": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0703 05:18:12.848320   43621 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 05:18:12.882972   43621 out.go:177] * Using the kvm2 driver based on user configuration
	I0703 05:18:12.884253   43621 start.go:297] selected driver: kvm2
	I0703 05:18:12.884266   43621 start.go:901] validating driver "kvm2" against <nil>
	I0703 05:18:12.884276   43621 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 05:18:12.886445   43621 out.go:177] 
	W0703 05:18:12.887681   43621 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0703 05:18:12.888875   43621 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-240988 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-240988

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-240988

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-240988

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-240988

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-240988

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-240988

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-240988

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-240988

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-240988

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-240988

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-240988

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-240988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-240988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-240988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-240988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-240988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-240988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-240988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-240988" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> k8s: describe kube-proxy daemon set:
error: context "false-240988" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-240988" does not exist

>>> k8s: kube-proxy logs:
error: context "false-240988" does not exist

>>> host: kubelet daemon status:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: kubelet daemon config:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> k8s: kubelet logs:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-240988

>>> host: docker daemon status:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: docker daemon config:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: /etc/docker/daemon.json:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: docker system info:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: cri-docker daemon status:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: cri-docker daemon config:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: cri-dockerd version:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: containerd daemon status:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: containerd daemon config:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: /etc/containerd/config.toml:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: containerd config dump:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: crio daemon status:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: crio daemon config:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: /etc/crio:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

>>> host: crio config:
* Profile "false-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240988"

----------------------- debugLogs end: false-240988 [took: 2.506153001s] --------------------------------
helpers_test.go:175: Cleaning up "false-240988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-240988
--- PASS: TestNetworkPlugins/group/false (2.73s)

TestStoppedBinaryUpgrade/Setup (2.66s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.66s)

TestStoppedBinaryUpgrade/Upgrade (164.64s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.633425535 start -p stopped-upgrade-871870 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0703 05:20:26.341175   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.633425535 start -p stopped-upgrade-871870 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m24.912649663s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.633425535 -p stopped-upgrade-871870 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.633425535 -p stopped-upgrade-871870 stop: (1.368152745s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-871870 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-871870 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m18.355557302s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (164.64s)

TestPause/serial/Start (126.02s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-039109 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-039109 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m6.019652671s)
--- PASS: TestPause/serial/Start (126.02s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-871870
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.87s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-715351 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-715351 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (57.880745ms)

-- stdout --
	* [NoKubernetes-715351] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19184
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19184-3680/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19184-3680/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

TestNoKubernetes/serial/StartWithK8s (48.14s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-715351 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-715351 --driver=kvm2  --container-runtime=containerd: (47.880433251s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-715351 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.14s)

TestPause/serial/SecondStartNoReconfiguration (57.05s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-039109 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-039109 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (57.024568124s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (57.05s)

TestNoKubernetes/serial/StartWithStopK8s (39.88s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-715351 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-715351 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (38.813578994s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-715351 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-715351 status -o json: exit status 2 (307.696713ms)

-- stdout --
	{"Name":"NoKubernetes-715351","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-715351
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.88s)

TestNetworkPlugins/group/auto/Start (100.49s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-240988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-240988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m40.488550736s)
--- PASS: TestNetworkPlugins/group/auto/Start (100.49s)

TestPause/serial/Pause (0.81s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-039109 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.81s)

TestPause/serial/VerifyStatus (0.29s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-039109 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-039109 --output=json --layout=cluster: exit status 2 (290.16819ms)

-- stdout --
	{"Name":"pause-039109","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-039109","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)

TestPause/serial/Unpause (1.02s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-039109 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-039109 --alsologtostderr -v=5: (1.018736406s)
--- PASS: TestPause/serial/Unpause (1.02s)

TestPause/serial/PauseAgain (1.55s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-039109 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-039109 --alsologtostderr -v=5: (1.547502725s)
--- PASS: TestPause/serial/PauseAgain (1.55s)

TestPause/serial/DeletePaused (0.89s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-039109 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.89s)

TestPause/serial/VerifyDeletedResources (17.97s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (17.973494256s)
--- PASS: TestPause/serial/VerifyDeletedResources (17.97s)

TestNetworkPlugins/group/kindnet/Start (65.65s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-240988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-240988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m5.649795582s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.65s)

TestNoKubernetes/serial/Start (51.49s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-715351 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-715351 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (51.484922272s)
--- PASS: TestNoKubernetes/serial/Start (51.49s)

TestNetworkPlugins/group/calico/Start (129.69s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-240988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
E0703 05:25:09.387636   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-240988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (2m9.687536287s)
--- PASS: TestNetworkPlugins/group/calico/Start (129.69s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-715351 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-715351 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.2812ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (1.08s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.08s)

TestNoKubernetes/serial/Stop (1.34s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-715351
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-715351: (1.338994431s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

TestNoKubernetes/serial/StartNoArgs (43.99s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-715351 --driver=kvm2  --container-runtime=containerd
E0703 05:25:26.341163   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-715351 --driver=kvm2  --container-runtime=containerd: (43.988630507s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (43.99s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-m2tjm" [2f69ede7-c273-4fe1-9b00-fc5a436cad45] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005088354s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-240988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-240988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qrqdq" [1af2d47b-888e-40dd-8924-7764d48f81e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-qrqdq" [1af2d47b-888e-40dd-8924-7764d48f81e1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004820537s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.20s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-240988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (12.34s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-240988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hllld" [fb121b28-1c10-4405-bf71-b9e320cff064] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hllld" [fb121b28-1c10-4405-bf71-b9e320cff064] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.00492799s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.34s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-240988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-240988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.34s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-240988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-240988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-240988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-240988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (84.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-240988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-240988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m24.089369539s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (84.09s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-715351 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-715351 "sudo systemctl is-active --quiet service kubelet": exit status 1 (219.288355ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestNetworkPlugins/group/enable-default-cni/Start (96.06s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-240988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-240988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m36.056375931s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (96.06s)

TestNetworkPlugins/group/flannel/Start (134.02s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-240988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-240988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (2m14.015525253s)
--- PASS: TestNetworkPlugins/group/flannel/Start (134.02s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-87wlf" [e03da2b2-b274-412e-ae0f-03d51846f2eb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005588486s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-240988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

TestNetworkPlugins/group/calico/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-240988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-6kdqv" [93cc66e6-686c-409f-9f3a-a84b1492df3c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-6kdqv" [93cc66e6-686c-409f-9f3a-a84b1492df3c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004425812s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.21s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-240988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-240988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-240988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (77.05s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-240988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-240988 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m17.054734573s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.05s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-240988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-240988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jwvpq" [e5da5810-0f5b-4db3-8698-952af3c5a6e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jwvpq" [e5da5810-0f5b-4db3-8698-952af3c5a6e5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005401712s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.42s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-240988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-240988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-240988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-240988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-240988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gslqx" [007caffc-0645-471f-945e-0c4975fe8d9b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gslqx" [007caffc-0645-471f-945e-0c4975fe8d9b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004666976s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.22s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-240988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-240988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-240988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestStartStop/group/old-k8s-version/serial/FirstStart (170.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-354485 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-354485 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m50.486093159s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (170.49s)

TestStartStop/group/no-preload/serial/FirstStart (131s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-886751 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-886751 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.2: (2m10.997274148s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (131.00s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-snprv" [5d1a6070-5eda-4a67-9857-61da44fcd37b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004741297s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-240988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (11.91s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-240988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gw6cl" [5a147e8e-0f71-4b75-aebd-d6a5b28d2e00] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gw6cl" [5a147e8e-0f71-4b75-aebd-d6a5b28d2e00] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004093548s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.91s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-240988 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-240988 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kgkws" [e83ed0cf-8d56-4c1a-9a75-546f34ab4faa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-kgkws" [e83ed0cf-8d56-4c1a-9a75-546f34ab4faa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005362869s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

TestNetworkPlugins/group/bridge/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-240988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.29s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-240988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-240988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-240988 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-240988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-240988 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)
E0703 05:37:45.086482   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (105.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-804555 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.2
E0703 05:29:04.445245   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-804555 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.2: (1m45.882147357s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (105.88s)

TestStartStop/group/newest-cni/serial/FirstStart (85.88s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-286411 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-286411 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.2: (1m25.881969936s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (85.88s)

TestStartStop/group/no-preload/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-886751 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f43586bd-dc87-46d3-987e-a3a32f3d87ec] Pending
helpers_test.go:344: "busybox" [f43586bd-dc87-46d3-987e-a3a32f3d87ec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0703 05:30:26.341402   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f43586bd-dc87-46d3-987e-a3a32f3d87ec] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.006471958s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-886751 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-286411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-286411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.079005098s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-886751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-886751 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.029659489s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-886751 describe deploy/metrics-server -n kube-system
E0703 05:30:33.821647   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
E0703 05:30:33.827395   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
E0703 05:30:33.837701   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
E0703 05:30:33.857995   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/newest-cni/serial/Stop (2.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-286411 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-286411 --alsologtostderr -v=3: (2.328327033s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.33s)

TestStartStop/group/no-preload/serial/Stop (91.75s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-886751 --alsologtostderr -v=3
E0703 05:30:33.898662   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
E0703 05:30:33.979580   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
E0703 05:30:34.139962   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
E0703 05:30:34.460296   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
E0703 05:30:35.100915   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-886751 --alsologtostderr -v=3: (1m31.753590007s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.75s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-286411 -n newest-cni-286411
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-286411 -n newest-cni-286411: exit status 7 (60.340083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-286411 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (31.75s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-286411 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.2
E0703 05:30:36.381579   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
E0703 05:30:38.941769   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
E0703 05:30:44.062886   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
E0703 05:30:46.373028   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
E0703 05:30:46.378361   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
E0703 05:30:46.388650   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
E0703 05:30:46.408977   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
E0703 05:30:46.449289   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
E0703 05:30:46.529656   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
E0703 05:30:46.690067   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
E0703 05:30:47.010923   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
E0703 05:30:47.651071   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
E0703 05:30:48.931796   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-286411 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.2: (31.514615444s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-286411 -n newest-cni-286411
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.75s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-804555 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1740e4bc-5743-4cf5-bffa-dfd372d3176e] Pending
helpers_test.go:344: "busybox" [1740e4bc-5743-4cf5-bffa-dfd372d3176e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0703 05:30:51.491982   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
helpers_test.go:344: "busybox" [1740e4bc-5743-4cf5-bffa-dfd372d3176e] Running
E0703 05:30:54.303299   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003658764s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-804555 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-354485 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ca03ce1c-96b6-44a6-8bbc-359377653ee5] Pending
helpers_test.go:344: "busybox" [ca03ce1c-96b6-44a6-8bbc-359377653ee5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ca03ce1c-96b6-44a6-8bbc-359377653ee5] Running
E0703 05:30:56.612702   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004308593s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-354485 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-804555 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-804555 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-354485 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-354485 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.053501129s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-354485 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-804555 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-804555 --alsologtostderr -v=3: (1m32.458142114s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.46s)

TestStartStop/group/old-k8s-version/serial/Stop (91.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-354485 --alsologtostderr -v=3
E0703 05:31:06.853905   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-354485 --alsologtostderr -v=3: (1m31.781399152s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.78s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-286411 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/newest-cni/serial/Pause (2.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-286411 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-286411 -n newest-cni-286411
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-286411 -n newest-cni-286411: exit status 2 (229.796775ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-286411 -n newest-cni-286411
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-286411 -n newest-cni-286411: exit status 2 (224.210746ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-286411 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-286411 -n newest-cni-286411
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-286411 -n newest-cni-286411
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.21s)

TestStartStop/group/embed-certs/serial/FirstStart (58.53s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-933907 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.2
E0703 05:31:14.784172   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
E0703 05:31:27.334840   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
E0703 05:31:43.710717   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
E0703 05:31:43.716040   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
E0703 05:31:43.726310   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
E0703 05:31:43.746604   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
E0703 05:31:43.786938   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
E0703 05:31:43.867545   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
E0703 05:31:44.027948   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
E0703 05:31:44.348686   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
E0703 05:31:44.989774   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
E0703 05:31:46.270514   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
E0703 05:31:48.831542   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
E0703 05:31:53.952042   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
E0703 05:31:55.745109   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
E0703 05:32:04.192493   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-933907 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.2: (58.530021698s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (58.53s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-886751 -n no-preload-886751
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-886751 -n no-preload-886751: exit status 7 (67.622712ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-886751 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (317.56s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-886751 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.2
E0703 05:32:08.295520   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-886751 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.2: (5m17.291382197s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-886751 -n no-preload-886751
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (317.56s)

TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-933907 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d73b7166-0280-4e49-8f5d-0141121082fc] Pending
helpers_test.go:344: "busybox" [d73b7166-0280-4e49-8f5d-0141121082fc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d73b7166-0280-4e49-8f5d-0141121082fc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004223904s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-933907 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-933907 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-933907 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/embed-certs/serial/Stop (91.63s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-933907 --alsologtostderr -v=3
E0703 05:32:24.672869   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
E0703 05:32:31.474868   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
E0703 05:32:31.480135   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
E0703 05:32:31.490643   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
E0703 05:32:31.510902   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
E0703 05:32:31.551329   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
E0703 05:32:31.631543   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
E0703 05:32:31.791982   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
E0703 05:32:32.112569   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-933907 --alsologtostderr -v=3: (1m31.632726733s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.63s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-804555 -n default-k8s-diff-port-804555
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-804555 -n default-k8s-diff-port-804555: exit status 7 (63.578289ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-804555 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (319.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-804555 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.2
E0703 05:32:32.752873   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-804555 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.2: (5m19.369375844s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-804555 -n default-k8s-diff-port-804555
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (319.62s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-354485 -n old-k8s-version-354485
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-354485 -n old-k8s-version-354485: exit status 7 (63.686601ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-354485 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (450.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-354485 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0703 05:32:34.033406   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
E0703 05:32:36.594267   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
E0703 05:32:41.715453   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
E0703 05:32:45.086024   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory
E0703 05:32:45.091265   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory
E0703 05:32:45.101503   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory
E0703 05:32:45.121864   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory
E0703 05:32:45.162130   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory
E0703 05:32:45.242477   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory
E0703 05:32:45.402853   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory
E0703 05:32:45.723554   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory
E0703 05:32:46.364613   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory
E0703 05:32:47.645248   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory
E0703 05:32:50.206264   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory
E0703 05:32:51.956554   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
E0703 05:32:55.326484   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory
E0703 05:33:05.567344   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory
E0703 05:33:05.633594   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
E0703 05:33:12.437219   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
E0703 05:33:17.666091   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
E0703 05:33:26.048477   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory
E0703 05:33:28.860730   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
E0703 05:33:28.865984   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
E0703 05:33:28.876304   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
E0703 05:33:28.896628   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
E0703 05:33:28.936901   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
E0703 05:33:29.017703   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
E0703 05:33:29.178187   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
E0703 05:33:29.499272   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
E0703 05:33:30.139968   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
E0703 05:33:30.216193   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
E0703 05:33:31.420852   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
E0703 05:33:33.981297   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
E0703 05:33:36.123683   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
E0703 05:33:36.128923   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
E0703 05:33:36.139246   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
E0703 05:33:36.159599   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
E0703 05:33:36.199932   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
E0703 05:33:36.280247   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
E0703 05:33:36.440642   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
E0703 05:33:36.761358   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
E0703 05:33:37.401796   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
E0703 05:33:38.682935   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
E0703 05:33:39.101535   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
E0703 05:33:41.243586   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
E0703 05:33:46.364496   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
E0703 05:33:49.342146   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-354485 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (7m30.02272177s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-354485 -n old-k8s-version-354485
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (450.28s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-933907 -n embed-certs-933907
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-933907 -n embed-certs-933907: exit status 7 (68.617461ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-933907 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (297.55s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-933907 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.2
E0703 05:33:53.397693   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
E0703 05:33:56.605030   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
E0703 05:34:04.445227   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 05:34:07.008756   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory
E0703 05:34:09.822298   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
E0703 05:34:17.085724   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
E0703 05:34:27.554741   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
E0703 05:34:50.783369   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
E0703 05:34:58.046722   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
E0703 05:35:15.318415   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
E0703 05:35:26.340540   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/functional-502505/client.crt: no such file or directory
E0703 05:35:27.492677   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/addons-832832/client.crt: no such file or directory
E0703 05:35:28.929802   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/enable-default-cni-240988/client.crt: no such file or directory
E0703 05:35:33.822255   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
E0703 05:35:46.374038   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
E0703 05:36:01.506580   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/kindnet-240988/client.crt: no such file or directory
E0703 05:36:12.703586   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
E0703 05:36:14.056794   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/auto-240988/client.crt: no such file or directory
E0703 05:36:19.967692   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
E0703 05:36:43.710712   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
E0703 05:37:11.395519   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/calico-240988/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-933907 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.2: (4m57.308351987s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-933907 -n embed-certs-933907
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (297.55s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-94n2c" [327074ff-9dae-4c1e-bf01-d9b87442a203] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005159583s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-94n2c" [327074ff-9dae-4c1e-bf01-d9b87442a203] Running
E0703 05:37:31.474709   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004063162s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-886751 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-886751 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.6s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-886751 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-886751 -n no-preload-886751
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-886751 -n no-preload-886751: exit status 2 (235.131762ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-886751 -n no-preload-886751
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-886751 -n no-preload-886751: exit status 2 (244.697315ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-886751 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-886751 -n no-preload-886751
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-886751 -n no-preload-886751
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.60s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-4pvnc" [2adfe7e4-3338-4059-8755-f777115e5d25] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004223824s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-4pvnc" [2adfe7e4-3338-4059-8755-f777115e5d25] Running
E0703 05:37:59.159563   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/custom-flannel-240988/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004329527s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-804555 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-804555 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-804555 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-804555 -n default-k8s-diff-port-804555
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-804555 -n default-k8s-diff-port-804555: exit status 2 (244.579727ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-804555 -n default-k8s-diff-port-804555
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-804555 -n default-k8s-diff-port-804555: exit status 2 (250.439892ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-804555 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-804555 -n default-k8s-diff-port-804555
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-804555 -n default-k8s-diff-port-804555
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.55s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-bt28d" [a90aa30b-e911-4049-abf0-60529bd23fe2] Running
E0703 05:38:56.544658   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/flannel-240988/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004496615s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-bt28d" [a90aa30b-e911-4049-abf0-60529bd23fe2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004864148s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-933907 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-933907 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/embed-certs/serial/Pause (2.42s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-933907 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-933907 -n embed-certs-933907
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-933907 -n embed-certs-933907: exit status 2 (233.031324ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-933907 -n embed-certs-933907
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-933907 -n embed-certs-933907: exit status 2 (238.930783ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-933907 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-933907 -n embed-certs-933907
E0703 05:39:03.808860   10844 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19184-3680/.minikube/profiles/bridge-240988/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-933907 -n embed-certs-933907
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.42s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-rmrsh" [35f66e77-2696-4d5b-b200-e6b69dc2c960] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004281801s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-rmrsh" [35f66e77-2696-4d5b-b200-e6b69dc2c960] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005065393s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-354485 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-354485 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-354485 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-354485 -n old-k8s-version-354485
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-354485 -n old-k8s-version-354485: exit status 2 (230.623552ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-354485 -n old-k8s-version-354485
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-354485 -n old-k8s-version-354485: exit status 2 (227.740054ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-354485 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-354485 -n old-k8s-version-354485
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-354485 -n old-k8s-version-354485
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.33s)
Test skip (36/326)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.2/cached-images 0
15 TestDownloadOnly/v1.30.2/binaries 0
16 TestDownloadOnly/v1.30.2/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
252 TestNetworkPlugins/group/kubenet 2.77
260 TestNetworkPlugins/group/cilium 2.99
266 TestStartStop/group/disable-driver-mounts 0.2

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestDownloadOnly/v1.30.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (2.77s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-240988 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-240988

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-240988

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-240988

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-240988

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-240988

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-240988

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-240988

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-240988

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-240988

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-240988

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-240988

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-240988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-240988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-240988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-240988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-240988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-240988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-240988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-240988" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-240988" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-240988" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-240988" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: kubelet daemon config:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> k8s: kubelet logs:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-240988

>>> host: docker daemon status:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: docker daemon config:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: docker system info:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: cri-docker daemon status:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: cri-docker daemon config:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: cri-dockerd version:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: containerd daemon status:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: containerd daemon config:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: containerd config dump:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: crio daemon status:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: crio daemon config:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: /etc/crio:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

>>> host: crio config:
* Profile "kubenet-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240988"

----------------------- debugLogs end: kubenet-240988 [took: 2.645821689s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-240988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-240988
--- SKIP: TestNetworkPlugins/group/kubenet (2.77s)
TestNetworkPlugins/group/cilium (2.99s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626:
----------------------- debugLogs start: cilium-240988 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-240988

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-240988

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-240988

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-240988

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-240988

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-240988

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-240988

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-240988

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-240988

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-240988

>>> host: /etc/nsswitch.conf:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: /etc/hosts:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: /etc/resolv.conf:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-240988

>>> host: crictl pods:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: crictl containers:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> k8s: describe netcat deployment:
error: context "cilium-240988" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-240988" does not exist

>>> k8s: netcat logs:
error: context "cilium-240988" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-240988" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-240988" does not exist

>>> k8s: coredns logs:
error: context "cilium-240988" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-240988" does not exist

>>> k8s: api server logs:
error: context "cilium-240988" does not exist

>>> host: /etc/cni:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: ip a s:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: ip r s:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: iptables-save:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: iptables table nat:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-240988

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-240988

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-240988" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-240988" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-240988

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-240988

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-240988" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-240988" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-240988" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-240988" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-240988" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: kubelet daemon config:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> k8s: kubelet logs:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-240988

>>> host: docker daemon status:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: docker daemon config:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: docker system info:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: cri-docker daemon status:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: cri-docker daemon config:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: cri-dockerd version:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: containerd daemon status:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: containerd daemon config:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: containerd config dump:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: crio daemon status:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: crio daemon config:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: /etc/crio:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

>>> host: crio config:
* Profile "cilium-240988" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240988"

----------------------- debugLogs end: cilium-240988 [took: 2.866495982s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-240988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-240988
--- SKIP: TestNetworkPlugins/group/cilium (2.99s)
TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-826756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-826756
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)