Test Report: KVM_Linux_containerd 20602

a90248a4a931d52b681e38138304d5427e54b74a:2025-04-07:39037

Failed tests (1/329)

|-------|--------------------------------------|----------|
| Order |             Failed test              | Duration |
|-------|--------------------------------------|----------|
|    90 | TestFunctional/parallel/DashboardCmd |    5.24s |
|-------|--------------------------------------|----------|
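The failure can be retried by hand with the commands the test itself drove, shown below as a minimal reproduction sketch. It assumes the functional-233546 profile from this run is still available and reuses the port the test happened to pick; both commands are copied verbatim from the log that follows.

	# Ask minikube for the dashboard proxy URL (the step that produced no URL in this run):
	out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-233546 --alsologtostderr -v=1

	# minikube obtains that URL from a kubectl proxy child process; running it directly can show
	# why it exited immediately (HOST_KUBECTL_PROXY: readByteWithTimeout: EOF in this run):
	kubectl --context functional-233546 proxy --port 36195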
TestFunctional/parallel/DashboardCmd (5.24s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-233546 --alsologtostderr -v=1]
functional_test.go:935: output didn't produce a URL
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-233546 --alsologtostderr -v=1] ...
functional_test.go:927: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-233546 --alsologtostderr -v=1] stdout:
functional_test.go:927: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-233546 --alsologtostderr -v=1] stderr:
I0407 12:16:09.712228 1250704 out.go:345] Setting OutFile to fd 1 ...
I0407 12:16:09.713109 1250704 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:16:09.713150 1250704 out.go:358] Setting ErrFile to fd 2...
I0407 12:16:09.713176 1250704 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:16:09.713743 1250704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
I0407 12:16:09.714540 1250704 mustload.go:65] Loading cluster: functional-233546
I0407 12:16:09.714968 1250704 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:16:09.715328 1250704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:09.715382 1250704 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:09.732691 1250704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43555
I0407 12:16:09.733379 1250704 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:09.734066 1250704 main.go:141] libmachine: Using API Version  1
I0407 12:16:09.734094 1250704 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:09.734613 1250704 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:09.734875 1250704 main.go:141] libmachine: (functional-233546) Calling .GetState
I0407 12:16:09.736903 1250704 host.go:66] Checking if "functional-233546" exists ...
I0407 12:16:09.737369 1250704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:09.737433 1250704 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:09.754926 1250704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46679
I0407 12:16:09.755458 1250704 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:09.755971 1250704 main.go:141] libmachine: Using API Version  1
I0407 12:16:09.755995 1250704 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:09.756396 1250704 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:09.756625 1250704 main.go:141] libmachine: (functional-233546) Calling .DriverName
I0407 12:16:09.756820 1250704 api_server.go:166] Checking apiserver status ...
I0407 12:16:09.756890 1250704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 12:16:09.756927 1250704 main.go:141] libmachine: (functional-233546) Calling .GetSSHHostname
I0407 12:16:09.760034 1250704 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:09.760474 1250704 main.go:141] libmachine: (functional-233546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:83:b5", ip: ""} in network mk-functional-233546: {Iface:virbr1 ExpiryTime:2025-04-07 13:12:51 +0000 UTC Type:0 Mac:52:54:00:cf:83:b5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:functional-233546 Clientid:01:52:54:00:cf:83:b5}
I0407 12:16:09.760517 1250704 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined IP address 192.168.39.145 and MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:09.760677 1250704 main.go:141] libmachine: (functional-233546) Calling .GetSSHPort
I0407 12:16:09.760907 1250704 main.go:141] libmachine: (functional-233546) Calling .GetSSHKeyPath
I0407 12:16:09.761097 1250704 main.go:141] libmachine: (functional-233546) Calling .GetSSHUsername
I0407 12:16:09.761273 1250704 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1236688/.minikube/machines/functional-233546/id_rsa Username:docker}
I0407 12:16:09.876087 1250704 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4540/cgroup
W0407 12:16:09.903303 1250704 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4540/cgroup: Process exited with status 1
stdout:

stderr:
I0407 12:16:09.903369 1250704 ssh_runner.go:195] Run: ls
I0407 12:16:09.924918 1250704 api_server.go:253] Checking apiserver healthz at https://192.168.39.145:8441/healthz ...
I0407 12:16:09.929539 1250704 api_server.go:279] https://192.168.39.145:8441/healthz returned 200:
ok
W0407 12:16:09.929601 1250704 out.go:270] * Enabling dashboard ...
* Enabling dashboard ...
I0407 12:16:09.929827 1250704 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:16:09.929851 1250704 addons.go:69] Setting dashboard=true in profile "functional-233546"
I0407 12:16:09.929863 1250704 addons.go:238] Setting addon dashboard=true in "functional-233546"
I0407 12:16:09.929895 1250704 host.go:66] Checking if "functional-233546" exists ...
I0407 12:16:09.930324 1250704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:09.930376 1250704 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:09.947634 1250704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
I0407 12:16:09.948284 1250704 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:09.948962 1250704 main.go:141] libmachine: Using API Version  1
I0407 12:16:09.948985 1250704 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:09.949442 1250704 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:09.950419 1250704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:09.950491 1250704 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:09.967269 1250704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41457
I0407 12:16:09.967771 1250704 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:09.968214 1250704 main.go:141] libmachine: Using API Version  1
I0407 12:16:09.968238 1250704 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:09.968585 1250704 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:09.968782 1250704 main.go:141] libmachine: (functional-233546) Calling .GetState
I0407 12:16:09.970341 1250704 main.go:141] libmachine: (functional-233546) Calling .DriverName
I0407 12:16:09.972443 1250704 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0407 12:16:09.974121 1250704 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0407 12:16:09.975645 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0407 12:16:09.975669 1250704 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0407 12:16:09.975696 1250704 main.go:141] libmachine: (functional-233546) Calling .GetSSHHostname
I0407 12:16:09.979030 1250704 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:09.979535 1250704 main.go:141] libmachine: (functional-233546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:83:b5", ip: ""} in network mk-functional-233546: {Iface:virbr1 ExpiryTime:2025-04-07 13:12:51 +0000 UTC Type:0 Mac:52:54:00:cf:83:b5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:functional-233546 Clientid:01:52:54:00:cf:83:b5}
I0407 12:16:09.979566 1250704 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined IP address 192.168.39.145 and MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:09.979757 1250704 main.go:141] libmachine: (functional-233546) Calling .GetSSHPort
I0407 12:16:09.980028 1250704 main.go:141] libmachine: (functional-233546) Calling .GetSSHKeyPath
I0407 12:16:09.980193 1250704 main.go:141] libmachine: (functional-233546) Calling .GetSSHUsername
I0407 12:16:09.980349 1250704 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1236688/.minikube/machines/functional-233546/id_rsa Username:docker}
I0407 12:16:10.121650 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0407 12:16:10.121673 1250704 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0407 12:16:10.154775 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0407 12:16:10.154802 1250704 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0407 12:16:10.183661 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0407 12:16:10.183684 1250704 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0407 12:16:10.232604 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0407 12:16:10.232630 1250704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0407 12:16:10.260457 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0407 12:16:10.260491 1250704 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0407 12:16:10.300634 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0407 12:16:10.300680 1250704 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0407 12:16:10.327854 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0407 12:16:10.327885 1250704 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0407 12:16:10.360516 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0407 12:16:10.360540 1250704 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0407 12:16:10.381534 1250704 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0407 12:16:10.381563 1250704 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0407 12:16:10.404451 1250704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0407 12:16:11.694908 1250704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.290394476s)
I0407 12:16:11.695003 1250704 main.go:141] libmachine: Making call to close driver server
I0407 12:16:11.695020 1250704 main.go:141] libmachine: (functional-233546) Calling .Close
I0407 12:16:11.695350 1250704 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:16:11.695357 1250704 main.go:141] libmachine: (functional-233546) DBG | Closing plugin on server side
I0407 12:16:11.695371 1250704 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:16:11.695382 1250704 main.go:141] libmachine: Making call to close driver server
I0407 12:16:11.695394 1250704 main.go:141] libmachine: (functional-233546) Calling .Close
I0407 12:16:11.695636 1250704 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:16:11.695652 1250704 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:16:11.697287 1250704 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-233546 addons enable metrics-server

I0407 12:16:11.698358 1250704 addons.go:201] Writing out "functional-233546" config to set dashboard=true...
W0407 12:16:11.698567 1250704 out.go:270] * Verifying dashboard health ...
* Verifying dashboard health ...
I0407 12:16:11.699185 1250704 kapi.go:59] client config for functional-233546: &rest.Config{Host:"https://192.168.39.145:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt", KeyFile:"/home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.key", CAFile:"/home/jenkins/minikube-integration/20602-1236688/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24968c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0407 12:16:11.699570 1250704 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0407 12:16:11.699592 1250704 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0407 12:16:11.699608 1250704 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0407 12:16:11.699613 1250704 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0407 12:16:11.738429 1250704 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  64b6e7e9-dc75-4890-9e13-a0f119fa2f10 752 0 2025-04-07 12:16:11 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-04-07 12:16:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.103.216.199,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.103.216.199],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0407 12:16:11.738591 1250704 out.go:270] * Launching proxy ...
* Launching proxy ...
I0407 12:16:11.738661 1250704 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-233546 proxy --port 36195]
I0407 12:16:11.738969 1250704 dashboard.go:157] Waiting for kubectl to output host:port ...
I0407 12:16:11.793540 1250704 out.go:201] 
W0407 12:16:11.794993 1250704 out.go:270] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W0407 12:16:11.795017 1250704 out.go:270] * 
* 
W0407 12:16:11.799163 1250704 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_profile_d1ca4947b8443d05a16ba2db66e65ef843e55a01_0.log                 │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_profile_d1ca4947b8443d05a16ba2db66e65ef843e55a01_0.log                 │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0407 12:16:11.800968 1250704 out.go:201] 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-233546 -n functional-233546
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-233546 logs -n 25: (2.096770394s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| cache     | delete                                                                   | minikube          | jenkins | v1.35.0 | 07 Apr 25 12:15 UTC | 07 Apr 25 12:15 UTC |
	|           | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache     | delete                                                                   | minikube          | jenkins | v1.35.0 | 07 Apr 25 12:15 UTC | 07 Apr 25 12:15 UTC |
	|           | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl   | functional-233546 kubectl --                                             | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:15 UTC | 07 Apr 25 12:15 UTC |
	|           | --context functional-233546                                              |                   |         |         |                     |                     |
	|           | get pods                                                                 |                   |         |         |                     |                     |
	| start     | -p functional-233546                                                     | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:15 UTC | 07 Apr 25 12:16 UTC |
	|           | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|           | --wait=all                                                               |                   |         |         |                     |                     |
	| service   | invalid-svc -p                                                           | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC |                     |
	|           | functional-233546                                                        |                   |         |         |                     |                     |
	| cp        | functional-233546 cp                                                     | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
	|           | testdata/cp-test.txt                                                     |                   |         |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| config    | functional-233546 config unset                                           | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
	|           | cpus                                                                     |                   |         |         |                     |                     |
	| config    | functional-233546 config get                                             | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC |                     |
	|           | cpus                                                                     |                   |         |         |                     |                     |
	| config    | functional-233546 config set                                             | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
	|           | cpus 2                                                                   |                   |         |         |                     |                     |
	| config    | functional-233546 config get                                             | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
	|           | cpus                                                                     |                   |         |         |                     |                     |
	| ssh       | functional-233546 ssh -n                                                 | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
	|           | functional-233546 sudo cat                                               |                   |         |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| config    | functional-233546 config unset                                           | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
	|           | cpus                                                                     |                   |         |         |                     |                     |
	| config    | functional-233546 config get                                             | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC |                     |
	|           | cpus                                                                     |                   |         |         |                     |                     |
	| start     | -p functional-233546                                                     | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC |                     |
	|           | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|           | --driver=kvm2                                                            |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| cp        | functional-233546 cp                                                     | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
	|           | functional-233546:/home/docker/cp-test.txt                               |                   |         |         |                     |                     |
	|           | /tmp/TestFunctionalparallelCpCmd4024997346/001/cp-test.txt               |                   |         |         |                     |                     |
	| start     | -p functional-233546                                                     | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC |                     |
	|           | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|           | --driver=kvm2                                                            |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start     | -p functional-233546                                                     | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC |                     |
	|           | --dry-run --alsologtostderr                                              |                   |         |         |                     |                     |
	|           | -v=1 --driver=kvm2                                                       |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| ssh       | functional-233546 ssh -n                                                 | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
	|           | functional-233546 sudo cat                                               |                   |         |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                       | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC |                     |
	|           | -p functional-233546                                                     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| cp        | functional-233546 cp                                                     | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
	|           | testdata/cp-test.txt                                                     |                   |         |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                                          |                   |         |         |                     |                     |
	| ssh       | functional-233546 ssh -n                                                 | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
	|           | functional-233546 sudo cat                                               |                   |         |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                                          |                   |         |         |                     |                     |
	| ssh       | functional-233546 ssh echo                                               | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
	|           | hello                                                                    |                   |         |         |                     |                     |
	| ssh       | functional-233546 ssh cat                                                | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC | 07 Apr 25 12:16 UTC |
	|           | /etc/hostname                                                            |                   |         |         |                     |                     |
	| ssh       | functional-233546 ssh findmnt                                            | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC |                     |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| mount     | -p functional-233546                                                     | functional-233546 | jenkins | v1.35.0 | 07 Apr 25 12:16 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdany-port3835906352/001:/mount-9p      |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:16:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:16:09.533843 1250634 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:16:09.533992 1250634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:16:09.534005 1250634 out.go:358] Setting ErrFile to fd 2...
	I0407 12:16:09.534014 1250634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:16:09.534488 1250634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
	I0407 12:16:09.535299 1250634 out.go:352] Setting JSON to false
	I0407 12:16:09.536680 1250634 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":28716,"bootTime":1743999454,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:16:09.536770 1250634 start.go:139] virtualization: kvm guest
	I0407 12:16:09.540255 1250634 out.go:177] * [functional-233546] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:16:09.541582 1250634 notify.go:220] Checking for updates...
	I0407 12:16:09.541607 1250634 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 12:16:09.542911 1250634 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:16:09.544355 1250634 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1236688/kubeconfig
	I0407 12:16:09.545753 1250634 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1236688/.minikube
	I0407 12:16:09.547015 1250634 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:16:09.548247 1250634 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:16:09.550034 1250634 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 12:16:09.550730 1250634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:16:09.550823 1250634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:16:09.569636 1250634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37635
	I0407 12:16:09.570395 1250634 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:16:09.571017 1250634 main.go:141] libmachine: Using API Version  1
	I0407 12:16:09.571038 1250634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:16:09.571949 1250634 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:16:09.572119 1250634 main.go:141] libmachine: (functional-233546) Calling .DriverName
	I0407 12:16:09.572356 1250634 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:16:09.572665 1250634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:16:09.572699 1250634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:16:09.601162 1250634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
	I0407 12:16:09.601731 1250634 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:16:09.602323 1250634 main.go:141] libmachine: Using API Version  1
	I0407 12:16:09.602344 1250634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:16:09.602663 1250634 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:16:09.602843 1250634 main.go:141] libmachine: (functional-233546) Calling .DriverName
	I0407 12:16:09.640492 1250634 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 12:16:09.641925 1250634 start.go:297] selected driver: kvm2
	I0407 12:16:09.641948 1250634 start.go:901] validating driver "kvm2" against &{Name:functional-233546 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-233546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:16:09.642050 1250634 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:16:09.643103 1250634 cni.go:84] Creating CNI manager for ""
	I0407 12:16:09.643157 1250634 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0407 12:16:09.643208 1250634 start.go:340] cluster config:
	{Name:functional-233546 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-233546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:16:09.645376 1250634 out.go:177] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	95ae599649ebd       82e4c8a736a4f       Less than a second ago   Running             echoserver                0                   540ed9b3608ac       hello-node-fcfd88b6f-2mtcl
	3426251924490       6e38f40d628db       14 seconds ago           Running             storage-provisioner       4                   5f64d91128370       storage-provisioner
	b898d99e3b3ae       f1332858868e1       30 seconds ago           Running             kube-proxy                2                   36e46290dc1a8       kube-proxy-5r4lm
	4df8c896b4853       6e38f40d628db       30 seconds ago           Exited              storage-provisioner       3                   5f64d91128370       storage-provisioner
	eb4b247cd1c35       85b7a174738ba       33 seconds ago           Running             kube-apiserver            0                   bf9cbf0b23e92       kube-apiserver-functional-233546
	2bff6a336fd42       b6a454c5a800d       34 seconds ago           Running             kube-controller-manager   2                   9a5fc45d42c0a       kube-controller-manager-functional-233546
	e5eb6664340a4       d8e673e7c9983       34 seconds ago           Running             kube-scheduler            2                   795bf3b68273a       kube-scheduler-functional-233546
	197df4b827f4c       a9e7e6b294baf       34 seconds ago           Running             etcd                      2                   40376fdcdf4f5       etcd-functional-233546
	facb218d99873       c69fa2e9cbf5f       36 seconds ago           Running             coredns                   2                   357791f81da5d       coredns-668d6bf9bc-j5tfb
	e4345a0980955       a9e7e6b294baf       About a minute ago       Exited              etcd                      1                   40376fdcdf4f5       etcd-functional-233546
	2652a6574b833       b6a454c5a800d       About a minute ago       Exited              kube-controller-manager   1                   9a5fc45d42c0a       kube-controller-manager-functional-233546
	7ba8dc3a6e3d9       d8e673e7c9983       About a minute ago       Exited              kube-scheduler            1                   795bf3b68273a       kube-scheduler-functional-233546
	a512d3610ad3a       c69fa2e9cbf5f       2 minutes ago            Exited              coredns                   1                   357791f81da5d       coredns-668d6bf9bc-j5tfb
	83ac8f474aa83       f1332858868e1       2 minutes ago            Exited              kube-proxy                1                   36e46290dc1a8       kube-proxy-5r4lm
	
	
	==> containerd <==
	Apr 07 12:16:11 functional-233546 containerd[3705]: time="2025-04-07T12:16:11.806072261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-7779f9b69b-4xpfh,Uid:2a78a4ff-016a-48f2-a823-a55d7246439a,Namespace:kubernetes-dashboard,Attempt:0,}"
	Apr 07 12:16:11 functional-233546 containerd[3705]: time="2025-04-07T12:16:11.816875558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-5d59dccf9b-ccc5d,Uid:b30f4bda-f591-4591-a991-b90b12032927,Namespace:kubernetes-dashboard,Attempt:0,}"
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.226187153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.226661693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.228066492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.229248874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.266993440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.280025363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.280044287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.280447688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.404443227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-5d59dccf9b-ccc5d,Uid:b30f4bda-f591-4591-a991-b90b12032927,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"bd96ee207ed4c3b7eb0b9a99ded736e5fd8f80d187f0b3e1b2a0915e213057cb\""
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.466348417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-7779f9b69b-4xpfh,Uid:2a78a4ff-016a-48f2-a823-a55d7246439a,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"ab1f840778bed65fedeccb0c3e30493a5fe29e57a58030459f1f314af1cd9d63\""
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.754751656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/echoserver:1.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.758373392Z" level=info msg="stop pulling image registry.k8s.io/echoserver:1.8: active requests=0, bytes read=46245285"
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.760759527Z" level=info msg="ImageCreate event name:\"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.765477480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.768202714Z" level=info msg="Pulled image \"registry.k8s.io/echoserver:1.8\" with image id \"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410\", repo tag \"registry.k8s.io/echoserver:1.8\", repo digest \"registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969\", size \"46237695\" in 3.276297307s"
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.768426718Z" level=info msg="PullImage \"registry.k8s.io/echoserver:1.8\" returns image reference \"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410\""
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.772605851Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.775012434Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.775796039Z" level=info msg="CreateContainer within sandbox \"540ed9b3608acdb05335fe8e1afa688c2c024b253a4731a94caf925a352b8005\" for container &ContainerMetadata{Name:echoserver,Attempt:0,}"
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.813622961Z" level=info msg="CreateContainer within sandbox \"540ed9b3608acdb05335fe8e1afa688c2c024b253a4731a94caf925a352b8005\" for &ContainerMetadata{Name:echoserver,Attempt:0,} returns container id \"95ae599649ebd0986cedb91de209321dc44a13371b920a819833aa9fd074776c\""
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.819630184Z" level=info msg="StartContainer for \"95ae599649ebd0986cedb91de209321dc44a13371b920a819833aa9fd074776c\""
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.870235718Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Apr 07 12:16:12 functional-233546 containerd[3705]: time="2025-04-07T12:16:12.911092911Z" level=info msg="StartContainer for \"95ae599649ebd0986cedb91de209321dc44a13371b920a819833aa9fd074776c\" returns successfully"
	
	
	==> coredns [a512d3610ad3ad6b3ab2a1771124b35ffc00f38b1eb9739ef140de0a74a4d675] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1773096416]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 12:14:27.625) (total time: 10002ms):
	Trace[1773096416]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:14:37.627)
	Trace[1773096416]: [10.002043099s] [10.002043099s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[886648764]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 12:14:27.865) (total time: 10001ms):
	Trace[886648764]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:14:37.866)
	Trace[886648764]: [10.001464323s] [10.001464323s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[469826125]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 12:14:30.028) (total time: 10000ms):
	Trace[469826125]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (12:14:40.029)
	Trace[469826125]: [10.000964923s] [10.000964923s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
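
Note: the reflector errors above come from CoreDNS's kubernetes plugin, which uses client-go to list and watch Services, Namespaces and EndpointSlices through the service VIP 10.96.0.1:443; while the apiserver restarts, every List fails. A minimal client-go sketch (not CoreDNS code; assumes it runs inside a pod) that surfaces the same error:

	// Sketch: list Namespaces the way a client-go reflector does. While the
	// apiserver is down this returns the same
	// "dial tcp 10.96.0.1:443: connect: connection refused" seen above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // resolves to https://10.96.0.1:443 in-cluster
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Limit: 500 mirrors the ?limit=500 on the failing requests above.
		_, err = cs.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{Limit: 500})
		fmt.Println("list error:", err)
	}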
	
	
	==> coredns [facb218d99873973466b8da9c18ee84ae539517990cbaa2389637a0a0b1984a9] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47978 - 50391 "HINFO IN 1078490451081813515.6603114991866582823. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.040409887s
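
Note: the CoreDNS logs above show two distinct failure strings: "connection refused" (nothing listening on the VIP yet) and "net/http: TLS handshake timeout" (the socket accepts but the apiserver is not serving TLS yet). A small probe sketch, assuming it runs on a host that can reach 10.96.0.1:443, that tells the two layers apart:

	// Sketch: probe the kubernetes service VIP at the TCP and TLS layers to
	// distinguish "connection refused" from a TLS handshake timeout.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "10.96.0.1:443" // kubernetes.default service VIP from the logs
		c, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Println("tcp:", err) // e.g. connect: connection refused
			return
		}
		c.Close()
		d := &net.Dialer{Timeout: 2 * time.Second}
		// InsecureSkipVerify: this probe checks liveness only, not identity.
		conn, err := tls.DialWithDialer(d, "tcp", addr, &tls.Config{InsecureSkipVerify: true})
		if err != nil {
			fmt.Println("tls:", err) // e.g. handshake timeout while the apiserver restarts
			return
		}
		conn.Close()
		fmt.Println("apiserver TLS endpoint is up")
	}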
	
	
	==> describe nodes <==
	Name:               functional-233546
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-233546
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=33e6edc58d2014d70e908473920ef4ac8eae1e43
	                    minikube.k8s.io/name=functional-233546
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T12_13_19_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 12:13:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-233546
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 12:16:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 12:15:41 +0000   Mon, 07 Apr 2025 12:13:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 12:15:41 +0000   Mon, 07 Apr 2025 12:13:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 12:15:41 +0000   Mon, 07 Apr 2025 12:13:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 12:15:41 +0000   Mon, 07 Apr 2025 12:13:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.145
	  Hostname:    functional-233546
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 9630c455a5b54b73bacb3c6e2a6d0899
	  System UUID:                9630c455-a5b5-4b73-bacb-3c6e2a6d0899
	  Boot ID:                    7e602861-bc79-4aa5-8bfb-369379362f94
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.23
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-fcfd88b6f-2mtcl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-668d6bf9bc-j5tfb                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m50s
	  kube-system                 etcd-functional-233546                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m54s
	  kube-system                 kube-apiserver-functional-233546              250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-functional-233546     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kube-proxy-5r4lm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 kube-scheduler-functional-233546              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-ccc5d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-4xpfh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m48s                kube-proxy       
	  Normal  Starting                 30s                  kube-proxy       
	  Normal  Starting                 77s                  kube-proxy       
	  Normal  Starting                 2m55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m54s                kubelet          Node functional-233546 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m54s                kubelet          Node functional-233546 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m54s                kubelet          Node functional-233546 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m54s                kubelet          Node functional-233546 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  2m54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m51s                node-controller  Node functional-233546 event: Registered Node functional-233546 in Controller
	  Normal  NodeHasSufficientPID     111s (x7 over 111s)  kubelet          Node functional-233546 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  111s (x8 over 111s)  kubelet          Node functional-233546 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s (x8 over 111s)  kubelet          Node functional-233546 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           85s                  node-controller  Node functional-233546 event: Registered Node functional-233546 in Controller
	  Normal  Starting                 35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)    kubelet          Node functional-233546 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)    kubelet          Node functional-233546 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x7 over 35s)    kubelet          Node functional-233546 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28s                  node-controller  Node functional-233546 event: Registered Node functional-233546 in Controller
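
Note: the "Allocated resources" table is arithmetic over the pod list above: CPU requests 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 750m of the node's 2 CPUs, i.e. 37%; memory requests 70Mi + 100Mi = 170Mi. A short sketch recomputing the CPU line with apimachinery's resource.Quantity:

	// Sketch: recompute the node's "Allocated resources" CPU line above
	// (750m of 2 CPUs = 37%) from the listed per-pod requests.
	package main

	import (
		"fmt"

		"k8s.io/apimachinery/pkg/api/resource"
	)

	func main() {
		// coredns, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
		requests := []string{"100m", "100m", "250m", "200m", "100m"}
		total := resource.MustParse("0")
		for _, r := range requests {
			q := resource.MustParse(r)
			total.Add(q)
		}
		capacity := resource.MustParse("2") // node Capacity cpu
		pct := total.MilliValue() * 100 / capacity.MilliValue()
		fmt.Printf("cpu %s (%d%%)\n", total.String(), pct) // cpu 750m (37%)
	}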
	
	
	==> dmesg <==
	[  +0.155774] systemd-fstab-generator[2155]: Ignoring "noauto" option for root device
	[  +0.321402] systemd-fstab-generator[2184]: Ignoring "noauto" option for root device
	[  +1.562144] systemd-fstab-generator[2341]: Ignoring "noauto" option for root device
	[  +0.082169] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.752665] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.149438] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.943721] systemd-fstab-generator[2951]: Ignoring "noauto" option for root device
	[ +19.100643] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.969124] kauditd_printk_skb: 11 callbacks suppressed
	[Apr 7 12:15] systemd-fstab-generator[3268]: Ignoring "noauto" option for root device
	[ +10.979846] systemd-fstab-generator[3630]: Ignoring "noauto" option for root device
	[  +0.079581] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.077138] systemd-fstab-generator[3642]: Ignoring "noauto" option for root device
	[  +0.180331] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.148412] systemd-fstab-generator[3668]: Ignoring "noauto" option for root device
	[  +0.316451] systemd-fstab-generator[3697]: Ignoring "noauto" option for root device
	[  +1.522538] systemd-fstab-generator[3855]: Ignoring "noauto" option for root device
	[ +10.804641] kauditd_printk_skb: 124 callbacks suppressed
	[  +5.268945] kauditd_printk_skb: 1 callbacks suppressed
	[  +1.750639] systemd-fstab-generator[4321]: Ignoring "noauto" option for root device
	[  +4.272161] kauditd_printk_skb: 44 callbacks suppressed
	[  +8.908541] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.037133] systemd-fstab-generator[4804]: Ignoring "noauto" option for root device
	[Apr 7 12:16] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.486120] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [197df4b827f4c18b1644b12631adb1571365b5095b41e977e6335528c27fb3f9] <==
	{"level":"info","ts":"2025-04-07T12:15:39.412298Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-07T12:15:39.412355Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-07T12:15:39.412364Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-07T12:15:39.412627Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2025-04-07T12:15:39.412655Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2025-04-07T12:15:39.413552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 switched to configuration voters=(4950477381744769801)"}
	{"level":"info","ts":"2025-04-07T12:15:39.413631Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"33ee9922f2bf4379","local-member-id":"44b3a0f32f80bb09","added-peer-id":"44b3a0f32f80bb09","added-peer-peer-urls":["https://192.168.39.145:2380"]}
	{"level":"info","ts":"2025-04-07T12:15:39.414099Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"33ee9922f2bf4379","local-member-id":"44b3a0f32f80bb09","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:15:39.414195Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:15:40.648479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 is starting a new election at term 3"}
	{"level":"info","ts":"2025-04-07T12:15:40.648607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became pre-candidate at term 3"}
	{"level":"info","ts":"2025-04-07T12:15:40.648696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgPreVoteResp from 44b3a0f32f80bb09 at term 3"}
	{"level":"info","ts":"2025-04-07T12:15:40.648749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became candidate at term 4"}
	{"level":"info","ts":"2025-04-07T12:15:40.648769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgVoteResp from 44b3a0f32f80bb09 at term 4"}
	{"level":"info","ts":"2025-04-07T12:15:40.648825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became leader at term 4"}
	{"level":"info","ts":"2025-04-07T12:15:40.648882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 elected leader 44b3a0f32f80bb09 at term 4"}
	{"level":"info","ts":"2025-04-07T12:15:40.653899Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"44b3a0f32f80bb09","local-member-attributes":"{Name:functional-233546 ClientURLs:[https://192.168.39.145:2379]}","request-path":"/0/members/44b3a0f32f80bb09/attributes","cluster-id":"33ee9922f2bf4379","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-07T12:15:40.653900Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T12:15:40.654209Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T12:15:40.654242Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-07T12:15:40.653976Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T12:15:40.655003Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:15:40.655628Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.145:2379"}
	{"level":"info","ts":"2025-04-07T12:15:40.655005Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:15:40.656401Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [e4345a09809559fece6a2907582b5baf7697f8e6b4df1c5f37cc46f547216c2b] <==
	{"level":"info","ts":"2025-04-07T12:14:43.973379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-07T12:14:43.973546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgPreVoteResp from 44b3a0f32f80bb09 at term 2"}
	{"level":"info","ts":"2025-04-07T12:14:43.973656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became candidate at term 3"}
	{"level":"info","ts":"2025-04-07T12:14:43.973692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 received MsgVoteResp from 44b3a0f32f80bb09 at term 3"}
	{"level":"info","ts":"2025-04-07T12:14:43.973812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"44b3a0f32f80bb09 became leader at term 3"}
	{"level":"info","ts":"2025-04-07T12:14:43.973908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 44b3a0f32f80bb09 elected leader 44b3a0f32f80bb09 at term 3"}
	{"level":"info","ts":"2025-04-07T12:14:43.980098Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"44b3a0f32f80bb09","local-member-attributes":"{Name:functional-233546 ClientURLs:[https://192.168.39.145:2379]}","request-path":"/0/members/44b3a0f32f80bb09/attributes","cluster-id":"33ee9922f2bf4379","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-07T12:14:43.980273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T12:14:43.980674Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T12:14:43.981057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T12:14:43.981164Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-07T12:14:43.981737Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:14:43.981882Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:14:43.982611Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-07T12:14:43.982944Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.145:2379"}
	{"level":"info","ts":"2025-04-07T12:15:31.292110Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-04-07T12:15:31.292279Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-233546","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"]}
	{"level":"warn","ts":"2025-04-07T12:15:31.292372Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-07T12:15:31.292418Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-07T12:15:31.293959Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.145:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-07T12:15:31.293986Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.145:2379: use of closed network connection"}
	{"level":"info","ts":"2025-04-07T12:15:31.294021Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"44b3a0f32f80bb09","current-leader-member-id":"44b3a0f32f80bb09"}
	{"level":"info","ts":"2025-04-07T12:15:31.297307Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2025-04-07T12:15:31.297465Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.145:2380"}
	{"level":"info","ts":"2025-04-07T12:15:31.297477Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-233546","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.145:2380"],"advertise-client-urls":["https://192.168.39.145:2379"]}
	
	
	==> kernel <==
	 12:16:13 up 3 min,  0 users,  load average: 0.85, 0.39, 0.15
	Linux functional-233546 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [eb4b247cd1c35a9cb133f39aa18078cd0fdd0eb8cb6abf9e8b2bb467bdfb14a0] <==
	I0407 12:15:41.861936       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0407 12:15:41.867257       1 aggregator.go:171] initial CRD sync complete...
	I0407 12:15:41.867382       1 autoregister_controller.go:144] Starting autoregister controller
	I0407 12:15:41.867471       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0407 12:15:41.867555       1 cache.go:39] Caches are synced for autoregister controller
	I0407 12:15:41.867898       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0407 12:15:41.881472       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0407 12:15:41.894613       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0407 12:15:41.896970       1 policy_source.go:240] refreshing policies
	I0407 12:15:41.967562       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0407 12:15:42.493627       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0407 12:15:42.771979       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0407 12:15:43.181303       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.145]
	I0407 12:15:43.182859       1 controller.go:615] quota admission added evaluator for: endpoints
	I0407 12:15:43.195612       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 12:15:43.711106       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0407 12:15:43.742644       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0407 12:15:43.766740       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0407 12:15:43.775435       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0407 12:15:51.406804       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0407 12:16:04.471200       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.17.122"}
	I0407 12:16:08.934405       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.125.231"}
	I0407 12:16:11.163659       1 controller.go:615] quota admission added evaluator for: namespaces
	I0407 12:16:11.608536       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.216.199"}
	I0407 12:16:11.666242       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.46.84"}
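
Note: the alloc.go lines are the apiserver's Service IP allocator handing out ClusterIPs (e.g. 10.98.125.231 for default/hello-node). Creating a Service without an explicit clusterIP and reading it back shows the same allocation; the service name below is hypothetical, and a local kubeconfig is assumed:

	// Sketch: let the apiserver allocate a ClusterIP and read it back,
	// matching the "allocated clusterIPs" lines above.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/intstr"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		svc := &corev1.Service{
			ObjectMeta: metav1.ObjectMeta{Name: "demo-svc"}, // hypothetical name
			Spec: corev1.ServiceSpec{
				Selector: map[string]string{"app": "demo"},
				Ports:    []corev1.ServicePort{{Port: 8080, TargetPort: intstr.FromInt(8080)}},
			},
		}
		created, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("allocated ClusterIP:", created.Spec.ClusterIP)
	}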
	
	
	==> kube-controller-manager [2652a6574b833b7c944b3ec7af899a37b1b146af97c909e84a55c61a00761b3d] <==
	I0407 12:14:48.291362       1 shared_informer.go:320] Caches are synced for GC
	I0407 12:14:48.291984       1 shared_informer.go:320] Caches are synced for PV protection
	I0407 12:14:48.295646       1 shared_informer.go:320] Caches are synced for namespace
	I0407 12:14:48.296064       1 shared_informer.go:320] Caches are synced for resource quota
	I0407 12:14:48.297830       1 shared_informer.go:320] Caches are synced for job
	I0407 12:14:48.299915       1 shared_informer.go:320] Caches are synced for PVC protection
	I0407 12:14:48.302076       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0407 12:14:48.303769       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0407 12:14:48.305451       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0407 12:14:48.305865       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0407 12:14:48.311697       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0407 12:14:48.315035       1 shared_informer.go:320] Caches are synced for attach detach
	I0407 12:14:48.315353       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0407 12:14:48.315430       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0407 12:14:48.316901       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0407 12:14:48.317154       1 shared_informer.go:320] Caches are synced for endpoint
	I0407 12:14:48.319214       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0407 12:14:48.389864       1 shared_informer.go:320] Caches are synced for garbage collector
	I0407 12:14:48.389905       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0407 12:14:48.389912       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0407 12:14:48.406428       1 shared_informer.go:320] Caches are synced for garbage collector
	I0407 12:14:48.750504       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="444.415803ms"
	I0407 12:14:48.750799       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="104.246µs"
	I0407 12:15:06.437924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="14.023522ms"
	I0407 12:15:06.439413       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="65.807µs"
	
	
	==> kube-controller-manager [2bff6a336fd42295929b35cad337011d7045cde43f055de621513e49af53c6b8] <==
	I0407 12:16:08.910309       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-fcfd88b6f" duration="41.673µs"
	I0407 12:16:11.372953       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="96.325388ms"
	E0407 12:16:11.372981       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0407 12:16:11.381452       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="78.433824ms"
	E0407 12:16:11.381505       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0407 12:16:11.399195       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="17.654995ms"
	E0407 12:16:11.400522       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0407 12:16:11.402557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="18.822831ms"
	E0407 12:16:11.402808       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0407 12:16:11.418362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="14.484461ms"
	E0407 12:16:11.418561       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0407 12:16:11.418771       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="14.814738ms"
	E0407 12:16:11.418788       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0407 12:16:11.439437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="18.060389ms"
	E0407 12:16:11.439465       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0407 12:16:11.488001       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="67.919753ms"
	I0407 12:16:11.515815       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="72.771074ms"
	I0407 12:16:11.532577       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="44.538381ms"
	I0407 12:16:11.532638       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="38.49µs"
	I0407 12:16:11.540199       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="23.106878ms"
	I0407 12:16:11.540747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="24.271µs"
	I0407 12:16:11.562389       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="26.897µs"
	I0407 12:16:11.589955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="232.701µs"
	I0407 12:16:13.760921       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-fcfd88b6f" duration="16.087519ms"
	I0407 12:16:13.762360       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-fcfd88b6f" duration="26.226µs"
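
Note: the repeated "serviceaccount \"kubernetes-dashboard\" not found" errors are a benign startup race: the ReplicaSet controller retries until the ServiceAccount exists, after which the dashboard pods sync cleanly (the later "Finished syncing" lines with no error). A sketch of waiting for the ServiceAccount before creating dependents; the polling pattern is illustrative and a local kubeconfig is assumed:

	// Sketch: poll until a ServiceAccount exists before creating pods that
	// reference it, avoiding the "not found" retries seen above.
	package main

	import (
		"context"
		"fmt"
		"time"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = wait.PollUntilContextTimeout(context.TODO(), time.Second, time.Minute, true,
			func(ctx context.Context) (bool, error) {
				_, err := cs.CoreV1().ServiceAccounts("kubernetes-dashboard").
					Get(ctx, "kubernetes-dashboard", metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return false, nil // keep polling
				}
				return err == nil, err
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("service account ready; safe to create the dashboard Deployment")
	}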
	
	
	==> kube-proxy [83ac8f474aa83fecaf8b8fe842ce71da4ebef34dece8588eb38ed457875766e2] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0407 12:14:10.388037       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-233546\": dial tcp 192.168.39.145:8441: connect: connection refused"
	E0407 12:14:11.544447       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-233546\": dial tcp 192.168.39.145:8441: connect: connection refused"
	E0407 12:14:13.613937       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-233546\": dial tcp 192.168.39.145:8441: connect: connection refused"
	E0407 12:14:18.275283       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-233546\": dial tcp 192.168.39.145:8441: connect: connection refused"
	E0407 12:14:37.783491       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-233546\": net/http: TLS handshake timeout"
	I0407 12:14:55.870634       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.145"]
	E0407 12:14:55.870978       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 12:14:55.933223       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0407 12:14:55.933279       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0407 12:14:55.933327       1 server_linux.go:170] "Using iptables Proxier"
	I0407 12:14:55.935870       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 12:14:55.936375       1 server.go:497] "Version info" version="v1.32.2"
	I0407 12:14:55.936402       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 12:14:55.938308       1 config.go:199] "Starting service config controller"
	I0407 12:14:55.938345       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 12:14:55.938544       1 config.go:105] "Starting endpoint slice config controller"
	I0407 12:14:55.938679       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 12:14:55.939306       1 config.go:329] "Starting node config controller"
	I0407 12:14:55.939336       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 12:14:56.039186       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0407 12:14:56.039207       1 shared_informer.go:320] Caches are synced for service config
	I0407 12:14:56.039546       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b898d99e3b3aee5f757bec32bb79c332b238d51a04141a850105b9d32fa9c806] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0407 12:15:43.204279       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0407 12:15:43.212458       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.145"]
	E0407 12:15:43.214596       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 12:15:43.250465       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0407 12:15:43.250509       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0407 12:15:43.250562       1 server_linux.go:170] "Using iptables Proxier"
	I0407 12:15:43.253488       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 12:15:43.254355       1 server.go:497] "Version info" version="v1.32.2"
	I0407 12:15:43.254810       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 12:15:43.256475       1 config.go:199] "Starting service config controller"
	I0407 12:15:43.256604       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 12:15:43.256711       1 config.go:105] "Starting endpoint slice config controller"
	I0407 12:15:43.256789       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 12:15:43.257355       1 config.go:329] "Starting node config controller"
	I0407 12:15:43.257484       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 12:15:43.356835       1 shared_informer.go:320] Caches are synced for service config
	I0407 12:15:43.356857       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0407 12:15:43.358412       1 shared_informer.go:320] Caches are synced for node config
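
Note: the truncated nftables errors at the top of both kube-proxy logs are a capability probe: kube-proxy tries to add (then clean up) an nftables table, the minikube kernel answers "Operation not supported", and it falls back to the iptables proxier, as the "Using iptables Proxier" lines confirm. An illustrative equivalent of that probe (not kube-proxy's actual code, which goes through the knftables library):

	// Sketch: feed "add table <family> kube-proxy" to nft via stdin, which
	// is why the errors above cite "/dev/stdin". Failure means the kernel
	// lacks nf_tables support for that family.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, family := range []string{"ip", "ip6"} {
			cmd := exec.Command("nft", "-f", "/dev/stdin")
			cmd.Stdin = strings.NewReader("add table " + family + " kube-proxy\n")
			out, err := cmd.CombinedOutput()
			if err != nil {
				fmt.Printf("no nftables support for family %s: %v\n%s", family, err, out)
				continue
			}
			fmt.Printf("nftables usable for family %s\n", family)
		}
	}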
	
	
	==> kube-scheduler [7ba8dc3a6e3d950457e13fa1de82fb9af24b1d8c4d472ac6ee19468481ed7704] <==
	I0407 12:14:43.229503       1 serving.go:386] Generated self-signed cert in-memory
	W0407 12:14:45.132004       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0407 12:14:45.132236       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0407 12:14:45.132370       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0407 12:14:45.132491       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0407 12:14:45.196357       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0407 12:14:45.196973       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 12:14:45.199386       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0407 12:14:45.200823       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0407 12:14:45.200874       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0407 12:14:45.215272       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 12:14:45.315667       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 12:15:31.430755       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0407 12:15:31.430807       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0407 12:15:31.430904       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e5eb6664340a46da962e89d1f288f990dd273a7ad114b57946ab211c85a13e31] <==
	I0407 12:15:39.745192       1 serving.go:386] Generated self-signed cert in-memory
	I0407 12:15:41.900823       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0407 12:15:41.900860       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 12:15:41.906065       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0407 12:15:41.906101       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0407 12:15:41.906173       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0407 12:15:41.906311       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 12:15:41.906479       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0407 12:15:41.906551       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0407 12:15:41.906721       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0407 12:15:41.906866       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0407 12:15:42.006964       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0407 12:15:42.007242       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 12:15:42.007418       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Apr 07 12:15:44 functional-233546 kubelet[4328]: I0407 12:15:44.519642    4328 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0583eb23f45b569f2d8f32705a3ca5a" path="/var/lib/kubelet/pods/a0583eb23f45b569f2d8f32705a3ca5a/volumes"
	Apr 07 12:15:58 functional-233546 kubelet[4328]: I0407 12:15:58.517559    4328 scope.go:117] "RemoveContainer" containerID="4df8c896b48531f0b62efc15f398a8514a51465d56e4e7f1fa868a1175e38bd3"
	Apr 07 12:16:04 functional-233546 kubelet[4328]: I0407 12:16:04.451226    4328 memory_manager.go:355] "RemoveStaleState removing state" podUID="a0583eb23f45b569f2d8f32705a3ca5a" containerName="kube-apiserver"
	Apr 07 12:16:04 functional-233546 kubelet[4328]: W0407 12:16:04.454103    4328 reflector.go:569] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-233546" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'functional-233546' and this object
	Apr 07 12:16:04 functional-233546 kubelet[4328]: E0407 12:16:04.454213    4328 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:functional-233546\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'functional-233546' and this object" logger="UnhandledError"
	Apr 07 12:16:04 functional-233546 kubelet[4328]: I0407 12:16:04.454263    4328 status_manager.go:890] "Failed to get status for pod" podUID="29c269d3-92fc-4c73-92af-9513fa556724" pod="default/invalid-svc" err="pods \"invalid-svc\" is forbidden: User \"system:node:functional-233546\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'functional-233546' and this object"
	Apr 07 12:16:04 functional-233546 kubelet[4328]: I0407 12:16:04.539540    4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4hpg\" (UniqueName: \"kubernetes.io/projected/29c269d3-92fc-4c73-92af-9513fa556724-kube-api-access-k4hpg\") pod \"invalid-svc\" (UID: \"29c269d3-92fc-4c73-92af-9513fa556724\") " pod="default/invalid-svc"
	Apr 07 12:16:05 functional-233546 kubelet[4328]: I0407 12:16:05.397815    4328 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Apr 07 12:16:05 functional-233546 kubelet[4328]: E0407 12:16:05.961229    4328 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nonexistingimage:latest\": failed to resolve reference \"docker.io/library/nonexistingimage:latest\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" image="nonexistingimage:latest"
	Apr 07 12:16:05 functional-233546 kubelet[4328]: E0407 12:16:05.961301    4328 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nonexistingimage:latest\": failed to resolve reference \"docker.io/library/nonexistingimage:latest\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" image="nonexistingimage:latest"
	Apr 07 12:16:05 functional-233546 kubelet[4328]: E0407 12:16:05.961445    4328 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:nginx,Image:nonexistingimage:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4hpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod invalid-svc_default(29c269d3-92fc-4c73-92af-9513fa556724): ErrImagePull: failed to pull and unpack image \"docker.io/library/nonexistingimage:latest\": failed to resolve reference \"docker.io/library/nonexistingimage:latest\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" logger="UnhandledError"
	Apr 07 12:16:05 functional-233546 kubelet[4328]: E0407 12:16:05.963556    4328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nonexistingimage:latest\\\": failed to resolve reference \\\"docker.io/library/nonexistingimage:latest\\\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed\"" pod="default/invalid-svc" podUID="29c269d3-92fc-4c73-92af-9513fa556724"
	Apr 07 12:16:06 functional-233546 kubelet[4328]: E0407 12:16:06.695947    4328 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nonexistingimage:latest\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nonexistingimage:latest\\\": failed to resolve reference \\\"docker.io/library/nonexistingimage:latest\\\": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed\"" pod="default/invalid-svc" podUID="29c269d3-92fc-4c73-92af-9513fa556724"
	Apr 07 12:16:08 functional-233546 kubelet[4328]: I0407 12:16:08.066708    4328 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4hpg\" (UniqueName: \"kubernetes.io/projected/29c269d3-92fc-4c73-92af-9513fa556724-kube-api-access-k4hpg\") pod \"29c269d3-92fc-4c73-92af-9513fa556724\" (UID: \"29c269d3-92fc-4c73-92af-9513fa556724\") "
	Apr 07 12:16:08 functional-233546 kubelet[4328]: I0407 12:16:08.069336    4328 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c269d3-92fc-4c73-92af-9513fa556724-kube-api-access-k4hpg" (OuterVolumeSpecName: "kube-api-access-k4hpg") pod "29c269d3-92fc-4c73-92af-9513fa556724" (UID: "29c269d3-92fc-4c73-92af-9513fa556724"). InnerVolumeSpecName "kube-api-access-k4hpg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Apr 07 12:16:08 functional-233546 kubelet[4328]: I0407 12:16:08.167284    4328 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k4hpg\" (UniqueName: \"kubernetes.io/projected/29c269d3-92fc-4c73-92af-9513fa556724-kube-api-access-k4hpg\") on node \"functional-233546\" DevicePath \"\""
	Apr 07 12:16:08 functional-233546 kubelet[4328]: I0407 12:16:08.871904    4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv864\" (UniqueName: \"kubernetes.io/projected/b7195cd8-f289-4379-844f-9bb4e80bf697-kube-api-access-mv864\") pod \"hello-node-fcfd88b6f-2mtcl\" (UID: \"b7195cd8-f289-4379-844f-9bb4e80bf697\") " pod="default/hello-node-fcfd88b6f-2mtcl"
	Apr 07 12:16:10 functional-233546 kubelet[4328]: I0407 12:16:10.519192    4328 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29c269d3-92fc-4c73-92af-9513fa556724" path="/var/lib/kubelet/pods/29c269d3-92fc-4c73-92af-9513fa556724/volumes"
	Apr 07 12:16:11 functional-233546 kubelet[4328]: I0407 12:16:11.594022    4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2a78a4ff-016a-48f2-a823-a55d7246439a-tmp-volume\") pod \"kubernetes-dashboard-7779f9b69b-4xpfh\" (UID: \"2a78a4ff-016a-48f2-a823-a55d7246439a\") " pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-4xpfh"
	Apr 07 12:16:11 functional-233546 kubelet[4328]: I0407 12:16:11.594060    4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b30f4bda-f591-4591-a991-b90b12032927-tmp-volume\") pod \"dashboard-metrics-scraper-5d59dccf9b-ccc5d\" (UID: \"b30f4bda-f591-4591-a991-b90b12032927\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-ccc5d"
	Apr 07 12:16:11 functional-233546 kubelet[4328]: I0407 12:16:11.594081    4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82llr\" (UniqueName: \"kubernetes.io/projected/2a78a4ff-016a-48f2-a823-a55d7246439a-kube-api-access-82llr\") pod \"kubernetes-dashboard-7779f9b69b-4xpfh\" (UID: \"2a78a4ff-016a-48f2-a823-a55d7246439a\") " pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-4xpfh"
	Apr 07 12:16:11 functional-233546 kubelet[4328]: I0407 12:16:11.594098    4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28r8c\" (UniqueName: \"kubernetes.io/projected/b30f4bda-f591-4591-a991-b90b12032927-kube-api-access-28r8c\") pod \"dashboard-metrics-scraper-5d59dccf9b-ccc5d\" (UID: \"b30f4bda-f591-4591-a991-b90b12032927\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-ccc5d"
	Apr 07 12:16:13 functional-233546 kubelet[4328]: I0407 12:16:13.611043    4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/9aa5358a-1712-4b82-b3a2-6dc42c0336d6-test-volume\") pod \"busybox-mount\" (UID: \"9aa5358a-1712-4b82-b3a2-6dc42c0336d6\") " pod="default/busybox-mount"
	Apr 07 12:16:13 functional-233546 kubelet[4328]: I0407 12:16:13.611101    4328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwvpq\" (UniqueName: \"kubernetes.io/projected/9aa5358a-1712-4b82-b3a2-6dc42c0336d6-kube-api-access-rwvpq\") pod \"busybox-mount\" (UID: \"9aa5358a-1712-4b82-b3a2-6dc42c0336d6\") " pod="default/busybox-mount"
	Apr 07 12:16:13 functional-233546 kubelet[4328]: I0407 12:16:13.746406    4328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-fcfd88b6f-2mtcl" podStartSLOduration=2.463927751 podStartE2EDuration="5.746387883s" podCreationTimestamp="2025-04-07 12:16:08 +0000 UTC" firstStartedPulling="2025-04-07 12:16:09.488296522 +0000 UTC m=+31.124098196" lastFinishedPulling="2025-04-07 12:16:12.770756655 +0000 UTC m=+34.406558328" observedRunningTime="2025-04-07 12:16:13.746023269 +0000 UTC m=+35.381824946" watchObservedRunningTime="2025-04-07 12:16:13.746387883 +0000 UTC m=+35.382189561"
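
Note: the invalid-svc pod cycles ErrImagePull -> ImagePullBackOff above because docker.io/library/nonexistingimage:latest cannot be resolved (that pod is intentional test traffic). The waiting reason is readable straight from the pod's container statuses; a sketch, assuming a local kubeconfig:

	// Sketch: print the waiting reason (ErrImagePull / ImagePullBackOff)
	// that the kubelet reports for a pod like default/invalid-svc above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "invalid-svc", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, st := range pod.Status.ContainerStatuses {
			if w := st.State.Waiting; w != nil {
				fmt.Printf("%s: %s (%s)\n", st.Name, w.Reason, w.Message)
			}
		}
	}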
	
	
	==> storage-provisioner [3426251924490aaabb73e7a36a34b1110436bf9525d16b588c4ed29b12c0a4eb] <==
	I0407 12:15:58.722622       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 12:15:58.730053       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 12:15:58.730290       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [4df8c896b48531f0b62efc15f398a8514a51465d56e4e7f1fa868a1175e38bd3] <==
	I0407 12:15:42.941091       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0407 12:15:42.946655       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
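
Note: the fatal F0407 12:15:42 line in the second storage-provisioner block above shows the first provisioner container exiting because the apiserver was not yet reachable at the cluster service VIP (10.96.0.1:443); the replacement container (34262519...) started cleanly at 12:15:58 and began acquiring the leader lease. The /version probe the provisioner makes on startup can be replayed by hand, assuming the functional-233546 context is still available (illustrative only, not part of the harness):

	kubectl --context functional-233546 get --raw /version
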
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-233546 -n functional-233546
helpers_test.go:261: (dbg) Run:  kubectl --context functional-233546 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-5d59dccf9b-ccc5d kubernetes-dashboard-7779f9b69b-4xpfh
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-233546 describe pod busybox-mount dashboard-metrics-scraper-5d59dccf9b-ccc5d kubernetes-dashboard-7779f9b69b-4xpfh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-233546 describe pod busybox-mount dashboard-metrics-scraper-5d59dccf9b-ccc5d kubernetes-dashboard-7779f9b69b-4xpfh: exit status 1 (169.376223ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-233546/192.168.39.145
	Start Time:       Mon, 07 Apr 2025 12:16:13 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rwvpq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-rwvpq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/busybox-mount to functional-233546
	  Normal  Pulling    0s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-ccc5d" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-4xpfh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-233546 describe pod busybox-mount dashboard-metrics-scraper-5d59dccf9b-ccc5d kubernetes-dashboard-7779f9b69b-4xpfh: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (5.24s)
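
Note: the exit status 1 from the describe at helpers_test.go:277 is itself expected noise: the two dashboard pods listed at helpers_test.go:272 were presumably torn down between the listing and the describe (stopping the failed `minikube dashboard` process removes them), leaving only busybox-mount to describe. The same non-running-pod query can be rerun by hand:

	kubectl --context functional-233546 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'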

                                                
                                    

Test pass (289/329)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 6.89
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.2/json-events 5.16
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.06
18 TestDownloadOnly/v1.32.2/DeleteAll 0.14
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.63
22 TestOffline 86.42
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 214.15
29 TestAddons/serial/Volcano 38.94
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.5
35 TestAddons/parallel/Registry 16.48
36 TestAddons/parallel/Ingress 18.13
37 TestAddons/parallel/InspektorGadget 12.08
38 TestAddons/parallel/MetricsServer 6.76
40 TestAddons/parallel/CSI 59.73
41 TestAddons/parallel/Headlamp 12.34
42 TestAddons/parallel/CloudSpanner 6.82
43 TestAddons/parallel/LocalPath 9.26
44 TestAddons/parallel/NvidiaDevicePlugin 6.82
45 TestAddons/parallel/Yakd 11.99
47 TestAddons/StoppedEnableDisable 91.16
48 TestCertOptions 70.64
49 TestCertExpiration 272.97
51 TestForceSystemdFlag 47.82
52 TestForceSystemdEnv 64.79
54 TestKVMDriverInstallOrUpdate 1.3
58 TestErrorSpam/setup 44.93
59 TestErrorSpam/start 0.37
60 TestErrorSpam/status 0.75
61 TestErrorSpam/pause 1.56
62 TestErrorSpam/unpause 1.73
63 TestErrorSpam/stop 4.57
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 83.01
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 70.29
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.76
75 TestFunctional/serial/CacheCmd/cache/add_local 0.95
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.53
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 44.75
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.4
86 TestFunctional/serial/LogsFileCmd 1.37
87 TestFunctional/serial/InvalidService 4.45
89 TestFunctional/parallel/ConfigCmd 0.4
91 TestFunctional/parallel/DryRun 0.33
92 TestFunctional/parallel/InternationalLanguage 0.17
93 TestFunctional/parallel/StatusCmd 1.02
97 TestFunctional/parallel/ServiceCmdConnect 8.6
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 33.41
101 TestFunctional/parallel/SSHCmd 0.49
102 TestFunctional/parallel/CpCmd 1.56
103 TestFunctional/parallel/MySQL 24.77
104 TestFunctional/parallel/FileSync 0.26
105 TestFunctional/parallel/CertSync 1.46
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
113 TestFunctional/parallel/License 0.16
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.22
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
125 TestFunctional/parallel/ProfileCmd/profile_list 0.42
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
127 TestFunctional/parallel/MountCmd/any-port 11.64
128 TestFunctional/parallel/Version/short 0.05
129 TestFunctional/parallel/Version/components 0.57
130 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
131 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
132 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
133 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
134 TestFunctional/parallel/ImageCommands/ImageBuild 3.93
135 TestFunctional/parallel/ImageCommands/Setup 0.45
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.99
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.38
138 TestFunctional/parallel/ServiceCmd/List 0.44
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.66
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.84
142 TestFunctional/parallel/ServiceCmd/Format 0.37
143 TestFunctional/parallel/ServiceCmd/URL 0.33
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.47
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1
150 TestFunctional/parallel/MountCmd/specific-port 1.67
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.51
152 TestFunctional/parallel/MountCmd/VerifyCleanup 0.83
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 181.6
161 TestMultiControlPlane/serial/DeployApp 5.11
162 TestMultiControlPlane/serial/PingHostFromPods 1.19
163 TestMultiControlPlane/serial/AddWorkerNode 54.65
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.91
166 TestMultiControlPlane/serial/CopyFile 13.16
167 TestMultiControlPlane/serial/StopSecondaryNode 91.31
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
169 TestMultiControlPlane/serial/RestartSecondaryNode 43.33
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 455.48
172 TestMultiControlPlane/serial/DeleteSecondaryNode 6.93
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
174 TestMultiControlPlane/serial/StopCluster 182.8
175 TestMultiControlPlane/serial/RestartCluster 163.02
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
177 TestMultiControlPlane/serial/AddSecondaryNode 70.53
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
182 TestJSONOutput/start/Command 60.73
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.72
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.67
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 6.48
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.2
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 98.46
214 TestMountStart/serial/StartWithMountFirst 28.04
215 TestMountStart/serial/VerifyMountFirst 0.4
216 TestMountStart/serial/StartWithMountSecond 30.12
217 TestMountStart/serial/VerifyMountSecond 0.4
218 TestMountStart/serial/DeleteFirst 0.61
219 TestMountStart/serial/VerifyMountPostDelete 0.39
220 TestMountStart/serial/Stop 1.28
221 TestMountStart/serial/RestartStopped 23.36
222 TestMountStart/serial/VerifyMountPostStop 0.39
225 TestMultiNode/serial/FreshStart2Nodes 109
226 TestMultiNode/serial/DeployApp2Nodes 4.32
227 TestMultiNode/serial/PingHostFrom2Pods 0.81
228 TestMultiNode/serial/AddNode 53.53
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.61
231 TestMultiNode/serial/CopyFile 7.46
232 TestMultiNode/serial/StopNode 2.18
233 TestMultiNode/serial/StartAfterStop 34.18
234 TestMultiNode/serial/RestartKeepsNodes 309.88
235 TestMultiNode/serial/DeleteNode 2.12
236 TestMultiNode/serial/StopMultiNode 181.82
237 TestMultiNode/serial/RestartMultiNode 107.76
238 TestMultiNode/serial/ValidateNameConflict 44.35
243 TestPreload 252.36
245 TestScheduledStopUnix 115.45
249 TestRunningBinaryUpgrade 194.24
251 TestKubernetesUpgrade 190.11
261 TestStartStop/group/old-k8s-version/serial/FirstStart 186.31
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
264 TestNoKubernetes/serial/StartWithK8s 97.84
265 TestNoKubernetes/serial/StartWithStopK8s 52.14
266 TestNoKubernetes/serial/Start 27.29
274 TestNetworkPlugins/group/false 3.5
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
279 TestNoKubernetes/serial/ProfileList 1.61
280 TestNoKubernetes/serial/Stop 1.31
281 TestNoKubernetes/serial/StartNoArgs 39.33
282 TestStartStop/group/old-k8s-version/serial/DeployApp 9.57
283 TestStoppedBinaryUpgrade/Setup 0.4
284 TestStoppedBinaryUpgrade/Upgrade 135.67
285 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.02
286 TestStartStop/group/old-k8s-version/serial/Stop 90.91
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
289 TestPause/serial/Start 110.29
290 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
291 TestStartStop/group/old-k8s-version/serial/SecondStart 186.08
292 TestStoppedBinaryUpgrade/MinikubeLogs 2.5
293 TestPause/serial/SecondStartNoReconfiguration 48.99
295 TestStartStop/group/embed-certs/serial/FirstStart 85.64
297 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 94.43
298 TestPause/serial/Pause 0.77
299 TestPause/serial/VerifyStatus 0.27
300 TestPause/serial/Unpause 0.75
301 TestPause/serial/PauseAgain 0.92
302 TestPause/serial/DeletePaused 0.74
303 TestPause/serial/VerifyDeletedResources 1.54
305 TestStartStop/group/no-preload/serial/FirstStart 70.09
306 TestStartStop/group/embed-certs/serial/DeployApp 10.35
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
308 TestStartStop/group/embed-certs/serial/Stop 91.46
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
310 TestStartStop/group/no-preload/serial/DeployApp 9.29
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.31
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
314 TestStartStop/group/no-preload/serial/Stop 90.97
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
316 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
317 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
318 TestStartStop/group/old-k8s-version/serial/Pause 2.48
320 TestStartStop/group/newest-cni/serial/FirstStart 51.29
321 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.66
322 TestStartStop/group/embed-certs/serial/SecondStart 314.42
323 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.17
325 TestStartStop/group/newest-cni/serial/Stop 2.32
326 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
327 TestStartStop/group/newest-cni/serial/SecondStart 38.7
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 321.46
330 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
331 TestStartStop/group/no-preload/serial/SecondStart 321.54
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
335 TestStartStop/group/newest-cni/serial/Pause 2.71
336 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.01
337 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
338 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
339 TestStartStop/group/embed-certs/serial/Pause 3.02
340 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.01
341 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.5
343 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
344 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.48
345 TestStartStop/group/no-preload/serial/Pause 3.4
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.73
348 TestNetworkPlugins/group/auto/Start 72.39
349 TestNetworkPlugins/group/kindnet/Start 104.52
350 TestNetworkPlugins/group/calico/Start 138.81
351 TestNetworkPlugins/group/custom-flannel/Start 124.56
352 TestNetworkPlugins/group/auto/KubeletFlags 0.37
353 TestNetworkPlugins/group/auto/NetCatPod 11.28
354 TestNetworkPlugins/group/auto/DNS 0.15
355 TestNetworkPlugins/group/auto/Localhost 0.11
356 TestNetworkPlugins/group/auto/HairPin 0.12
357 TestNetworkPlugins/group/enable-default-cni/Start 70.22
358 TestNetworkPlugins/group/kindnet/ControllerPod 6
359 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
360 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
361 TestNetworkPlugins/group/kindnet/DNS 0.17
362 TestNetworkPlugins/group/kindnet/Localhost 0.15
363 TestNetworkPlugins/group/kindnet/HairPin 0.15
364 TestNetworkPlugins/group/flannel/Start 75.32
365 TestNetworkPlugins/group/calico/ControllerPod 6.01
366 TestNetworkPlugins/group/calico/KubeletFlags 0.23
367 TestNetworkPlugins/group/calico/NetCatPod 10.35
368 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
369 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.28
370 TestNetworkPlugins/group/calico/DNS 0.24
371 TestNetworkPlugins/group/calico/Localhost 0.15
372 TestNetworkPlugins/group/calico/HairPin 0.15
373 TestNetworkPlugins/group/custom-flannel/DNS 0.18
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.31
378 TestNetworkPlugins/group/bridge/Start 87.96
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
384 TestNetworkPlugins/group/flannel/NetCatPod 9.23
385 TestNetworkPlugins/group/flannel/DNS 0.13
386 TestNetworkPlugins/group/flannel/Localhost 0.17
387 TestNetworkPlugins/group/flannel/HairPin 0.12
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
389 TestNetworkPlugins/group/bridge/NetCatPod 9.2
390 TestNetworkPlugins/group/bridge/DNS 0.14
391 TestNetworkPlugins/group/bridge/Localhost 0.12
392 TestNetworkPlugins/group/bridge/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (6.89s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-428607 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-428607 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (6.889634727s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.89s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0407 12:04:25.796644 1243895 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0407 12:04:25.796740 1243895 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-1236688/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
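
Note: preload-exists only asserts that the tarball cached by the preceding json-events run is on disk; nothing is downloaded. The cache can be inspected directly (path taken from the log above):

	ls -lh /home/jenkins/minikube-integration/20602-1236688/.minikube/cache/preloaded-tarball/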

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-428607
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-428607: exit status 85 (62.066024ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-428607 | jenkins | v1.35.0 | 07 Apr 25 12:04 UTC |          |
	|         | -p download-only-428607        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:04:18
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:04:18.949670 1243907 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:04:18.949936 1243907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:04:18.949945 1243907 out.go:358] Setting ErrFile to fd 2...
	I0407 12:04:18.949950 1243907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:04:18.950193 1243907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
	W0407 12:04:18.950343 1243907 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20602-1236688/.minikube/config/config.json: open /home/jenkins/minikube-integration/20602-1236688/.minikube/config/config.json: no such file or directory
	I0407 12:04:18.950936 1243907 out.go:352] Setting JSON to true
	I0407 12:04:18.951917 1243907 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":28005,"bootTime":1743999454,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:04:18.952033 1243907 start.go:139] virtualization: kvm guest
	I0407 12:04:18.954856 1243907 out.go:97] [download-only-428607] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0407 12:04:18.955000 1243907 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20602-1236688/.minikube/cache/preloaded-tarball: no such file or directory
	I0407 12:04:18.955056 1243907 notify.go:220] Checking for updates...
	I0407 12:04:18.956886 1243907 out.go:169] MINIKUBE_LOCATION=20602
	I0407 12:04:18.958490 1243907 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:04:18.959884 1243907 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20602-1236688/kubeconfig
	I0407 12:04:18.961286 1243907 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1236688/.minikube
	I0407 12:04:18.962679 1243907 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0407 12:04:18.965508 1243907 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:04:18.965828 1243907 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:04:19.005437 1243907 out.go:97] Using the kvm2 driver based on user configuration
	I0407 12:04:19.005485 1243907 start.go:297] selected driver: kvm2
	I0407 12:04:19.005492 1243907 start.go:901] validating driver "kvm2" against <nil>
	I0407 12:04:19.005840 1243907 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:04:19.005943 1243907 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20602-1236688/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 12:04:19.023006 1243907 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 12:04:19.023070 1243907 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:04:19.023634 1243907 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0407 12:04:19.023778 1243907 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:04:19.023815 1243907 cni.go:84] Creating CNI manager for ""
	I0407 12:04:19.023865 1243907 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0407 12:04:19.023874 1243907 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 12:04:19.023928 1243907 start.go:340] cluster config:
	{Name:download-only-428607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-428607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:04:19.024099 1243907 iso.go:125] acquiring lock: {Name:mke34e95ff2d5c7d5f541233d231d308303bffa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:04:19.026146 1243907 out.go:97] Downloading VM boot image ...
	I0407 12:04:19.026183 1243907 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20602-1236688/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0407 12:04:21.701117 1243907 out.go:97] Starting "download-only-428607" primary control-plane node in "download-only-428607" cluster
	I0407 12:04:21.701153 1243907 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0407 12:04:21.721503 1243907 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0407 12:04:21.721537 1243907 cache.go:56] Caching tarball of preloaded images
	I0407 12:04:21.721721 1243907 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0407 12:04:21.723552 1243907 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0407 12:04:21.723579 1243907 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0407 12:04:21.742663 1243907 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20602-1236688/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-428607 host does not exist
	  To start a cluster, run: "minikube start -p download-only-428607"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
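
Note: exit status 85 is the expected result here, not a failure: with --download-only no VM is ever created, so `minikube logs` has no host to read from (see "The control-plane node download-only-428607 host does not exist" above). By hand, while the profile still exists:

	out/minikube-linux-amd64 logs -p download-only-428607
	echo $?   # 85 in this run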

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-428607
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.2/json-events (5.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-103033 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-103033 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (5.161143273s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (5.16s)

                                                
                                    
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0407 12:04:31.286622 1243895 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0407 12:04:31.286669 1243895 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20602-1236688/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-103033
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-103033: exit status 85 (63.368497ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-428607 | jenkins | v1.35.0 | 07 Apr 25 12:04 UTC |                     |
	|         | -p download-only-428607        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 07 Apr 25 12:04 UTC | 07 Apr 25 12:04 UTC |
	| delete  | -p download-only-428607        | download-only-428607 | jenkins | v1.35.0 | 07 Apr 25 12:04 UTC | 07 Apr 25 12:04 UTC |
	| start   | -o=json --download-only        | download-only-103033 | jenkins | v1.35.0 | 07 Apr 25 12:04 UTC |                     |
	|         | -p download-only-103033        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:04:26
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:04:26.166950 1244091 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:04:26.167218 1244091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:04:26.167229 1244091 out.go:358] Setting ErrFile to fd 2...
	I0407 12:04:26.167235 1244091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:04:26.167420 1244091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
	I0407 12:04:26.168009 1244091 out.go:352] Setting JSON to true
	I0407 12:04:26.168959 1244091 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":28012,"bootTime":1743999454,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:04:26.169072 1244091 start.go:139] virtualization: kvm guest
	I0407 12:04:26.171037 1244091 out.go:97] [download-only-103033] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:04:26.171211 1244091 notify.go:220] Checking for updates...
	I0407 12:04:26.172503 1244091 out.go:169] MINIKUBE_LOCATION=20602
	I0407 12:04:26.173960 1244091 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:04:26.175235 1244091 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20602-1236688/kubeconfig
	I0407 12:04:26.176396 1244091 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1236688/.minikube
	I0407 12:04:26.177660 1244091 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0407 12:04:26.179948 1244091 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:04:26.180156 1244091 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:04:26.212267 1244091 out.go:97] Using the kvm2 driver based on user configuration
	I0407 12:04:26.212305 1244091 start.go:297] selected driver: kvm2
	I0407 12:04:26.212314 1244091 start.go:901] validating driver "kvm2" against <nil>
	I0407 12:04:26.212634 1244091 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:04:26.212745 1244091 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20602-1236688/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0407 12:04:26.228474 1244091 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0407 12:04:26.228531 1244091 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:04:26.229088 1244091 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0407 12:04:26.229236 1244091 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:04:26.229274 1244091 cni.go:84] Creating CNI manager for ""
	I0407 12:04:26.229331 1244091 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0407 12:04:26.229342 1244091 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 12:04:26.229399 1244091 start.go:340] cluster config:
	{Name:download-only-103033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-103033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:04:26.229508 1244091 iso.go:125] acquiring lock: {Name:mke34e95ff2d5c7d5f541233d231d308303bffa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:04:26.231504 1244091 out.go:97] Starting "download-only-103033" primary control-plane node in "download-only-103033" cluster
	I0407 12:04:26.231524 1244091 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0407 12:04:26.248434 1244091 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0407 12:04:26.248455 1244091 cache.go:56] Caching tarball of preloaded images
	I0407 12:04:26.248588 1244091 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0407 12:04:26.250317 1244091 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0407 12:04:26.250342 1244091 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 ...
	I0407 12:04:26.279697 1244091 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:17ec4d97c92604221650726c3857ee2a -> /home/jenkins/minikube-integration/20602-1236688/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0407 12:04:29.889183 1244091 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 ...
	I0407 12:04:29.889298 1244091 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20602-1236688/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 ...
	I0407 12:04:30.671362 1244091 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0407 12:04:30.671710 1244091 profile.go:143] Saving config to /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/download-only-103033/config.json ...
	I0407 12:04:30.671759 1244091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/download-only-103033/config.json: {Name:mkb1a13731b79d92c7849b7775159245029963c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:04:30.671933 1244091 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0407 12:04:30.672082 1244091 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20602-1236688/.minikube/cache/linux/amd64/v1.32.2/kubectl
	
	
	* The control-plane node download-only-103033 host does not exist
	  To start a cluster, run: "minikube start -p download-only-103033"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-103033
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I0407 12:04:31.881019 1243895 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-421440 --alsologtostderr --binary-mirror http://127.0.0.1:44001 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-421440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-421440
--- PASS: TestBinaryMirror (0.63s)
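
Note: TestBinaryMirror serves the Kubernetes binaries from a local HTTP server and passes its address via --binary-mirror, so the download is redirected away from dl.k8s.io. The captured invocation, reusable by hand while such a mirror is listening on 127.0.0.1:44001:

	out/minikube-linux-amd64 start --download-only -p binary-mirror-421440 \
	  --alsologtostderr --binary-mirror http://127.0.0.1:44001 \
	  --driver=kvm2 --container-runtime=containerd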

                                                
                                    
TestOffline (86.42s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-067516 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-067516 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m25.458152666s)
helpers_test.go:175: Cleaning up "offline-containerd-067516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-067516
--- PASS: TestOffline (86.42s)
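
Note: TestOffline exercises a start that must succeed from the caches populated by the earlier DownloadOnly runs (ISO, preload tarball, images) rather than fresh downloads. The captured invocation:

	out/minikube-linux-amd64 start -p offline-containerd-067516 --alsologtostderr -v=1 \
	  --memory=2048 --wait=true --driver=kvm2 --container-runtime=containerd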

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-160798
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-160798: exit status 85 (54.696204ms)

                                                
                                                
-- stdout --
	* Profile "addons-160798" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-160798"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-160798
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-160798: exit status 85 (55.199046ms)

                                                
                                                
-- stdout --
	* Profile "addons-160798" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-160798"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
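
Note: both PreSetup checks assert exit status 85, the "profile not found" path: enabling or disabling an addon against a cluster that was never created must fail fast instead of creating anything. By hand:

	out/minikube-linux-amd64 addons enable dashboard -p addons-160798    # exit 85
	out/minikube-linux-amd64 addons disable dashboard -p addons-160798   # exit 85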

                                                
                                    
TestAddons/Setup (214.15s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-160798 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-160798 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m34.146805141s)
--- PASS: TestAddons/Setup (214.15s)
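
Note: this single start enables fourteen addons at once; every following TestAddons subtest reuses the addons-160798 profile. The enabled set can be confirmed afterwards (illustrative, not part of the harness):

	out/minikube-linux-amd64 -p addons-160798 addons list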

                                                
                                    
TestAddons/serial/Volcano (38.94s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 28.264662ms
addons_test.go:807: volcano-scheduler stabilized in 28.808557ms
addons_test.go:815: volcano-admission stabilized in 28.888078ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-6tf7h" [7f191ba6-45cc-4a11-b045-02905b943ae2] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004641597s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-sz5q9" [83802928-ec9a-43b2-8971-f209a4c62e0e] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003725673s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-p6czt" [1ea013ad-f9a1-4f7f-b60a-88878c9bb806] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004383797s
addons_test.go:842: (dbg) Run:  kubectl --context addons-160798 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-160798 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-160798 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [7bfb389f-3340-4edc-902d-757bdcda7f9e] Pending
helpers_test.go:344: "test-job-nginx-0" [7bfb389f-3340-4edc-902d-757bdcda7f9e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [7bfb389f-3340-4edc-902d-757bdcda7f9e] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.005990833s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-160798 addons disable volcano --alsologtostderr -v=1: (11.483255276s)
--- PASS: TestAddons/serial/Volcano (38.94s)
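
Note: the harness polls pods by label until they report Running. A roughly equivalent manual wait for the Volcano job pod (illustrative, using the label from the log above):

	kubectl --context addons-160798 -n my-volcano wait pod \
	  -l volcano.sh/job-name=test-job --for=condition=Ready --timeout=180s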

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-160798 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-160798 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)
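
Note: this subtest verifies that the gcp-auth addon propagates its credentials secret into namespaces created after the addon was enabled. The two kubectl calls above can be replayed directly:

	kubectl --context addons-160798 create ns new-namespace
	kubectl --context addons-160798 get secret gcp-auth -n new-namespace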

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.5s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-160798 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-160798 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ce38df10-c5df-4df9-b2d0-0c95338a1fe9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ce38df10-c5df-4df9-b2d0-0c95338a1fe9] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003863953s
addons_test.go:633: (dbg) Run:  kubectl --context addons-160798 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-160798 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-160798 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.50s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.48s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 7.189019ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-dd2ll" [d1f46cae-c754-45ad-ad47-12977318bd53] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003543378s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kmbsf" [ee4c6981-ac74-48fb-9353-d9f38aba9aeb] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005157791s
addons_test.go:331: (dbg) Run:  kubectl --context addons-160798 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-160798 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-160798 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.679844589s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 ip
2025/04/07 12:09:20 [DEBUG] GET http://192.168.39.214:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.48s)
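The registry check above probes the service from inside the cluster and then fetches the VM IP. A hedged host-side equivalent, assuming the addon is still enabled and the reported IP (192.168.39.214) is unchanged, is to hit the registry's standard v2 API:

	# A 200 response with an (often empty) repository list means the registry is serving
	curl -s http://192.168.39.214:5000/v2/_catalog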

                                                
                                    
x
+
TestAddons/parallel/Ingress (18.13s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-160798 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-160798 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-160798 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a57eecb6-0760-4bbb-bdfa-91ab0ba10199] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a57eecb6-0760-4bbb-bdfa-91ab0ba10199] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.010558256s
I0407 12:09:31.959815 1243895 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-160798 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.214
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-160798 addons disable ingress-dns --alsologtostderr -v=1: (1.106281503s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-160798 addons disable ingress --alsologtostderr -v=1: (7.762050859s)
--- PASS: TestAddons/parallel/Ingress (18.13s)
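The ingress assertion above curls from inside the VM with a spoofed Host header. A rough host-side sketch, assuming ingress-nginx is still installed and bound to port 80 on the VM:

	# Send the same Host header at the VM IP reported by the ip subcommand
	IP=$(out/minikube-linux-amd64 -p addons-160798 ip)
	curl -s -H 'Host: nginx.example.com' "http://$IP/"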

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.08s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2b96n" [4fd69f2d-5637-4238-b3d5-3431b71a3726] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00409686s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-160798 addons disable inspektor-gadget --alsologtostderr -v=1: (6.0709971s)
--- PASS: TestAddons/parallel/InspektorGadget (12.08s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.76s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.922882ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-5s48v" [35c82e3c-5429-41be-9bbe-29a7f0a19c9e] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004744419s
addons_test.go:402: (dbg) Run:  kubectl --context addons-160798 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.76s)
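Once metrics-server reports healthy, the usual resource queries work; the first command below is the one the test runs, the second is an extra spot check:

	kubectl --context addons-160798 top pods -n kube-system
	kubectl --context addons-160798 top nodes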

                                                
                                    
x
+
TestAddons/parallel/CSI (59.73s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0407 12:09:11.134951 1243895 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0407 12:09:11.153640 1243895 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0407 12:09:11.153681 1243895 kapi.go:107] duration metric: took 18.744043ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 18.757797ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-160798 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-160798 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4f838615-d6f5-4062-ba25-794f462889b4] Pending
helpers_test.go:344: "task-pv-pod" [4f838615-d6f5-4062-ba25-794f462889b4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4f838615-d6f5-4062-ba25-794f462889b4] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003960417s
addons_test.go:511: (dbg) Run:  kubectl --context addons-160798 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-160798 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-160798 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-160798 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-160798 delete pod task-pv-pod: (1.242932749s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-160798 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-160798 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-160798 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [204aab8c-ed37-461f-93c7-5368879f60d8] Pending
helpers_test.go:344: "task-pv-pod-restore" [204aab8c-ed37-461f-93c7-5368879f60d8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [204aab8c-ed37-461f-93c7-5368879f60d8] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004059369s
addons_test.go:553: (dbg) Run:  kubectl --context addons-160798 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-160798 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-160798 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-160798 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.718481139s)
--- PASS: TestAddons/parallel/CSI (59.73s)
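The repeated jsonpath polling above waits for the PVC phase to reach Bound. An equivalent one-liner, offered only as a sketch since the test helper does its own polling, is kubectl's jsonpath wait (kubectl 1.23+):

	# Block until the claim is bound, mirroring the helper's 6m0s budget
	kubectl --context addons-160798 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m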

                                                
                                    
x
+
TestAddons/parallel/Headlamp (12.34s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-160798 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-160798 --alsologtostderr -v=1: (1.080742621s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-9nc5k" [31e75776-8ee8-4f40-9ddb-38a817803749] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-9nc5k" [31e75776-8ee8-4f40-9ddb-38a817803749] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-9nc5k" [31e75776-8ee8-4f40-9ddb-38a817803749] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004958274s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (12.34s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.82s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7dc7f9b5b8-hzkd5" [4d2954ef-0e69-4aed-9b54-ab87581a0129] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003348724s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.82s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.26s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-160798 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-160798 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-160798 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c9ecd7b6-9880-458e-989b-df63393fb918] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c9ecd7b6-9880-458e-989b-df63393fb918] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c9ecd7b6-9880-458e-989b-df63393fb918] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004943151s
addons_test.go:906: (dbg) Run:  kubectl --context addons-160798 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 ssh "cat /opt/local-path-provisioner/pvc-6e1caf9c-7924-4991-aab0-87cf6630d9a1_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-160798 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-160798 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.26s)
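The local-path test reads the provisioned file back over SSH. As a loosely related spot check, and assuming the addon registers the upstream provisioner's default StorageClass name (local-path), the class can be listed directly:

	kubectl --context addons-160798 get storageclass local-path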

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.82s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-hrbzz" [4790c727-f01f-431d-b514-beff760647be] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004071156s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.82s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-b9rg4" [94917c66-e81d-499a-8083-dbe27f7e55de] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004050596s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-160798 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-160798 addons disable yakd --alsologtostderr -v=1: (5.98675392s)
--- PASS: TestAddons/parallel/Yakd (11.99s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (91.16s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-160798
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-160798: (1m30.85005636s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-160798
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-160798
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-160798
--- PASS: TestAddons/StoppedEnableDisable (91.16s)

                                                
                                    
x
+
TestCertOptions (70.64s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-241447 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
E0407 13:17:16.575943 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-241447 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m9.486396244s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-241447 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-241447 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-241447 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-241447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-241447
--- PASS: TestCertOptions (70.64s)
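The openssl call above dumps the whole apiserver certificate; while the profile exists, the extra SANs passed on the command line can be pulled out directly, for example:

	out/minikube-linux-amd64 -p cert-options-241447 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'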

                                                
                                    
x
+
TestCertExpiration (272.97s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-659957 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
E0407 13:13:06.718121 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-659957 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m1.019046188s)
E0407 13:15:54.636833 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:54.643312 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:54.654835 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:54.676284 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:54.717805 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:54.799319 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:54.960700 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:55.282576 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:55.923970 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:57.206211 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:59.768139 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:16:04.889939 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:16:08.963445 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:16:15.131810 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:16:35.614034 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-659957 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-659957 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (30.985811271s)
helpers_test.go:175: Cleaning up "cert-expiration-659957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-659957
--- PASS: TestCertExpiration (272.97s)
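A quick way to confirm the second start actually reissued longer-lived certificates, assuming the same certificate path used by the cert-options test and a still-running profile, is to read the notAfter date:

	out/minikube-linux-amd64 -p cert-expiration-659957 ssh \
	  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"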

                                                
                                    
x
+
TestForceSystemdFlag (47.82s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-612987 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-612987 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (46.933980603s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-612987 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-612987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-612987
--- PASS: TestForceSystemdFlag (47.82s)
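The test dumps the whole containerd config over SSH; the setting --force-systemd is expected to flip is containerd's runc SystemdCgroup option, so a narrower check (a sketch, not what the test itself asserts) would be:

	out/minikube-linux-amd64 -p force-systemd-flag-612987 ssh \
	  "grep SystemdCgroup /etc/containerd/config.toml"
	# expected when systemd cgroups are forced: SystemdCgroup = true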

                                                
                                    
x
+
TestForceSystemdEnv (64.79s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-696417 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-696417 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m3.59410534s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-696417 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-696417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-696417
--- PASS: TestForceSystemdEnv (64.79s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.3s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0407 13:05:30.170191 1243895 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0407 13:05:30.170407 1243895 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0407 13:05:30.206923 1243895 install.go:62] docker-machine-driver-kvm2: exit status 1
W0407 13:05:30.207203 1243895 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0407 13:05:30.207278 1243895 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate106247599/001/docker-machine-driver-kvm2
I0407 13:05:30.327637 1243895 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate106247599/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc000013098 gz:0xc000013120 tar:0xc0000130d0 tar.bz2:0xc0000130e0 tar.gz:0xc0000130f0 tar.xz:0xc000013100 tar.zst:0xc000013110 tbz2:0xc0000130e0 tgz:0xc0000130f0 txz:0xc000013100 tzst:0xc000013110 xz:0xc000013128 zip:0xc000013140 zst:0xc000013150] Getters:map[file:0xc001a4d4d0 http:0xc0007e1b80 https:0xc0007e1bd0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0407 13:05:30.327695 1243895 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate106247599/001/docker-machine-driver-kvm2
I0407 13:05:30.950550 1243895 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0407 13:05:30.950670 1243895 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0407 13:05:30.985148 1243895 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0407 13:05:30.985182 1243895 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0407 13:05:30.985270 1243895 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0407 13:05:30.985299 1243895 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate106247599/002/docker-machine-driver-kvm2
I0407 13:05:31.010474 1243895 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate106247599/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc000013098 gz:0xc000013120 tar:0xc0000130d0 tar.bz2:0xc0000130e0 tar.gz:0xc0000130f0 tar.xz:0xc000013100 tar.zst:0xc000013110 tbz2:0xc0000130e0 tgz:0xc0000130f0 txz:0xc000013100 tzst:0xc000013110 xz:0xc000013128 zip:0xc000013140 zst:0xc000013150] Getters:map[file:0xc0019db590 http:0xc0018cc190 https:0xc0018cc1e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0407 13:05:31.010536 1243895 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate106247599/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.30s)

                                                
                                    
x
+
TestErrorSpam/setup (44.93s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-551109 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-551109 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-551109 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-551109 --driver=kvm2  --container-runtime=containerd: (44.934451749s)
--- PASS: TestErrorSpam/setup (44.93s)

                                                
                                    
x
+
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
x
+
TestErrorSpam/status (0.75s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 status
--- PASS: TestErrorSpam/status (0.75s)

                                                
                                    
x
+
TestErrorSpam/pause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 pause
--- PASS: TestErrorSpam/pause (1.56s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
x
+
TestErrorSpam/stop (4.57s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 stop: (1.362641204s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 stop: (1.603451598s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-551109 --log_dir /tmp/nospam-551109 stop: (1.604257245s)
--- PASS: TestErrorSpam/stop (4.57s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20602-1236688/.minikube/files/etc/test/nested/copy/1243895/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (83.01s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-233546 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0407 12:13:06.719207 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:13:06.725620 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:13:06.736988 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:13:06.758548 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:13:06.800047 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:13:06.881535 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:13:07.043207 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:13:07.365007 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:13:08.006754 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:13:09.288379 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:13:11.850454 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:13:16.972792 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:13:27.215114 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:13:47.697185 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-233546 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m23.011372224s)
--- PASS: TestFunctional/serial/StartWithProxy (83.01s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (70.29s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0407 12:14:00.415686 1243895 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-233546 --alsologtostderr -v=8
E0407 12:14:28.659695 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-233546 --alsologtostderr -v=8: (1m10.289610354s)
functional_test.go:680: soft start took 1m10.29038298s for "functional-233546" cluster.
I0407 12:15:10.705674 1243895 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (70.29s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-233546 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.76s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.76s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (0.95s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-233546 /tmp/TestFunctionalserialCacheCmdcacheadd_local608002032/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 cache add minikube-local-cache-test:functional-233546
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 cache delete minikube-local-cache-test:functional-233546
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-233546
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.95s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233546 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (224.961929ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)
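
The reload cycle above works because `cache add` keeps a host-side copy of each image; `cache reload` pushes every cached image back into the node. A sketch of the same sequence, with the expected failure in the middle:

    # remove the image from the node's containerd store
    out/minikube-linux-amd64 -p functional-233546 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # inspecti now fails: no such image present (exit status 1)
    out/minikube-linux-amd64 -p functional-233546 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # restore from the host cache, then verify
    out/minikube-linux-amd64 -p functional-233546 cache reload
    out/minikube-linux-amd64 -p functional-233546 ssh sudo crictl inspecti registry.k8s.io/pause:latest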

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 kubectl -- --context functional-233546 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-233546 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
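
Both kubectl tests exercise passthrough: `minikube kubectl` fetches a kubectl binary matching the cluster's Kubernetes version and forwards everything after `--` to it, while the second test invokes that fetched binary directly:

    # arguments after -- go to kubectl unchanged
    out/minikube-linux-amd64 -p functional-233546 kubectl -- --context functional-233546 get pods
    # or call the downloaded binary itself
    out/kubectl --context functional-233546 get pods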

                                                
                                    
TestFunctional/serial/ExtraConfig (44.75s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-233546 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0407 12:15:50.584343 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-233546 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.749131379s)
functional_test.go:778: restart took 44.749278436s for "functional-233546" cluster.
I0407 12:16:01.467553 1243895 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (44.75s)
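
`--extra-config` takes component.key=value pairs that are passed through to the named control-plane component, so the restart above re-provisions the apiserver with an extra admission plugin. A sketch; the verification step is an assumption (kube-apiserver-functional-233546 follows the usual kube-apiserver-<node> static-pod naming):

    out/minikube-linux-amd64 start -p functional-233546 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # assumed check: the flag should now appear on the apiserver static pod
    kubectl --context functional-233546 -n kube-system get pod kube-apiserver-functional-233546 \
      -o yaml | grep enable-admission-plugins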

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-233546 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
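
The health check above is a single label query over kube-system; the harness then asserts phase Running and a Ready status for each control-plane pod. The same query by hand:

    kubectl --context functional-233546 get po -l tier=control-plane -n kube-system -o=json
    # a quicker eyeball version of the same assertion
    kubectl --context functional-233546 get po -l tier=control-plane -n kube-system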

                                                
                                    
TestFunctional/serial/LogsCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-233546 logs: (1.396272967s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 logs --file /tmp/TestFunctionalserialLogsFileCmd4046284251/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-233546 logs --file /tmp/TestFunctionalserialLogsFileCmd4046284251/001/logs.txt: (1.364566938s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                    
TestFunctional/serial/InvalidService (4.45s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-233546 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-233546
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-233546: exit status 115 (278.668302ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.145:30592 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-233546 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.45s)
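
Exit status 115 maps to SVC_UNREACHABLE: the Service exists and has a NodePort, but no running pod backs it. The contents of testdata/invalidsvc.yaml are not shown in this log; a hypothetical equivalent is a NodePort Service named invalid-svc whose selector (say, app: no-such-app) matches no pods:

    # with such a manifest saved as invalidsvc.yaml (hypothetical stand-in for the testdata file)
    kubectl --context functional-233546 apply -f invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-233546   # expect exit 115
    kubectl --context functional-233546 delete -f invalidsvc.yaml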

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233546 config get cpus: exit status 14 (87.640211ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233546 config get cpus: exit status 14 (55.439061ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
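
Exit status 14 is the expected result of `config get` on an unset key; the test round-trips a key through set/get/unset. By hand:

    out/minikube-linux-amd64 -p functional-233546 config set cpus 2
    out/minikube-linux-amd64 -p functional-233546 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-233546 config unset cpus
    out/minikube-linux-amd64 -p functional-233546 config get cpus     # exit 14: key not found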

                                                
                                    
TestFunctional/parallel/DryRun (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-233546 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-233546 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (156.41197ms)

                                                
                                                
-- stdout --
	* [functional-233546] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-1236688/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1236688/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 12:16:09.369361 1250580 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:16:09.369578 1250580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:16:09.369603 1250580 out.go:358] Setting ErrFile to fd 2...
	I0407 12:16:09.369616 1250580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:16:09.369843 1250580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
	I0407 12:16:09.370423 1250580 out.go:352] Setting JSON to false
	I0407 12:16:09.371524 1250580 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":28715,"bootTime":1743999454,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:16:09.371667 1250580 start.go:139] virtualization: kvm guest
	I0407 12:16:09.374200 1250580 out.go:177] * [functional-233546] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:16:09.375652 1250580 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 12:16:09.376450 1250580 notify.go:220] Checking for updates...
	I0407 12:16:09.378335 1250580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:16:09.379553 1250580 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1236688/kubeconfig
	I0407 12:16:09.380717 1250580 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1236688/.minikube
	I0407 12:16:09.381800 1250580 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:16:09.383367 1250580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:16:09.385058 1250580 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 12:16:09.385527 1250580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:16:09.385619 1250580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:16:09.404063 1250580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36281
	I0407 12:16:09.404637 1250580 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:16:09.405283 1250580 main.go:141] libmachine: Using API Version  1
	I0407 12:16:09.405309 1250580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:16:09.405712 1250580 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:16:09.405923 1250580 main.go:141] libmachine: (functional-233546) Calling .DriverName
	I0407 12:16:09.406268 1250580 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:16:09.406709 1250580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:16:09.406775 1250580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:16:09.423382 1250580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40125
	I0407 12:16:09.423847 1250580 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:16:09.424310 1250580 main.go:141] libmachine: Using API Version  1
	I0407 12:16:09.424345 1250580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:16:09.424798 1250580 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:16:09.425005 1250580 main.go:141] libmachine: (functional-233546) Calling .DriverName
	I0407 12:16:09.463942 1250580 out.go:177] * Using the kvm2 driver based on existing profile
	I0407 12:16:09.465406 1250580 start.go:297] selected driver: kvm2
	I0407 12:16:09.465429 1250580 start.go:901] validating driver "kvm2" against &{Name:functional-233546 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-233546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:16:09.465590 1250580 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:16:09.467941 1250580 out.go:201] 
	W0407 12:16:09.469170 1250580 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0407 12:16:09.470354 1250580 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-233546 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.33s)
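
`--dry-run` runs the full validation path without creating or mutating anything, which is why the undersized `--memory 250MB` request is rejected with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while the second invocation, which reuses the existing profile's settings, succeeds:

    # fails validation: 250MiB is below the 1800MB usable minimum
    out/minikube-linux-amd64 start -p functional-233546 --dry-run --memory 250MB \
      --driver=kvm2 --container-runtime=containerd
    # passes: no memory override, existing profile settings are reused
    out/minikube-linux-amd64 start -p functional-233546 --dry-run \
      --driver=kvm2 --container-runtime=containerd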

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-233546 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-233546 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (167.284537ms)

                                                
                                                
-- stdout --
	* [functional-233546] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-1236688/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1236688/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 12:16:09.209879 1250523 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:16:09.209983 1250523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:16:09.209995 1250523 out.go:358] Setting ErrFile to fd 2...
	I0407 12:16:09.210001 1250523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:16:09.210385 1250523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
	I0407 12:16:09.211060 1250523 out.go:352] Setting JSON to false
	I0407 12:16:09.212461 1250523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":28715,"bootTime":1743999454,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:16:09.212565 1250523 start.go:139] virtualization: kvm guest
	I0407 12:16:09.215168 1250523 out.go:177] * [functional-233546] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0407 12:16:09.216874 1250523 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 12:16:09.216936 1250523 notify.go:220] Checking for updates...
	I0407 12:16:09.219141 1250523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:16:09.220349 1250523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1236688/kubeconfig
	I0407 12:16:09.221584 1250523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1236688/.minikube
	I0407 12:16:09.222715 1250523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:16:09.224248 1250523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:16:09.226040 1250523 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 12:16:09.226687 1250523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:16:09.226771 1250523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:16:09.245158 1250523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44179
	I0407 12:16:09.245661 1250523 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:16:09.246171 1250523 main.go:141] libmachine: Using API Version  1
	I0407 12:16:09.246192 1250523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:16:09.246722 1250523 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:16:09.246954 1250523 main.go:141] libmachine: (functional-233546) Calling .DriverName
	I0407 12:16:09.247282 1250523 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:16:09.247755 1250523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:16:09.247818 1250523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:16:09.266654 1250523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40509
	I0407 12:16:09.267188 1250523 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:16:09.267713 1250523 main.go:141] libmachine: Using API Version  1
	I0407 12:16:09.267734 1250523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:16:09.268183 1250523 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:16:09.268595 1250523 main.go:141] libmachine: (functional-233546) Calling .DriverName
	I0407 12:16:09.305489 1250523 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0407 12:16:09.306807 1250523 start.go:297] selected driver: kvm2
	I0407 12:16:09.306825 1250523 start.go:901] validating driver "kvm2" against &{Name:functional-233546 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-233546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.145 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:16:09.306945 1250523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:16:09.309131 1250523 out.go:201] 
	W0407 12:16:09.310300 1250523 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0407 12:16:09.311547 1250523 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
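
The French output is the same RSRC_INSUFFICIENT_REQ_MEMORY failure as in DryRun. The locale selection is not visible in this log; minikube localizes its messages from the standard locale environment, so a sketch (the LC_ALL value is an assumption):

    # assumed: select the French message catalog via the process locale
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-233546 --dry-run \
      --memory 250MB --driver=kvm2 --container-runtime=containerd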

                                                
                                    
TestFunctional/parallel/StatusCmd (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
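
`status -f` takes a Go template over the status struct, so single-field probes are easy to script. Note that the format string logged above spells its `kublet:` label with a typo; since that is literal label text rather than the `.Kubelet` field reference, the command still works. A couple of variants:

    out/minikube-linux-amd64 -p functional-233546 status -f '{{.Host}}'
    out/minikube-linux-amd64 -p functional-233546 status -o json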

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-233546 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-233546 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-tdmhm" [952bcf78-50f3-4246-ae40-908f2452f337] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-tdmhm" [952bcf78-50f3-4246-ae40-908f2452f337] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005120912s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.145:32255
functional_test.go:1692: http://192.168.39.145:32255: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-tdmhm

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.145:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.145:32255
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.60s)
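
The connect test is the usual deploy/expose/probe sequence; the echoserver response body above confirms the request actually traversed the NodePort. By hand:

    kubectl --context functional-233546 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-233546 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-233546 service hello-node-connect --url)
    curl "$URL"   # should echo the request back, as in the body above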

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (33.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [989674a7-f6cb-4467-9b7f-ab88459776d4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.023162748s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-233546 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-233546 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-233546 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-233546 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-233546 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5acca186-9e73-44d5-b0ca-10f8c0677edc] Pending
helpers_test.go:344: "sp-pod" [5acca186-9e73-44d5-b0ca-10f8c0677edc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5acca186-9e73-44d5-b0ca-10f8c0677edc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004566057s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-233546 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-233546 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-233546 delete -f testdata/storage-provisioner/pod.yaml: (1.312400797s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-233546 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cb7b4eb0-ebee-4672-8174-a15203d6c94c] Pending
helpers_test.go:344: "sp-pod" [cb7b4eb0-ebee-4672-8174-a15203d6c94c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cb7b4eb0-ebee-4672-8174-a15203d6c94c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.010371377s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-233546 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.41s)
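
The pass condition is that /tmp/mount/foo, created in the first sp-pod, is still visible from a brand-new sp-pod, i.e. the file lives on the PersistentVolume rather than in the container filesystem. Condensed from the steps above:

    kubectl --context functional-233546 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-233546 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-233546 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-233546 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-233546 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-233546 exec sp-pod -- ls /tmp/mount   # foo survives the pod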

                                                
                                    
TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh -n functional-233546 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 cp functional-233546:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4024997346/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh -n functional-233546 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh -n functional-233546 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.56s)
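
`minikube cp` copies in either direction between host and node, and the node-side target may be an absolute path that does not exist yet. The three directions exercised above:

    # host -> node
    out/minikube-linux-amd64 -p functional-233546 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # node -> host
    out/minikube-linux-amd64 -p functional-233546 cp functional-233546:/home/docker/cp-test.txt ./cp-test.txt
    # host -> node, creating the target directory on the fly
    out/minikube-linux-amd64 -p functional-233546 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt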

                                                
                                    
TestFunctional/parallel/MySQL (24.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-233546 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-grsg7" [f6b14f48-81fa-461d-8716-8c1afa566229] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-grsg7" [f6b14f48-81fa-461d-8716-8c1afa566229] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.002694623s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-233546 exec mysql-58ccfd96bb-grsg7 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-233546 exec mysql-58ccfd96bb-grsg7 -- mysql -ppassword -e "show databases;": exit status 1 (143.725354ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0407 12:16:42.893793 1243895 retry.go:31] will retry after 1.006348637s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-233546 exec mysql-58ccfd96bb-grsg7 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-233546 exec mysql-58ccfd96bb-grsg7 -- mysql -ppassword -e "show databases;": exit status 1 (115.055941ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0407 12:16:44.016426 1243895 retry.go:31] will retry after 1.864726593s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-233546 exec mysql-58ccfd96bb-grsg7 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-233546 exec mysql-58ccfd96bb-grsg7 -- mysql -ppassword -e "show databases;": exit status 1 (120.412673ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0407 12:16:46.002963 1243895 retry.go:31] will retry after 3.183695021s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-233546 exec mysql-58ccfd96bb-grsg7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.77s)
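
The three failed probes above are ordinary startup noise: the pod reaches Running before mysqld finishes initializing, so the harness retries with backoff until `show databases;` succeeds. A hand-rolled equivalent of that wait loop (using `deploy/mysql` to pick a pod is an assumption about the deployment name):

    # retry until mysqld accepts the connection
    until kubectl --context functional-233546 exec deploy/mysql -- \
        mysql -ppassword -e 'show databases;'; do
      sleep 2
    done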

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/1243895/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "sudo cat /etc/test/nested/copy/1243895/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
TestFunctional/parallel/CertSync (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/1243895.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "sudo cat /etc/ssl/certs/1243895.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/1243895.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "sudo cat /usr/share/ca-certificates/1243895.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/12438952.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "sudo cat /etc/ssl/certs/12438952.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/12438952.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "sudo cat /usr/share/ca-certificates/12438952.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.46s)
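
The test checks each certificate at three locations; the hash-named entries (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names of the kind tools resolve in /etc/ssl/certs. A spot check, assuming (as the test does) that the three paths hold the same certificate and that sha256sum is present in the guest image:

    out/minikube-linux-amd64 -p functional-233546 ssh "sudo sha256sum \
      /etc/ssl/certs/1243895.pem /usr/share/ca-certificates/1243895.pem /etc/ssl/certs/51391683.0"
    # identical checksums mean all three are the same certificate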

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-233546 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "sudo systemctl is-active docker"
I0407 12:16:15.131340 1243895 retry.go:31] will retry after 1.980190678s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:5db4a2f7-30c5-4c81-82d4-f64ee732c358 ResourceVersion:786 Generation:0 CreationTimestamp:2025-04-07 12:16:15 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001a4cf10 VolumeMode:0xc001a4cf20 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233546 ssh "sudo systemctl is-active docker": exit status 1 (229.985317ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233546 ssh "sudo systemctl is-active crio": exit status 1 (213.539284ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
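
`systemctl is-active` exits non-zero for inactive units (remote status 3 here), and the harness requires exactly that plus `inactive` on stdout for the runtimes that should be off. The positive control would be the active runtime:

    # the active runtime for this profile
    out/minikube-linux-amd64 -p functional-233546 ssh "sudo systemctl is-active containerd"   # active, exit 0
    # the disabled ones
    out/minikube-linux-amd64 -p functional-233546 ssh "sudo systemctl is-active docker"       # inactive, non-zero
    out/minikube-linux-amd64 -p functional-233546 ssh "sudo systemctl is-active crio"         # inactive, non-zero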

                                                
                                    
TestFunctional/parallel/License (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-233546 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-233546 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-2mtcl" [b7195cd8-f289-4379-844f-9bb4e80bf697] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-2mtcl" [b7195cd8-f289-4379-844f-9bb4e80bf697] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.014688794s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "365.958878ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "54.215458ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "324.316277ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "56.442776ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-233546 /tmp/TestFunctionalparallelMountCmdany-port3835906352/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744028172128234385" to /tmp/TestFunctionalparallelMountCmdany-port3835906352/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744028172128234385" to /tmp/TestFunctionalparallelMountCmdany-port3835906352/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744028172128234385" to /tmp/TestFunctionalparallelMountCmdany-port3835906352/001/test-1744028172128234385
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233546 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (248.394656ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0407 12:16:12.376996 1243895 retry.go:31] will retry after 321.445408ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  7 12:16 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  7 12:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  7 12:16 test-1744028172128234385
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh cat /mount-9p/test-1744028172128234385
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-233546 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9aa5358a-1712-4b82-b3a2-6dc42c0336d6] Pending
helpers_test.go:344: "busybox-mount" [9aa5358a-1712-4b82-b3a2-6dc42c0336d6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9aa5358a-1712-4b82-b3a2-6dc42c0336d6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9aa5358a-1712-4b82-b3a2-6dc42c0336d6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.00395604s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-233546 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-233546 /tmp/TestFunctionalparallelMountCmdany-port3835906352/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.64s)
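For reference, the 9p mount flow exercised above can be replayed by hand against the same profile using the commands the test drives (the host directory below is a placeholder; the test uses a per-run temp dir). This is a sketch, not part of the captured output:
$ out/minikube-linux-amd64 mount -p functional-233546 /tmp/some-host-dir:/mount-9p --alsologtostderr -v=1 &
$ out/minikube-linux-amd64 -p functional-233546 ssh "findmnt -T /mount-9p | grep 9p"
$ out/minikube-linux-amd64 -p functional-233546 ssh -- ls -la /mount-9p
$ out/minikube-linux-amd64 -p functional-233546 ssh "sudo umount -f /mount-9p"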

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.57s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-233546 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-233546
docker.io/kindest/kindnetd:v20241212-9f82dd49
docker.io/kicbase/echo-server:functional-233546
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-233546 image ls --format short --alsologtostderr:
I0407 12:16:27.230384 1252804 out.go:345] Setting OutFile to fd 1 ...
I0407 12:16:27.230503 1252804 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:16:27.230509 1252804 out.go:358] Setting ErrFile to fd 2...
I0407 12:16:27.230514 1252804 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:16:27.230761 1252804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
I0407 12:16:27.231336 1252804 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:16:27.231433 1252804 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:16:27.231846 1252804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:27.231911 1252804 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:27.248224 1252804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40497
I0407 12:16:27.248724 1252804 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:27.249320 1252804 main.go:141] libmachine: Using API Version  1
I0407 12:16:27.249352 1252804 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:27.249847 1252804 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:27.250102 1252804 main.go:141] libmachine: (functional-233546) Calling .GetState
I0407 12:16:27.252398 1252804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:27.252449 1252804 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:27.268702 1252804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39073
I0407 12:16:27.269186 1252804 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:27.269650 1252804 main.go:141] libmachine: Using API Version  1
I0407 12:16:27.269694 1252804 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:27.270052 1252804 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:27.270244 1252804 main.go:141] libmachine: (functional-233546) Calling .DriverName
I0407 12:16:27.270470 1252804 ssh_runner.go:195] Run: systemctl --version
I0407 12:16:27.270499 1252804 main.go:141] libmachine: (functional-233546) Calling .GetSSHHostname
I0407 12:16:27.273983 1252804 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:27.274517 1252804 main.go:141] libmachine: (functional-233546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:83:b5", ip: ""} in network mk-functional-233546: {Iface:virbr1 ExpiryTime:2025-04-07 13:12:51 +0000 UTC Type:0 Mac:52:54:00:cf:83:b5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:functional-233546 Clientid:01:52:54:00:cf:83:b5}
I0407 12:16:27.274557 1252804 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined IP address 192.168.39.145 and MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:27.274743 1252804 main.go:141] libmachine: (functional-233546) Calling .GetSSHPort
I0407 12:16:27.274941 1252804 main.go:141] libmachine: (functional-233546) Calling .GetSSHKeyPath
I0407 12:16:27.275122 1252804 main.go:141] libmachine: (functional-233546) Calling .GetSSHUsername
I0407 12:16:27.275282 1252804 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1236688/.minikube/machines/functional-233546/id_rsa Username:docker}
I0407 12:16:27.369032 1252804 ssh_runner.go:195] Run: sudo crictl images --output json
I0407 12:16:27.416396 1252804 main.go:141] libmachine: Making call to close driver server
I0407 12:16:27.416421 1252804 main.go:141] libmachine: (functional-233546) Calling .Close
I0407 12:16:27.416765 1252804 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:16:27.416794 1252804 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:16:27.416804 1252804 main.go:141] libmachine: Making call to close driver server
I0407 12:16:27.416836 1252804 main.go:141] libmachine: (functional-233546) Calling .Close
I0407 12:16:27.416896 1252804 main.go:141] libmachine: (functional-233546) DBG | Closing plugin on server side
I0407 12:16:27.417118 1252804 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:16:27.417135 1252804 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:16:27.417134 1252804 main.go:141] libmachine: (functional-233546) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
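The four ImageList subtests in this run differ only in the output format passed to image ls; the same listing can be requested manually in any of the formats the tests cover:
$ out/minikube-linux-amd64 -p functional-233546 image ls --format short
$ out/minikube-linux-amd64 -p functional-233546 image ls --format table
$ out/minikube-linux-amd64 -p functional-233546 image ls --format json
$ out/minikube-linux-amd64 -p functional-233546 image ls --format yaml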

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-233546 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-233546  | sha256:b47ec5 | 991B   |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:a9e7e6 | 57.7MB |
| registry.k8s.io/kube-controller-manager     | v1.32.2            | sha256:b6a454 | 26.3MB |
| registry.k8s.io/kube-scheduler              | v1.32.2            | sha256:d8e673 | 20.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-apiserver              | v1.32.2            | sha256:85b7a1 | 28.7MB |
| registry.k8s.io/kube-proxy                  | v1.32.2            | sha256:f13328 | 30.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/library/nginx                     | latest             | sha256:53a18e | 72.2MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/kicbase/echo-server               | functional-233546  | sha256:9056ab | 2.37MB |
| docker.io/kindest/kindnetd                  | v20241212-9f82dd49 | sha256:d30084 | 39MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-233546 image ls --format table --alsologtostderr:
I0407 12:16:27.741101 1252851 out.go:345] Setting OutFile to fd 1 ...
I0407 12:16:27.741391 1252851 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:16:27.741402 1252851 out.go:358] Setting ErrFile to fd 2...
I0407 12:16:27.741406 1252851 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:16:27.741602 1252851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
I0407 12:16:27.742167 1252851 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:16:27.742280 1252851 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:16:27.742658 1252851 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:27.742715 1252851 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:27.758691 1252851 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35989
I0407 12:16:27.759201 1252851 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:27.759778 1252851 main.go:141] libmachine: Using API Version  1
I0407 12:16:27.759805 1252851 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:27.760169 1252851 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:27.760345 1252851 main.go:141] libmachine: (functional-233546) Calling .GetState
I0407 12:16:27.762048 1252851 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:27.762091 1252851 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:27.778048 1252851 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43583
I0407 12:16:27.778553 1252851 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:27.779030 1252851 main.go:141] libmachine: Using API Version  1
I0407 12:16:27.779053 1252851 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:27.779371 1252851 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:27.779543 1252851 main.go:141] libmachine: (functional-233546) Calling .DriverName
I0407 12:16:27.779730 1252851 ssh_runner.go:195] Run: systemctl --version
I0407 12:16:27.779757 1252851 main.go:141] libmachine: (functional-233546) Calling .GetSSHHostname
I0407 12:16:27.782394 1252851 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:27.782813 1252851 main.go:141] libmachine: (functional-233546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:83:b5", ip: ""} in network mk-functional-233546: {Iface:virbr1 ExpiryTime:2025-04-07 13:12:51 +0000 UTC Type:0 Mac:52:54:00:cf:83:b5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:functional-233546 Clientid:01:52:54:00:cf:83:b5}
I0407 12:16:27.782847 1252851 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined IP address 192.168.39.145 and MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:27.783006 1252851 main.go:141] libmachine: (functional-233546) Calling .GetSSHPort
I0407 12:16:27.783173 1252851 main.go:141] libmachine: (functional-233546) Calling .GetSSHKeyPath
I0407 12:16:27.783377 1252851 main.go:141] libmachine: (functional-233546) Calling .GetSSHUsername
I0407 12:16:27.783513 1252851 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1236688/.minikube/machines/functional-233546/id_rsa Username:docker}
I0407 12:16:27.872636 1252851 ssh_runner.go:195] Run: sudo crictl images --output json
I0407 12:16:27.925256 1252851 main.go:141] libmachine: Making call to close driver server
I0407 12:16:27.925281 1252851 main.go:141] libmachine: (functional-233546) Calling .Close
I0407 12:16:27.925580 1252851 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:16:27.925600 1252851 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:16:27.925608 1252851 main.go:141] libmachine: Making call to close driver server
I0407 12:16:27.925615 1252851 main.go:141] libmachine: (functional-233546) Calling .Close
I0407 12:16:27.925879 1252851 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:16:27.925888 1252851 main.go:141] libmachine: (functional-233546) DBG | Closing plugin on server side
I0407 12:16:27.925900 1252851 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-233546 image ls --format json --alsologtostderr:
[{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"28670731"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{
"id":"sha256:d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"39008320"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"20657902"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:b6a454c5a800d201daa
cead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"26259392"},{"id":"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"30907858"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-233546"],"size":"2372971"},{"id":"sha256:b47ec5c8e37267d8b618d8c4e84229e9763faedda34c6994478fb877f0042208","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional
-233546"],"size":"991"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:53a18edff8091d5faff1e42b4d885bc5
f0f897873b0b8f0ace236cd5930819b0","repoDigests":["docker.io/library/nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19"],"repoTags":["docker.io/library/nginx:latest"],"size":"72180980"},{"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"57680541"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-233546 image ls --format json --alsologtostderr:
I0407 12:16:27.508956 1252827 out.go:345] Setting OutFile to fd 1 ...
I0407 12:16:27.509090 1252827 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:16:27.509103 1252827 out.go:358] Setting ErrFile to fd 2...
I0407 12:16:27.509111 1252827 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:16:27.509306 1252827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
I0407 12:16:27.509919 1252827 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:16:27.510022 1252827 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:16:27.510403 1252827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:27.510470 1252827 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:27.527281 1252827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46411
I0407 12:16:27.527978 1252827 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:27.528646 1252827 main.go:141] libmachine: Using API Version  1
I0407 12:16:27.528672 1252827 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:27.529088 1252827 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:27.529339 1252827 main.go:141] libmachine: (functional-233546) Calling .GetState
I0407 12:16:27.531163 1252827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:27.531209 1252827 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:27.547994 1252827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43767
I0407 12:16:27.548472 1252827 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:27.548959 1252827 main.go:141] libmachine: Using API Version  1
I0407 12:16:27.548995 1252827 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:27.549358 1252827 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:27.549569 1252827 main.go:141] libmachine: (functional-233546) Calling .DriverName
I0407 12:16:27.549818 1252827 ssh_runner.go:195] Run: systemctl --version
I0407 12:16:27.549848 1252827 main.go:141] libmachine: (functional-233546) Calling .GetSSHHostname
I0407 12:16:27.553111 1252827 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:27.553583 1252827 main.go:141] libmachine: (functional-233546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:83:b5", ip: ""} in network mk-functional-233546: {Iface:virbr1 ExpiryTime:2025-04-07 13:12:51 +0000 UTC Type:0 Mac:52:54:00:cf:83:b5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:functional-233546 Clientid:01:52:54:00:cf:83:b5}
I0407 12:16:27.553614 1252827 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined IP address 192.168.39.145 and MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:27.553780 1252827 main.go:141] libmachine: (functional-233546) Calling .GetSSHPort
I0407 12:16:27.553939 1252827 main.go:141] libmachine: (functional-233546) Calling .GetSSHKeyPath
I0407 12:16:27.554101 1252827 main.go:141] libmachine: (functional-233546) Calling .GetSSHUsername
I0407 12:16:27.554290 1252827 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1236688/.minikube/machines/functional-233546/id_rsa Username:docker}
I0407 12:16:27.638437 1252827 ssh_runner.go:195] Run: sudo crictl images --output json
I0407 12:16:27.688697 1252827 main.go:141] libmachine: Making call to close driver server
I0407 12:16:27.688722 1252827 main.go:141] libmachine: (functional-233546) Calling .Close
I0407 12:16:27.689047 1252827 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:16:27.689079 1252827 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:16:27.689086 1252827 main.go:141] libmachine: Making call to close driver server
I0407 12:16:27.689085 1252827 main.go:141] libmachine: (functional-233546) DBG | Closing plugin on server side
I0407 12:16:27.689101 1252827 main.go:141] libmachine: (functional-233546) Calling .Close
I0407 12:16:27.689355 1252827 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:16:27.689410 1252827 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:16:27.689384 1252827 main.go:141] libmachine: (functional-233546) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-233546 image ls --format yaml --alsologtostderr:
- id: sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "20657902"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-233546
size: "2372971"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "57680541"
- id: sha256:d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "39008320"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:b47ec5c8e37267d8b618d8c4e84229e9763faedda34c6994478fb877f0042208
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-233546
size: "991"
- id: sha256:53a18edff8091d5faff1e42b4d885bc5f0f897873b0b8f0ace236cd5930819b0
repoDigests:
- docker.io/library/nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19
repoTags:
- docker.io/library/nginx:latest
size: "72180980"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "28670731"
- id: sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "26259392"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "30907858"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-233546 image ls --format yaml --alsologtostderr:
I0407 12:16:27.980125 1252875 out.go:345] Setting OutFile to fd 1 ...
I0407 12:16:27.980257 1252875 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:16:27.980268 1252875 out.go:358] Setting ErrFile to fd 2...
I0407 12:16:27.980273 1252875 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:16:27.980493 1252875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
I0407 12:16:27.981065 1252875 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:16:27.981163 1252875 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:16:27.981505 1252875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:27.981574 1252875 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:27.997743 1252875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36803
I0407 12:16:27.998303 1252875 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:27.998819 1252875 main.go:141] libmachine: Using API Version  1
I0407 12:16:27.998845 1252875 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:27.999220 1252875 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:27.999412 1252875 main.go:141] libmachine: (functional-233546) Calling .GetState
I0407 12:16:28.001530 1252875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:28.001596 1252875 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:28.016984 1252875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36295
I0407 12:16:28.017412 1252875 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:28.017868 1252875 main.go:141] libmachine: Using API Version  1
I0407 12:16:28.017892 1252875 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:28.018247 1252875 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:28.018449 1252875 main.go:141] libmachine: (functional-233546) Calling .DriverName
I0407 12:16:28.018641 1252875 ssh_runner.go:195] Run: systemctl --version
I0407 12:16:28.018671 1252875 main.go:141] libmachine: (functional-233546) Calling .GetSSHHostname
I0407 12:16:28.020973 1252875 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:28.021415 1252875 main.go:141] libmachine: (functional-233546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:83:b5", ip: ""} in network mk-functional-233546: {Iface:virbr1 ExpiryTime:2025-04-07 13:12:51 +0000 UTC Type:0 Mac:52:54:00:cf:83:b5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:functional-233546 Clientid:01:52:54:00:cf:83:b5}
I0407 12:16:28.021454 1252875 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined IP address 192.168.39.145 and MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:28.021540 1252875 main.go:141] libmachine: (functional-233546) Calling .GetSSHPort
I0407 12:16:28.021704 1252875 main.go:141] libmachine: (functional-233546) Calling .GetSSHKeyPath
I0407 12:16:28.021875 1252875 main.go:141] libmachine: (functional-233546) Calling .GetSSHUsername
I0407 12:16:28.021991 1252875 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1236688/.minikube/machines/functional-233546/id_rsa Username:docker}
I0407 12:16:28.105724 1252875 ssh_runner.go:195] Run: sudo crictl images --output json
I0407 12:16:28.154600 1252875 main.go:141] libmachine: Making call to close driver server
I0407 12:16:28.154614 1252875 main.go:141] libmachine: (functional-233546) Calling .Close
I0407 12:16:28.154908 1252875 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:16:28.154934 1252875 main.go:141] libmachine: (functional-233546) DBG | Closing plugin on server side
I0407 12:16:28.154940 1252875 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:16:28.154979 1252875 main.go:141] libmachine: Making call to close driver server
I0407 12:16:28.154986 1252875 main.go:141] libmachine: (functional-233546) Calling .Close
I0407 12:16:28.155236 1252875 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:16:28.155282 1252875 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:16:28.155262 1252875 main.go:141] libmachine: (functional-233546) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233546 ssh pgrep buildkitd: exit status 1 (201.1948ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image build -t localhost/my-image:functional-233546 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-233546 image build -t localhost/my-image:functional-233546 testdata/build --alsologtostderr: (3.485675237s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-233546 image build -t localhost/my-image:functional-233546 testdata/build --alsologtostderr:
I0407 12:16:28.415270 1252944 out.go:345] Setting OutFile to fd 1 ...
I0407 12:16:28.415391 1252944 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:16:28.415401 1252944 out.go:358] Setting ErrFile to fd 2...
I0407 12:16:28.415408 1252944 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:16:28.415680 1252944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
I0407 12:16:28.416532 1252944 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:16:28.417268 1252944 config.go:182] Loaded profile config "functional-233546": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0407 12:16:28.417630 1252944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:28.417672 1252944 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:28.434267 1252944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34487
I0407 12:16:28.434726 1252944 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:28.435411 1252944 main.go:141] libmachine: Using API Version  1
I0407 12:16:28.435441 1252944 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:28.435796 1252944 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:28.435993 1252944 main.go:141] libmachine: (functional-233546) Calling .GetState
I0407 12:16:28.437765 1252944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0407 12:16:28.437820 1252944 main.go:141] libmachine: Launching plugin server for driver kvm2
I0407 12:16:28.454064 1252944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42221
I0407 12:16:28.454515 1252944 main.go:141] libmachine: () Calling .GetVersion
I0407 12:16:28.455030 1252944 main.go:141] libmachine: Using API Version  1
I0407 12:16:28.455059 1252944 main.go:141] libmachine: () Calling .SetConfigRaw
I0407 12:16:28.455420 1252944 main.go:141] libmachine: () Calling .GetMachineName
I0407 12:16:28.455630 1252944 main.go:141] libmachine: (functional-233546) Calling .DriverName
I0407 12:16:28.455838 1252944 ssh_runner.go:195] Run: systemctl --version
I0407 12:16:28.455868 1252944 main.go:141] libmachine: (functional-233546) Calling .GetSSHHostname
I0407 12:16:28.458879 1252944 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:28.459298 1252944 main.go:141] libmachine: (functional-233546) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:83:b5", ip: ""} in network mk-functional-233546: {Iface:virbr1 ExpiryTime:2025-04-07 13:12:51 +0000 UTC Type:0 Mac:52:54:00:cf:83:b5 Iaid: IPaddr:192.168.39.145 Prefix:24 Hostname:functional-233546 Clientid:01:52:54:00:cf:83:b5}
I0407 12:16:28.459334 1252944 main.go:141] libmachine: (functional-233546) DBG | domain functional-233546 has defined IP address 192.168.39.145 and MAC address 52:54:00:cf:83:b5 in network mk-functional-233546
I0407 12:16:28.459465 1252944 main.go:141] libmachine: (functional-233546) Calling .GetSSHPort
I0407 12:16:28.459639 1252944 main.go:141] libmachine: (functional-233546) Calling .GetSSHKeyPath
I0407 12:16:28.459808 1252944 main.go:141] libmachine: (functional-233546) Calling .GetSSHUsername
I0407 12:16:28.459980 1252944 sshutil.go:53] new ssh client: &{IP:192.168.39.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1236688/.minikube/machines/functional-233546/id_rsa Username:docker}
I0407 12:16:28.541454 1252944 build_images.go:161] Building image from path: /tmp/build.2658175266.tar
I0407 12:16:28.541555 1252944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0407 12:16:28.553165 1252944 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2658175266.tar
I0407 12:16:28.560843 1252944 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2658175266.tar: stat -c "%s %y" /var/lib/minikube/build/build.2658175266.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2658175266.tar': No such file or directory
I0407 12:16:28.560884 1252944 ssh_runner.go:362] scp /tmp/build.2658175266.tar --> /var/lib/minikube/build/build.2658175266.tar (3072 bytes)
I0407 12:16:28.586642 1252944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2658175266
I0407 12:16:28.595866 1252944 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2658175266 -xf /var/lib/minikube/build/build.2658175266.tar
I0407 12:16:28.605046 1252944 containerd.go:394] Building image: /var/lib/minikube/build/build.2658175266
I0407 12:16:28.605106 1252944 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2658175266 --local dockerfile=/var/lib/minikube/build/build.2658175266 --output type=image,name=localhost/my-image:functional-233546
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.8s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:1a3c1affea37d3d901dccf311827581c8548a86615c7e5110420787c70160ae8
#8 exporting manifest sha256:1a3c1affea37d3d901dccf311827581c8548a86615c7e5110420787c70160ae8 0.0s done
#8 exporting config sha256:a27ab3e918d546627729585b86c4191c4e723d19d235085cf4d89ed31c598e62 0.0s done
#8 naming to localhost/my-image:functional-233546 done
#8 DONE 0.2s
I0407 12:16:31.781997 1252944 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2658175266 --local dockerfile=/var/lib/minikube/build/build.2658175266 --output type=image,name=localhost/my-image:functional-233546: (3.17685433s)
I0407 12:16:31.782091 1252944 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2658175266
I0407 12:16:31.819773 1252944 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2658175266.tar
I0407 12:16:31.840471 1252944 build_images.go:217] Built localhost/my-image:functional-233546 from /tmp/build.2658175266.tar
I0407 12:16:31.840524 1252944 build_images.go:133] succeeded building to: functional-233546
I0407 12:16:31.840531 1252944 build_images.go:134] failed building to: 
I0407 12:16:31.840564 1252944 main.go:141] libmachine: Making call to close driver server
I0407 12:16:31.840577 1252944 main.go:141] libmachine: (functional-233546) Calling .Close
I0407 12:16:31.840893 1252944 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:16:31.840912 1252944 main.go:141] libmachine: Making call to close connection to plugin binary
I0407 12:16:31.840920 1252944 main.go:141] libmachine: Making call to close driver server
I0407 12:16:31.840927 1252944 main.go:141] libmachine: (functional-233546) Calling .Close
I0407 12:16:31.840932 1252944 main.go:141] libmachine: (functional-233546) DBG | Closing plugin on server side
I0407 12:16:31.841260 1252944 main.go:141] libmachine: Successfully made call to close driver server
I0407 12:16:31.841306 1252944 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)
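Judging from the BuildKit steps above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), the Dockerfile under testdata/build is a three-step build along these lines; this is an inference from the log, not the file's verbatim contents:
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
The build itself is driven through the command shown in the test output:
$ out/minikube-linux-amd64 -p functional-233546 image build -t localhost/my-image:functional-233546 testdata/build --alsologtostderr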

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.45s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-233546
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.99s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image load --daemon kicbase/echo-server:functional-233546 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-233546 image load --daemon kicbase/echo-server:functional-233546 --alsologtostderr: (1.747095956s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.99s)
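The daemon-load path exercised here copies an image from the local Docker daemon into the cluster's containerd image store; the sequence used by this run boils down to the following commands (all taken from the Setup and ImageLoadDaemon steps above):
$ docker pull kicbase/echo-server:1.0
$ docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-233546
$ out/minikube-linux-amd64 -p functional-233546 image load --daemon kicbase/echo-server:functional-233546 --alsologtostderr
$ out/minikube-linux-amd64 -p functional-233546 image ls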

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image load --daemon kicbase/echo-server:functional-233546 --alsologtostderr
functional_test.go:382: (dbg) Done: out/minikube-linux-amd64 -p functional-233546 image load --daemon kicbase/echo-server:functional-233546 --alsologtostderr: (1.156689979s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.66s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-233546
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image load --daemon kicbase/echo-server:functional-233546 --alsologtostderr
functional_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p functional-233546 image load --daemon kicbase/echo-server:functional-233546 --alsologtostderr: (2.21282895s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.66s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 service list -o json
functional_test.go:1511: Took "457.786005ms" to run "out/minikube-linux-amd64 -p functional-233546 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.84s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.145:30108
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.84s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.145:30108
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
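The ServiceCmd subtests above resolve the hello-node NodePort endpoint in several ways; each can be replayed manually against the same profile with the commands the tests ran:
$ out/minikube-linux-amd64 -p functional-233546 service list
$ out/minikube-linux-amd64 -p functional-233546 service list -o json
$ out/minikube-linux-amd64 -p functional-233546 service --namespace=default --https --url hello-node
$ out/minikube-linux-amd64 -p functional-233546 service hello-node --url --format={{.IP}}
$ out/minikube-linux-amd64 -p functional-233546 service hello-node --url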

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image save kicbase/echo-server:functional-233546 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image rm kicbase/echo-server:functional-233546 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.00s)
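
Taken together, ImageSaveToFile above and this ImageLoadFromFile run exercise a full image round-trip through a tarball. A minimal manual sketch of the same sequence, reusing the subcommands from this log (the tarball path here is a stand-in for the workspace path above):

$ out/minikube-linux-amd64 -p functional-233546 image save kicbase/echo-server:functional-233546 /tmp/echo-server-save.tar
$ out/minikube-linux-amd64 -p functional-233546 image rm kicbase/echo-server:functional-233546
$ out/minikube-linux-amd64 -p functional-233546 image load /tmp/echo-server-save.tar
$ out/minikube-linux-amd64 -p functional-233546 image ls    # the echo-server tag should be listed again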

TestFunctional/parallel/MountCmd/specific-port (1.67s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-233546 /tmp/TestFunctionalparallelMountCmdspecific-port1171163381/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233546 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (253.694206ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0407 12:16:24.018309 1243895 retry.go:31] will retry after 333.996438ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-233546 /tmp/TestFunctionalparallelMountCmdspecific-port1171163381/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233546 ssh "sudo umount -f /mount-9p": exit status 1 (214.611316ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-233546 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-233546 /tmp/TestFunctionalparallelMountCmdspecific-port1171163381/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.67s)
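
Note on the two non-zero exits above: the first findmnt probe runs before the 9p mount daemon has finished mounting, so retry.go waits ~334ms and the second probe succeeds; the final "umount: /mount-9p: not mounted" (exit status 32) is tolerated because the mount was already torn down when the daemon was stopped. A minimal manual reproduction of the check, using the same commands as the test (local source path shortened):

$ out/minikube-linux-amd64 mount -p functional-233546 /tmp/mount-src:/mount-9p --port 46464 &
$ out/minikube-linux-amd64 -p functional-233546 ssh "findmnt -T /mount-9p | grep 9p"
$ out/minikube-linux-amd64 -p functional-233546 ssh -- ls -la /mount-9p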

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-233546
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 image save --daemon kicbase/echo-server:functional-233546 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-233546
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.83s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-233546 /tmp/TestFunctionalparallelMountCmdVerifyCleanup321482997/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-233546 /tmp/TestFunctionalparallelMountCmdVerifyCleanup321482997/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-233546 /tmp/TestFunctionalparallelMountCmdVerifyCleanup321482997/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-233546 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-233546 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-233546 /tmp/TestFunctionalparallelMountCmdVerifyCleanup321482997/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-233546 /tmp/TestFunctionalparallelMountCmdVerifyCleanup321482997/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-233546 /tmp/TestFunctionalparallelMountCmdVerifyCleanup321482997/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.83s)
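
VerifyCleanup leans on "mount --kill=true", which terminates every mount daemon for the profile in one shot; the three "unable to find parent, assuming dead" lines confirm all three background mounts were already gone by the time the per-mount stop ran. The cleanup path, sketched with the same commands as above:

$ out/minikube-linux-amd64 mount -p functional-233546 /tmp/src:/mount1 --alsologtostderr -v=1 &
$ out/minikube-linux-amd64 -p functional-233546 ssh "findmnt -T" /mount1
$ out/minikube-linux-amd64 mount -p functional-233546 --kill=true    # kills all mount daemons for the profile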

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-233546
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-233546
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-233546
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (181.6s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-929265 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0407 12:18:06.718463 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:18:34.425895 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-929265 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m0.910138719s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (181.60s)
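
The --ha flag provisions a multi-control-plane cluster: the status output later in this log shows three control-plane nodes (ha-929265, -m02, -m03) behind the shared endpoint https://192.168.39.254:8443 that the apiserver health checks below target. A sketch of verifying the topology after such a start (kubectl get nodes is the same check the later subtests run):

$ out/minikube-linux-amd64 start -p ha-929265 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=containerd
$ out/minikube-linux-amd64 -p ha-929265 status -v=7 --alsologtostderr
$ kubectl get nodes    # expect three control-plane nodes at this point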

TestMultiControlPlane/serial/DeployApp (5.11s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-929265 -- rollout status deployment/busybox: (2.895395836s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-mjlsh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-ngqbt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-ztxbt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-mjlsh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-ngqbt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-ztxbt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-mjlsh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-ngqbt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-ztxbt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.11s)

TestMultiControlPlane/serial/PingHostFromPods (1.19s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-mjlsh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-mjlsh -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-ngqbt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-ngqbt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-ztxbt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-ztxbt -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)
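
The pipeline above extracts the host IP from BusyBox nslookup output before pinging it: assuming BusyBox's usual layout, line 5 (awk 'NR==5') is the "Address 1: <ip>" line for the queried name, and cut -d' ' -f3 takes its third space-separated field, the address itself; hence the follow-up pings target 192.168.39.1, the host-side gateway of the KVM network. Sketch of one leg:

$ out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-mjlsh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
192.168.39.1
$ out/minikube-linux-amd64 kubectl -p ha-929265 -- exec busybox-58667487b6-mjlsh -- sh -c "ping -c 1 192.168.39.1"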

TestMultiControlPlane/serial/AddWorkerNode (54.65s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-929265 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-929265 -v=7 --alsologtostderr: (53.772521982s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.65s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-929265 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.91s)

TestMultiControlPlane/serial/CopyFile (13.16s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp testdata/cp-test.txt ha-929265:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp ha-929265:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2550544262/001/cp-test_ha-929265.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp ha-929265:/home/docker/cp-test.txt ha-929265-m02:/home/docker/cp-test_ha-929265_ha-929265-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m02 "sudo cat /home/docker/cp-test_ha-929265_ha-929265-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp ha-929265:/home/docker/cp-test.txt ha-929265-m03:/home/docker/cp-test_ha-929265_ha-929265-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m03 "sudo cat /home/docker/cp-test_ha-929265_ha-929265-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp ha-929265:/home/docker/cp-test.txt ha-929265-m04:/home/docker/cp-test_ha-929265_ha-929265-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m04 "sudo cat /home/docker/cp-test_ha-929265_ha-929265-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp testdata/cp-test.txt ha-929265-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp ha-929265-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2550544262/001/cp-test_ha-929265-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp ha-929265-m02:/home/docker/cp-test.txt ha-929265:/home/docker/cp-test_ha-929265-m02_ha-929265.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265 "sudo cat /home/docker/cp-test_ha-929265-m02_ha-929265.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp ha-929265-m02:/home/docker/cp-test.txt ha-929265-m03:/home/docker/cp-test_ha-929265-m02_ha-929265-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m03 "sudo cat /home/docker/cp-test_ha-929265-m02_ha-929265-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp ha-929265-m02:/home/docker/cp-test.txt ha-929265-m04:/home/docker/cp-test_ha-929265-m02_ha-929265-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m04 "sudo cat /home/docker/cp-test_ha-929265-m02_ha-929265-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp testdata/cp-test.txt ha-929265-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp ha-929265-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2550544262/001/cp-test_ha-929265-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp ha-929265-m03:/home/docker/cp-test.txt ha-929265:/home/docker/cp-test_ha-929265-m03_ha-929265.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265 "sudo cat /home/docker/cp-test_ha-929265-m03_ha-929265.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp ha-929265-m03:/home/docker/cp-test.txt ha-929265-m02:/home/docker/cp-test_ha-929265-m03_ha-929265-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m02 "sudo cat /home/docker/cp-test_ha-929265-m03_ha-929265-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp ha-929265-m03:/home/docker/cp-test.txt ha-929265-m04:/home/docker/cp-test_ha-929265-m03_ha-929265-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m04 "sudo cat /home/docker/cp-test_ha-929265-m03_ha-929265-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp testdata/cp-test.txt ha-929265-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp ha-929265-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2550544262/001/cp-test_ha-929265-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp ha-929265-m04:/home/docker/cp-test.txt ha-929265:/home/docker/cp-test_ha-929265-m04_ha-929265.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265 "sudo cat /home/docker/cp-test_ha-929265-m04_ha-929265.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp ha-929265-m04:/home/docker/cp-test.txt ha-929265-m02:/home/docker/cp-test_ha-929265-m04_ha-929265-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m02 "sudo cat /home/docker/cp-test_ha-929265-m04_ha-929265-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 cp ha-929265-m04:/home/docker/cp-test.txt ha-929265-m03:/home/docker/cp-test_ha-929265-m04_ha-929265-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m03 "sudo cat /home/docker/cp-test_ha-929265-m04_ha-929265-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.16s)
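
CopyFile runs the same round-trip for every source/destination pair across the four nodes plus the host: cp the test file in, cat it back over ssh, then cp node-to-node and cat on the receiving side. One representative leg, exactly as run above:

$ out/minikube-linux-amd64 -p ha-929265 cp testdata/cp-test.txt ha-929265:/home/docker/cp-test.txt
$ out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265 "sudo cat /home/docker/cp-test.txt"
$ out/minikube-linux-amd64 -p ha-929265 cp ha-929265:/home/docker/cp-test.txt ha-929265-m02:/home/docker/cp-test_ha-929265_ha-929265-m02.txt
$ out/minikube-linux-amd64 -p ha-929265 ssh -n ha-929265-m02 "sudo cat /home/docker/cp-test_ha-929265_ha-929265-m02.txt"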

TestMultiControlPlane/serial/StopSecondaryNode (91.31s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 node stop m02 -v=7 --alsologtostderr
E0407 12:21:08.962549 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:21:08.968964 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:21:08.980363 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:21:09.001823 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:21:09.043266 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:21:09.124754 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:21:09.286249 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:21:09.607957 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:21:10.250068 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:21:11.532237 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:21:14.094240 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:21:19.216342 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:21:29.458689 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:21:49.940495 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:22:30.901926 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-929265 node stop m02 -v=7 --alsologtostderr: (1m30.664373292s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-929265 status -v=7 --alsologtostderr: exit status 7 (646.504543ms)
-- stdout --
	ha-929265
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929265-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-929265-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-929265-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0407 12:22:37.681275 1257990 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:22:37.681411 1257990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:22:37.681420 1257990 out.go:358] Setting ErrFile to fd 2...
	I0407 12:22:37.681424 1257990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:22:37.681742 1257990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
	I0407 12:22:37.681915 1257990 out.go:352] Setting JSON to false
	I0407 12:22:37.681948 1257990 mustload.go:65] Loading cluster: ha-929265
	I0407 12:22:37.682030 1257990 notify.go:220] Checking for updates...
	I0407 12:22:37.682382 1257990 config.go:182] Loaded profile config "ha-929265": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 12:22:37.682428 1257990 status.go:174] checking status of ha-929265 ...
	I0407 12:22:37.682985 1257990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:22:37.683041 1257990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:22:37.702029 1257990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0407 12:22:37.702498 1257990 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:22:37.703036 1257990 main.go:141] libmachine: Using API Version  1
	I0407 12:22:37.703067 1257990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:22:37.703448 1257990 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:22:37.703657 1257990 main.go:141] libmachine: (ha-929265) Calling .GetState
	I0407 12:22:37.705278 1257990 status.go:371] ha-929265 host status = "Running" (err=<nil>)
	I0407 12:22:37.705296 1257990 host.go:66] Checking if "ha-929265" exists ...
	I0407 12:22:37.705649 1257990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:22:37.705703 1257990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:22:37.720538 1257990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45815
	I0407 12:22:37.721059 1257990 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:22:37.721654 1257990 main.go:141] libmachine: Using API Version  1
	I0407 12:22:37.721678 1257990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:22:37.722072 1257990 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:22:37.722293 1257990 main.go:141] libmachine: (ha-929265) Calling .GetIP
	I0407 12:22:37.725143 1257990 main.go:141] libmachine: (ha-929265) DBG | domain ha-929265 has defined MAC address 52:54:00:75:b4:73 in network mk-ha-929265
	I0407 12:22:37.725647 1257990 main.go:141] libmachine: (ha-929265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b4:73", ip: ""} in network mk-ha-929265: {Iface:virbr1 ExpiryTime:2025-04-07 13:17:04 +0000 UTC Type:0 Mac:52:54:00:75:b4:73 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-929265 Clientid:01:52:54:00:75:b4:73}
	I0407 12:22:37.725686 1257990 main.go:141] libmachine: (ha-929265) DBG | domain ha-929265 has defined IP address 192.168.39.44 and MAC address 52:54:00:75:b4:73 in network mk-ha-929265
	I0407 12:22:37.725803 1257990 host.go:66] Checking if "ha-929265" exists ...
	I0407 12:22:37.726075 1257990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:22:37.726129 1257990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:22:37.742189 1257990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46235
	I0407 12:22:37.742606 1257990 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:22:37.743049 1257990 main.go:141] libmachine: Using API Version  1
	I0407 12:22:37.743070 1257990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:22:37.743441 1257990 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:22:37.743665 1257990 main.go:141] libmachine: (ha-929265) Calling .DriverName
	I0407 12:22:37.743890 1257990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:22:37.743926 1257990 main.go:141] libmachine: (ha-929265) Calling .GetSSHHostname
	I0407 12:22:37.747216 1257990 main.go:141] libmachine: (ha-929265) DBG | domain ha-929265 has defined MAC address 52:54:00:75:b4:73 in network mk-ha-929265
	I0407 12:22:37.747727 1257990 main.go:141] libmachine: (ha-929265) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:b4:73", ip: ""} in network mk-ha-929265: {Iface:virbr1 ExpiryTime:2025-04-07 13:17:04 +0000 UTC Type:0 Mac:52:54:00:75:b4:73 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-929265 Clientid:01:52:54:00:75:b4:73}
	I0407 12:22:37.747750 1257990 main.go:141] libmachine: (ha-929265) DBG | domain ha-929265 has defined IP address 192.168.39.44 and MAC address 52:54:00:75:b4:73 in network mk-ha-929265
	I0407 12:22:37.747886 1257990 main.go:141] libmachine: (ha-929265) Calling .GetSSHPort
	I0407 12:22:37.748058 1257990 main.go:141] libmachine: (ha-929265) Calling .GetSSHKeyPath
	I0407 12:22:37.748222 1257990 main.go:141] libmachine: (ha-929265) Calling .GetSSHUsername
	I0407 12:22:37.748451 1257990 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1236688/.minikube/machines/ha-929265/id_rsa Username:docker}
	I0407 12:22:37.832366 1257990 ssh_runner.go:195] Run: systemctl --version
	I0407 12:22:37.839091 1257990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:22:37.856736 1257990 kubeconfig.go:125] found "ha-929265" server: "https://192.168.39.254:8443"
	I0407 12:22:37.856785 1257990 api_server.go:166] Checking apiserver status ...
	I0407 12:22:37.856829 1257990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:22:37.871589 1257990 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0407 12:22:37.880696 1257990 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0407 12:22:37.880821 1257990 ssh_runner.go:195] Run: ls
	I0407 12:22:37.885331 1257990 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0407 12:22:37.891556 1257990 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0407 12:22:37.891578 1257990 status.go:463] ha-929265 apiserver status = Running (err=<nil>)
	I0407 12:22:37.891588 1257990 status.go:176] ha-929265 status: &{Name:ha-929265 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:22:37.891604 1257990 status.go:174] checking status of ha-929265-m02 ...
	I0407 12:22:37.891916 1257990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:22:37.891956 1257990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:22:37.907660 1257990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34789
	I0407 12:22:37.908277 1257990 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:22:37.908759 1257990 main.go:141] libmachine: Using API Version  1
	I0407 12:22:37.908782 1257990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:22:37.909193 1257990 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:22:37.909416 1257990 main.go:141] libmachine: (ha-929265-m02) Calling .GetState
	I0407 12:22:37.910955 1257990 status.go:371] ha-929265-m02 host status = "Stopped" (err=<nil>)
	I0407 12:22:37.910972 1257990 status.go:384] host is not running, skipping remaining checks
	I0407 12:22:37.910979 1257990 status.go:176] ha-929265-m02 status: &{Name:ha-929265-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:22:37.910997 1257990 status.go:174] checking status of ha-929265-m03 ...
	I0407 12:22:37.911666 1257990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:22:37.911726 1257990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:22:37.927985 1257990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33873
	I0407 12:22:37.928420 1257990 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:22:37.928880 1257990 main.go:141] libmachine: Using API Version  1
	I0407 12:22:37.928909 1257990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:22:37.929327 1257990 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:22:37.929518 1257990 main.go:141] libmachine: (ha-929265-m03) Calling .GetState
	I0407 12:22:37.931385 1257990 status.go:371] ha-929265-m03 host status = "Running" (err=<nil>)
	I0407 12:22:37.931405 1257990 host.go:66] Checking if "ha-929265-m03" exists ...
	I0407 12:22:37.931701 1257990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:22:37.931759 1257990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:22:37.946986 1257990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41765
	I0407 12:22:37.947488 1257990 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:22:37.947962 1257990 main.go:141] libmachine: Using API Version  1
	I0407 12:22:37.947983 1257990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:22:37.948285 1257990 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:22:37.948500 1257990 main.go:141] libmachine: (ha-929265-m03) Calling .GetIP
	I0407 12:22:37.951086 1257990 main.go:141] libmachine: (ha-929265-m03) DBG | domain ha-929265-m03 has defined MAC address 52:54:00:4e:12:27 in network mk-ha-929265
	I0407 12:22:37.951528 1257990 main.go:141] libmachine: (ha-929265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:12:27", ip: ""} in network mk-ha-929265: {Iface:virbr1 ExpiryTime:2025-04-07 13:18:56 +0000 UTC Type:0 Mac:52:54:00:4e:12:27 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-929265-m03 Clientid:01:52:54:00:4e:12:27}
	I0407 12:22:37.951558 1257990 main.go:141] libmachine: (ha-929265-m03) DBG | domain ha-929265-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:4e:12:27 in network mk-ha-929265
	I0407 12:22:37.951659 1257990 host.go:66] Checking if "ha-929265-m03" exists ...
	I0407 12:22:37.951960 1257990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:22:37.951996 1257990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:22:37.967313 1257990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44591
	I0407 12:22:37.967853 1257990 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:22:37.968285 1257990 main.go:141] libmachine: Using API Version  1
	I0407 12:22:37.968307 1257990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:22:37.968680 1257990 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:22:37.968882 1257990 main.go:141] libmachine: (ha-929265-m03) Calling .DriverName
	I0407 12:22:37.969074 1257990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:22:37.969094 1257990 main.go:141] libmachine: (ha-929265-m03) Calling .GetSSHHostname
	I0407 12:22:37.971810 1257990 main.go:141] libmachine: (ha-929265-m03) DBG | domain ha-929265-m03 has defined MAC address 52:54:00:4e:12:27 in network mk-ha-929265
	I0407 12:22:37.972348 1257990 main.go:141] libmachine: (ha-929265-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:12:27", ip: ""} in network mk-ha-929265: {Iface:virbr1 ExpiryTime:2025-04-07 13:18:56 +0000 UTC Type:0 Mac:52:54:00:4e:12:27 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:ha-929265-m03 Clientid:01:52:54:00:4e:12:27}
	I0407 12:22:37.972388 1257990 main.go:141] libmachine: (ha-929265-m03) DBG | domain ha-929265-m03 has defined IP address 192.168.39.211 and MAC address 52:54:00:4e:12:27 in network mk-ha-929265
	I0407 12:22:37.972583 1257990 main.go:141] libmachine: (ha-929265-m03) Calling .GetSSHPort
	I0407 12:22:37.972834 1257990 main.go:141] libmachine: (ha-929265-m03) Calling .GetSSHKeyPath
	I0407 12:22:37.973020 1257990 main.go:141] libmachine: (ha-929265-m03) Calling .GetSSHUsername
	I0407 12:22:37.973173 1257990 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1236688/.minikube/machines/ha-929265-m03/id_rsa Username:docker}
	I0407 12:22:38.060082 1257990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:22:38.079007 1257990 kubeconfig.go:125] found "ha-929265" server: "https://192.168.39.254:8443"
	I0407 12:22:38.079036 1257990 api_server.go:166] Checking apiserver status ...
	I0407 12:22:38.079074 1257990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:22:38.092932 1257990 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W0407 12:22:38.102170 1257990 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0407 12:22:38.102239 1257990 ssh_runner.go:195] Run: ls
	I0407 12:22:38.106947 1257990 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0407 12:22:38.112114 1257990 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0407 12:22:38.112136 1257990 status.go:463] ha-929265-m03 apiserver status = Running (err=<nil>)
	I0407 12:22:38.112146 1257990 status.go:176] ha-929265-m03 status: &{Name:ha-929265-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:22:38.112160 1257990 status.go:174] checking status of ha-929265-m04 ...
	I0407 12:22:38.112464 1257990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:22:38.112508 1257990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:22:38.129345 1257990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42579
	I0407 12:22:38.129842 1257990 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:22:38.130378 1257990 main.go:141] libmachine: Using API Version  1
	I0407 12:22:38.130406 1257990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:22:38.130765 1257990 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:22:38.131015 1257990 main.go:141] libmachine: (ha-929265-m04) Calling .GetState
	I0407 12:22:38.132572 1257990 status.go:371] ha-929265-m04 host status = "Running" (err=<nil>)
	I0407 12:22:38.132591 1257990 host.go:66] Checking if "ha-929265-m04" exists ...
	I0407 12:22:38.132961 1257990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:22:38.133032 1257990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:22:38.149187 1257990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41323
	I0407 12:22:38.149623 1257990 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:22:38.150013 1257990 main.go:141] libmachine: Using API Version  1
	I0407 12:22:38.150033 1257990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:22:38.150396 1257990 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:22:38.150607 1257990 main.go:141] libmachine: (ha-929265-m04) Calling .GetIP
	I0407 12:22:38.153350 1257990 main.go:141] libmachine: (ha-929265-m04) DBG | domain ha-929265-m04 has defined MAC address 52:54:00:62:9c:c1 in network mk-ha-929265
	I0407 12:22:38.153835 1257990 main.go:141] libmachine: (ha-929265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9c:c1", ip: ""} in network mk-ha-929265: {Iface:virbr1 ExpiryTime:2025-04-07 13:20:13 +0000 UTC Type:0 Mac:52:54:00:62:9c:c1 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-929265-m04 Clientid:01:52:54:00:62:9c:c1}
	I0407 12:22:38.153858 1257990 main.go:141] libmachine: (ha-929265-m04) DBG | domain ha-929265-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:62:9c:c1 in network mk-ha-929265
	I0407 12:22:38.153993 1257990 host.go:66] Checking if "ha-929265-m04" exists ...
	I0407 12:22:38.154332 1257990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:22:38.154409 1257990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:22:38.169996 1257990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0407 12:22:38.170506 1257990 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:22:38.171051 1257990 main.go:141] libmachine: Using API Version  1
	I0407 12:22:38.171070 1257990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:22:38.171385 1257990 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:22:38.171585 1257990 main.go:141] libmachine: (ha-929265-m04) Calling .DriverName
	I0407 12:22:38.171773 1257990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:22:38.171790 1257990 main.go:141] libmachine: (ha-929265-m04) Calling .GetSSHHostname
	I0407 12:22:38.174635 1257990 main.go:141] libmachine: (ha-929265-m04) DBG | domain ha-929265-m04 has defined MAC address 52:54:00:62:9c:c1 in network mk-ha-929265
	I0407 12:22:38.175053 1257990 main.go:141] libmachine: (ha-929265-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:9c:c1", ip: ""} in network mk-ha-929265: {Iface:virbr1 ExpiryTime:2025-04-07 13:20:13 +0000 UTC Type:0 Mac:52:54:00:62:9c:c1 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-929265-m04 Clientid:01:52:54:00:62:9c:c1}
	I0407 12:22:38.175080 1257990 main.go:141] libmachine: (ha-929265-m04) DBG | domain ha-929265-m04 has defined IP address 192.168.39.179 and MAC address 52:54:00:62:9c:c1 in network mk-ha-929265
	I0407 12:22:38.175265 1257990 main.go:141] libmachine: (ha-929265-m04) Calling .GetSSHPort
	I0407 12:22:38.175432 1257990 main.go:141] libmachine: (ha-929265-m04) Calling .GetSSHKeyPath
	I0407 12:22:38.175596 1257990 main.go:141] libmachine: (ha-929265-m04) Calling .GetSSHUsername
	I0407 12:22:38.175737 1257990 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1236688/.minikube/machines/ha-929265-m04/id_rsa Username:docker}
	I0407 12:22:38.256400 1257990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:22:38.275987 1257990 status.go:176] ha-929265-m04 status: &{Name:ha-929265-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.31s)
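
The "unable to find freezer cgroup" warnings in the status output are benign: the guest appears to run cgroup v2, which has no freezer controller, so egrep '^[0-9]+:freezer:' against /proc/<pid>/cgroup matches nothing and status falls back to probing https://192.168.39.254:8443/healthz directly (which returns 200 above). On a cgroup v2 host the file looks like this (sketch; PID taken from the log, path varies):

$ sudo cat /proc/1197/cgroup
0::/...    # single unified-hierarchy line; no "N:freezer:" entries for egrep to match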

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

TestMultiControlPlane/serial/RestartSecondaryNode (43.33s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 node start m02 -v=7 --alsologtostderr
E0407 12:23:06.717865 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-929265 node start m02 -v=7 --alsologtostderr: (42.389291538s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (43.33s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (455.48s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-929265 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-929265 -v=7 --alsologtostderr
E0407 12:23:52.824095 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:26:08.962869 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:26:36.665546 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-929265 -v=7 --alsologtostderr: (4m33.6883746s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-929265 --wait=true -v=7 --alsologtostderr
E0407 12:28:06.717970 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:29:29.787926 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-929265 --wait=true -v=7 --alsologtostderr: (3m1.678126372s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-929265
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (455.48s)

TestMultiControlPlane/serial/DeleteSecondaryNode (6.93s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-929265 node delete m03 -v=7 --alsologtostderr: (6.161409111s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.93s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

TestMultiControlPlane/serial/StopCluster (182.8s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 stop -v=7 --alsologtostderr
E0407 12:31:08.962949 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:33:06.718340 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-929265 stop -v=7 --alsologtostderr: (3m2.68593955s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-929265 status -v=7 --alsologtostderr: exit status 7 (114.408795ms)
-- stdout --
	ha-929265
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-929265-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-929265-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0407 12:34:09.005817 1261474 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:34:09.005953 1261474 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:34:09.005963 1261474 out.go:358] Setting ErrFile to fd 2...
	I0407 12:34:09.005969 1261474 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:34:09.006147 1261474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
	I0407 12:34:09.006379 1261474 out.go:352] Setting JSON to false
	I0407 12:34:09.006425 1261474 mustload.go:65] Loading cluster: ha-929265
	I0407 12:34:09.006524 1261474 notify.go:220] Checking for updates...
	I0407 12:34:09.006849 1261474 config.go:182] Loaded profile config "ha-929265": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 12:34:09.006882 1261474 status.go:174] checking status of ha-929265 ...
	I0407 12:34:09.007338 1261474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:34:09.007402 1261474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:34:09.032431 1261474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
	I0407 12:34:09.032888 1261474 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:34:09.033481 1261474 main.go:141] libmachine: Using API Version  1
	I0407 12:34:09.033511 1261474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:34:09.033952 1261474 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:34:09.034160 1261474 main.go:141] libmachine: (ha-929265) Calling .GetState
	I0407 12:34:09.035799 1261474 status.go:371] ha-929265 host status = "Stopped" (err=<nil>)
	I0407 12:34:09.035816 1261474 status.go:384] host is not running, skipping remaining checks
	I0407 12:34:09.035834 1261474 status.go:176] ha-929265 status: &{Name:ha-929265 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:34:09.035879 1261474 status.go:174] checking status of ha-929265-m02 ...
	I0407 12:34:09.036214 1261474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:34:09.036311 1261474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:34:09.051428 1261474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
	I0407 12:34:09.051932 1261474 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:34:09.052437 1261474 main.go:141] libmachine: Using API Version  1
	I0407 12:34:09.052458 1261474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:34:09.052760 1261474 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:34:09.052933 1261474 main.go:141] libmachine: (ha-929265-m02) Calling .GetState
	I0407 12:34:09.054551 1261474 status.go:371] ha-929265-m02 host status = "Stopped" (err=<nil>)
	I0407 12:34:09.054567 1261474 status.go:384] host is not running, skipping remaining checks
	I0407 12:34:09.054575 1261474 status.go:176] ha-929265-m02 status: &{Name:ha-929265-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:34:09.054594 1261474 status.go:174] checking status of ha-929265-m04 ...
	I0407 12:34:09.054876 1261474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:34:09.054909 1261474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:34:09.069469 1261474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41239
	I0407 12:34:09.069916 1261474 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:34:09.070496 1261474 main.go:141] libmachine: Using API Version  1
	I0407 12:34:09.070522 1261474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:34:09.070851 1261474 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:34:09.071065 1261474 main.go:141] libmachine: (ha-929265-m04) Calling .GetState
	I0407 12:34:09.072608 1261474 status.go:371] ha-929265-m04 host status = "Stopped" (err=<nil>)
	I0407 12:34:09.072693 1261474 status.go:384] host is not running, skipping remaining checks
	I0407 12:34:09.072726 1261474 status.go:176] ha-929265-m04 status: &{Name:ha-929265-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (182.80s)

TestMultiControlPlane/serial/RestartCluster (163.02s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-929265 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0407 12:36:08.963292 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-929265 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m42.240313392s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (163.02s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

TestMultiControlPlane/serial/AddSecondaryNode (70.53s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-929265 --control-plane -v=7 --alsologtostderr
E0407 12:37:32.027384 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-929265 --control-plane -v=7 --alsologtostderr: (1m9.674880744s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-929265 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.53s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

TestJSONOutput/start/Command (60.73s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-806876 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-806876 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m0.73224703s)
--- PASS: TestJSONOutput/start/Command (60.73s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-806876 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-806876 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.48s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-806876 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-806876 --output=json --user=testUser: (6.479073109s)
--- PASS: TestJSONOutput/stop/Command (6.48s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-082402 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-082402 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.148574ms)

-- stdout --
	{"specversion":"1.0","id":"f895d2b7-4f1a-4975-a1c4-34ce4ad7477d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-082402] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fd47e574-d334-4b82-b080-ef9d6942d79b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20602"}}
	{"specversion":"1.0","id":"b56a91b6-f20a-46ce-984e-c5158b533bdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ad204662-1a9b-4821-be7f-6d87737926b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20602-1236688/kubeconfig"}}
	{"specversion":"1.0","id":"3febf3d7-3951-49fa-ad71-92df47e3286f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1236688/.minikube"}}
	{"specversion":"1.0","id":"7729685b-c571-4db2-a8c7-7e4dd7d5e6c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c41e298c-2c30-4ab1-b4d6-f63cd571da2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"13bddcce-3ec4-480c-8dc1-6d188a52ff28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-082402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-082402
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (98.46s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-794176 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-794176 --driver=kvm2  --container-runtime=containerd: (45.644182957s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-820566 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-820566 --driver=kvm2  --container-runtime=containerd: (50.158018597s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-794176
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-820566
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-820566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-820566
helpers_test.go:175: Cleaning up "first-794176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-794176
--- PASS: TestMinikubeProfile (98.46s)

TestMountStart/serial/StartWithMountFirst (28.04s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-324501 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0407 12:41:08.964063 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-324501 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.041887164s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.04s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-324501 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-324501 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (30.12s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-341251 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-341251 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.122015384s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.12s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-341251 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-341251 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (0.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-324501 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.61s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-341251 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-341251 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-341251
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-341251: (1.278777614s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (23.36s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-341251
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-341251: (22.356721181s)
--- PASS: TestMountStart/serial/RestartStopped (23.36s)

TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-341251 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-341251 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (109s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-081777 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0407 12:43:06.718128 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-081777 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m48.565464602s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.00s)

TestMultiNode/serial/DeployApp2Nodes (4.32s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081777 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081777 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-081777 -- rollout status deployment/busybox: (2.818720079s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081777 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081777 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081777 -- exec busybox-58667487b6-jn9kw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081777 -- exec busybox-58667487b6-k4gm8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081777 -- exec busybox-58667487b6-jn9kw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081777 -- exec busybox-58667487b6-k4gm8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081777 -- exec busybox-58667487b6-jn9kw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081777 -- exec busybox-58667487b6-k4gm8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.32s)

TestMultiNode/serial/PingHostFrom2Pods (0.81s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081777 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081777 -- exec busybox-58667487b6-jn9kw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081777 -- exec busybox-58667487b6-jn9kw -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081777 -- exec busybox-58667487b6-k4gm8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081777 -- exec busybox-58667487b6-k4gm8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

TestMultiNode/serial/AddNode (53.53s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-081777 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-081777 -v 3 --alsologtostderr: (52.943006142s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.53s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-081777 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.61s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

TestMultiNode/serial/CopyFile (7.46s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 cp testdata/cp-test.txt multinode-081777:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 cp multinode-081777:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1762381449/001/cp-test_multinode-081777.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 cp multinode-081777:/home/docker/cp-test.txt multinode-081777-m02:/home/docker/cp-test_multinode-081777_multinode-081777-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777-m02 "sudo cat /home/docker/cp-test_multinode-081777_multinode-081777-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 cp multinode-081777:/home/docker/cp-test.txt multinode-081777-m03:/home/docker/cp-test_multinode-081777_multinode-081777-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777-m03 "sudo cat /home/docker/cp-test_multinode-081777_multinode-081777-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 cp testdata/cp-test.txt multinode-081777-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 cp multinode-081777-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1762381449/001/cp-test_multinode-081777-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 cp multinode-081777-m02:/home/docker/cp-test.txt multinode-081777:/home/docker/cp-test_multinode-081777-m02_multinode-081777.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777 "sudo cat /home/docker/cp-test_multinode-081777-m02_multinode-081777.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 cp multinode-081777-m02:/home/docker/cp-test.txt multinode-081777-m03:/home/docker/cp-test_multinode-081777-m02_multinode-081777-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777-m03 "sudo cat /home/docker/cp-test_multinode-081777-m02_multinode-081777-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 cp testdata/cp-test.txt multinode-081777-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 cp multinode-081777-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1762381449/001/cp-test_multinode-081777-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 cp multinode-081777-m03:/home/docker/cp-test.txt multinode-081777:/home/docker/cp-test_multinode-081777-m03_multinode-081777.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777 "sudo cat /home/docker/cp-test_multinode-081777-m03_multinode-081777.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 cp multinode-081777-m03:/home/docker/cp-test.txt multinode-081777-m02:/home/docker/cp-test_multinode-081777-m03_multinode-081777-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 ssh -n multinode-081777-m02 "sudo cat /home/docker/cp-test_multinode-081777-m03_multinode-081777-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.46s)

TestMultiNode/serial/StopNode (2.18s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-081777 node stop m03: (1.286896045s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-081777 status: exit status 7 (455.798692ms)

-- stdout --
	multinode-081777
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-081777-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-081777-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-081777 status --alsologtostderr: exit status 7 (433.638995ms)

-- stdout --
	multinode-081777
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-081777-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-081777-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr **
	I0407 12:45:18.289072 1269009 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:45:18.289318 1269009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:45:18.289327 1269009 out.go:358] Setting ErrFile to fd 2...
	I0407 12:45:18.289331 1269009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:45:18.289515 1269009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
	I0407 12:45:18.289698 1269009 out.go:352] Setting JSON to false
	I0407 12:45:18.289731 1269009 mustload.go:65] Loading cluster: multinode-081777
	I0407 12:45:18.289853 1269009 notify.go:220] Checking for updates...
	I0407 12:45:18.290101 1269009 config.go:182] Loaded profile config "multinode-081777": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 12:45:18.290132 1269009 status.go:174] checking status of multinode-081777 ...
	I0407 12:45:18.290821 1269009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:45:18.290907 1269009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:45:18.307364 1269009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
	I0407 12:45:18.307846 1269009 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:45:18.308480 1269009 main.go:141] libmachine: Using API Version  1
	I0407 12:45:18.308526 1269009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:45:18.308918 1269009 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:45:18.309119 1269009 main.go:141] libmachine: (multinode-081777) Calling .GetState
	I0407 12:45:18.311051 1269009 status.go:371] multinode-081777 host status = "Running" (err=<nil>)
	I0407 12:45:18.311068 1269009 host.go:66] Checking if "multinode-081777" exists ...
	I0407 12:45:18.311353 1269009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:45:18.311391 1269009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:45:18.327391 1269009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36657
	I0407 12:45:18.327845 1269009 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:45:18.328300 1269009 main.go:141] libmachine: Using API Version  1
	I0407 12:45:18.328325 1269009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:45:18.328688 1269009 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:45:18.328911 1269009 main.go:141] libmachine: (multinode-081777) Calling .GetIP
	I0407 12:45:18.331710 1269009 main.go:141] libmachine: (multinode-081777) DBG | domain multinode-081777 has defined MAC address 52:54:00:69:75:ac in network mk-multinode-081777
	I0407 12:45:18.332178 1269009 main.go:141] libmachine: (multinode-081777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:75:ac", ip: ""} in network mk-multinode-081777: {Iface:virbr1 ExpiryTime:2025-04-07 13:42:35 +0000 UTC Type:0 Mac:52:54:00:69:75:ac Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-081777 Clientid:01:52:54:00:69:75:ac}
	I0407 12:45:18.332214 1269009 main.go:141] libmachine: (multinode-081777) DBG | domain multinode-081777 has defined IP address 192.168.39.164 and MAC address 52:54:00:69:75:ac in network mk-multinode-081777
	I0407 12:45:18.332339 1269009 host.go:66] Checking if "multinode-081777" exists ...
	I0407 12:45:18.332653 1269009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:45:18.332703 1269009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:45:18.349138 1269009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43065
	I0407 12:45:18.349623 1269009 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:45:18.350079 1269009 main.go:141] libmachine: Using API Version  1
	I0407 12:45:18.350109 1269009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:45:18.350501 1269009 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:45:18.350681 1269009 main.go:141] libmachine: (multinode-081777) Calling .DriverName
	I0407 12:45:18.350927 1269009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:45:18.350959 1269009 main.go:141] libmachine: (multinode-081777) Calling .GetSSHHostname
	I0407 12:45:18.353808 1269009 main.go:141] libmachine: (multinode-081777) DBG | domain multinode-081777 has defined MAC address 52:54:00:69:75:ac in network mk-multinode-081777
	I0407 12:45:18.354245 1269009 main.go:141] libmachine: (multinode-081777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:75:ac", ip: ""} in network mk-multinode-081777: {Iface:virbr1 ExpiryTime:2025-04-07 13:42:35 +0000 UTC Type:0 Mac:52:54:00:69:75:ac Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-081777 Clientid:01:52:54:00:69:75:ac}
	I0407 12:45:18.354283 1269009 main.go:141] libmachine: (multinode-081777) DBG | domain multinode-081777 has defined IP address 192.168.39.164 and MAC address 52:54:00:69:75:ac in network mk-multinode-081777
	I0407 12:45:18.354457 1269009 main.go:141] libmachine: (multinode-081777) Calling .GetSSHPort
	I0407 12:45:18.354620 1269009 main.go:141] libmachine: (multinode-081777) Calling .GetSSHKeyPath
	I0407 12:45:18.354780 1269009 main.go:141] libmachine: (multinode-081777) Calling .GetSSHUsername
	I0407 12:45:18.354899 1269009 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1236688/.minikube/machines/multinode-081777/id_rsa Username:docker}
	I0407 12:45:18.433592 1269009 ssh_runner.go:195] Run: systemctl --version
	I0407 12:45:18.440050 1269009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:45:18.455907 1269009 kubeconfig.go:125] found "multinode-081777" server: "https://192.168.39.164:8443"
	I0407 12:45:18.455950 1269009 api_server.go:166] Checking apiserver status ...
	I0407 12:45:18.456000 1269009 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:45:18.470814 1269009 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup
	W0407 12:45:18.480753 1269009 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1148/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0407 12:45:18.480825 1269009 ssh_runner.go:195] Run: ls
	I0407 12:45:18.485165 1269009 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0407 12:45:18.490243 1269009 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0407 12:45:18.490280 1269009 status.go:463] multinode-081777 apiserver status = Running (err=<nil>)
	I0407 12:45:18.490295 1269009 status.go:176] multinode-081777 status: &{Name:multinode-081777 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:45:18.490315 1269009 status.go:174] checking status of multinode-081777-m02 ...
	I0407 12:45:18.490648 1269009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:45:18.490700 1269009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:45:18.507130 1269009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
	I0407 12:45:18.507624 1269009 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:45:18.508086 1269009 main.go:141] libmachine: Using API Version  1
	I0407 12:45:18.508108 1269009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:45:18.508488 1269009 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:45:18.508680 1269009 main.go:141] libmachine: (multinode-081777-m02) Calling .GetState
	I0407 12:45:18.510293 1269009 status.go:371] multinode-081777-m02 host status = "Running" (err=<nil>)
	I0407 12:45:18.510311 1269009 host.go:66] Checking if "multinode-081777-m02" exists ...
	I0407 12:45:18.510649 1269009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:45:18.510694 1269009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:45:18.527471 1269009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46493
	I0407 12:45:18.527941 1269009 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:45:18.528378 1269009 main.go:141] libmachine: Using API Version  1
	I0407 12:45:18.528404 1269009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:45:18.528758 1269009 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:45:18.528954 1269009 main.go:141] libmachine: (multinode-081777-m02) Calling .GetIP
	I0407 12:45:18.531576 1269009 main.go:141] libmachine: (multinode-081777-m02) DBG | domain multinode-081777-m02 has defined MAC address 52:54:00:6d:56:ea in network mk-multinode-081777
	I0407 12:45:18.531950 1269009 main.go:141] libmachine: (multinode-081777-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:56:ea", ip: ""} in network mk-multinode-081777: {Iface:virbr1 ExpiryTime:2025-04-07 13:43:37 +0000 UTC Type:0 Mac:52:54:00:6d:56:ea Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:multinode-081777-m02 Clientid:01:52:54:00:6d:56:ea}
	I0407 12:45:18.531985 1269009 main.go:141] libmachine: (multinode-081777-m02) DBG | domain multinode-081777-m02 has defined IP address 192.168.39.45 and MAC address 52:54:00:6d:56:ea in network mk-multinode-081777
	I0407 12:45:18.532169 1269009 host.go:66] Checking if "multinode-081777-m02" exists ...
	I0407 12:45:18.532562 1269009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:45:18.532614 1269009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:45:18.548752 1269009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44773
	I0407 12:45:18.549224 1269009 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:45:18.549704 1269009 main.go:141] libmachine: Using API Version  1
	I0407 12:45:18.549723 1269009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:45:18.550058 1269009 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:45:18.550278 1269009 main.go:141] libmachine: (multinode-081777-m02) Calling .DriverName
	I0407 12:45:18.550470 1269009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:45:18.550492 1269009 main.go:141] libmachine: (multinode-081777-m02) Calling .GetSSHHostname
	I0407 12:45:18.553189 1269009 main.go:141] libmachine: (multinode-081777-m02) DBG | domain multinode-081777-m02 has defined MAC address 52:54:00:6d:56:ea in network mk-multinode-081777
	I0407 12:45:18.553633 1269009 main.go:141] libmachine: (multinode-081777-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:56:ea", ip: ""} in network mk-multinode-081777: {Iface:virbr1 ExpiryTime:2025-04-07 13:43:37 +0000 UTC Type:0 Mac:52:54:00:6d:56:ea Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:multinode-081777-m02 Clientid:01:52:54:00:6d:56:ea}
	I0407 12:45:18.553672 1269009 main.go:141] libmachine: (multinode-081777-m02) DBG | domain multinode-081777-m02 has defined IP address 192.168.39.45 and MAC address 52:54:00:6d:56:ea in network mk-multinode-081777
	I0407 12:45:18.553779 1269009 main.go:141] libmachine: (multinode-081777-m02) Calling .GetSSHPort
	I0407 12:45:18.553975 1269009 main.go:141] libmachine: (multinode-081777-m02) Calling .GetSSHKeyPath
	I0407 12:45:18.554122 1269009 main.go:141] libmachine: (multinode-081777-m02) Calling .GetSSHUsername
	I0407 12:45:18.554280 1269009 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20602-1236688/.minikube/machines/multinode-081777-m02/id_rsa Username:docker}
	I0407 12:45:18.637922 1269009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:45:18.653375 1269009 status.go:176] multinode-081777-m02 status: &{Name:multinode-081777-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:45:18.653427 1269009 status.go:174] checking status of multinode-081777-m03 ...
	I0407 12:45:18.653776 1269009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:45:18.653821 1269009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:45:18.670225 1269009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46501
	I0407 12:45:18.670700 1269009 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:45:18.671128 1269009 main.go:141] libmachine: Using API Version  1
	I0407 12:45:18.671153 1269009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:45:18.671534 1269009 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:45:18.671724 1269009 main.go:141] libmachine: (multinode-081777-m03) Calling .GetState
	I0407 12:45:18.673348 1269009 status.go:371] multinode-081777-m03 host status = "Stopped" (err=<nil>)
	I0407 12:45:18.673363 1269009 status.go:384] host is not running, skipping remaining checks
	I0407 12:45:18.673369 1269009 status.go:176] multinode-081777-m03 status: &{Name:multinode-081777-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.18s)

TestMultiNode/serial/StartAfterStop (34.18s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-081777 node start m03 -v=7 --alsologtostderr: (33.543504184s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (34.18s)

TestMultiNode/serial/RestartKeepsNodes (309.88s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-081777
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-081777
E0407 12:46:08.964536 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:46:09.789743 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:48:06.724831 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-081777: (3m2.702336122s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-081777 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-081777 --wait=true -v=8 --alsologtostderr: (2m7.075613451s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-081777
--- PASS: TestMultiNode/serial/RestartKeepsNodes (309.88s)

TestMultiNode/serial/DeleteNode (2.12s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-081777 node delete m03: (1.579800345s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.12s)

TestMultiNode/serial/StopMultiNode (181.82s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 stop
E0407 12:51:08.962557 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
E0407 12:53:06.717649 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-081777 stop: (3m1.64697556s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-081777 status: exit status 7 (88.130221ms)
-- stdout --
	multinode-081777
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-081777-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-081777 status --alsologtostderr: exit status 7 (85.219967ms)
-- stdout --
	multinode-081777
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-081777-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0407 12:54:06.637212 1271646 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:54:06.637381 1271646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:54:06.637393 1271646 out.go:358] Setting ErrFile to fd 2...
	I0407 12:54:06.637399 1271646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:54:06.637583 1271646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
	I0407 12:54:06.637747 1271646 out.go:352] Setting JSON to false
	I0407 12:54:06.637782 1271646 mustload.go:65] Loading cluster: multinode-081777
	I0407 12:54:06.637851 1271646 notify.go:220] Checking for updates...
	I0407 12:54:06.638362 1271646 config.go:182] Loaded profile config "multinode-081777": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0407 12:54:06.638398 1271646 status.go:174] checking status of multinode-081777 ...
	I0407 12:54:06.638898 1271646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:54:06.638992 1271646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:54:06.654576 1271646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33097
	I0407 12:54:06.655001 1271646 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:54:06.655507 1271646 main.go:141] libmachine: Using API Version  1
	I0407 12:54:06.655530 1271646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:54:06.655962 1271646 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:54:06.656194 1271646 main.go:141] libmachine: (multinode-081777) Calling .GetState
	I0407 12:54:06.657898 1271646 status.go:371] multinode-081777 host status = "Stopped" (err=<nil>)
	I0407 12:54:06.657914 1271646 status.go:384] host is not running, skipping remaining checks
	I0407 12:54:06.657920 1271646 status.go:176] multinode-081777 status: &{Name:multinode-081777 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 12:54:06.657941 1271646 status.go:174] checking status of multinode-081777-m02 ...
	I0407 12:54:06.658224 1271646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0407 12:54:06.658282 1271646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0407 12:54:06.673365 1271646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42199
	I0407 12:54:06.673806 1271646 main.go:141] libmachine: () Calling .GetVersion
	I0407 12:54:06.674218 1271646 main.go:141] libmachine: Using API Version  1
	I0407 12:54:06.674238 1271646 main.go:141] libmachine: () Calling .SetConfigRaw
	I0407 12:54:06.674604 1271646 main.go:141] libmachine: () Calling .GetMachineName
	I0407 12:54:06.674753 1271646 main.go:141] libmachine: (multinode-081777-m02) Calling .GetState
	I0407 12:54:06.676229 1271646 status.go:371] multinode-081777-m02 host status = "Stopped" (err=<nil>)
	I0407 12:54:06.676243 1271646 status.go:384] host is not running, skipping remaining checks
	I0407 12:54:06.676249 1271646 status.go:176] multinode-081777-m02 status: &{Name:multinode-081777-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.82s)
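
Note that both status probes above exit with status 7 rather than 0: for a fully stopped cluster that is the expected code, so callers must treat it as data rather than failure. A small Go sketch of how a harness can distinguish the cases (assumes a minikube binary on PATH; the exit-code meaning is taken from the output above):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("minikube", "-p", "multinode-081777", "status")
        out, err := cmd.Output() // stdout is still populated on a non-zero exit
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // Exit status 7, as above, reports stopped components; it is not
            // a command failure, so surface the table instead of aborting.
            fmt.Printf("status exited %d:\n%s", ee.ExitCode(), out)
            return
        }
        if err != nil {
            panic(err) // binary not found, etc.
        }
        fmt.Printf("everything running:\n%s", out)
    }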

TestMultiNode/serial/RestartMultiNode (107.76s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-081777 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0407 12:54:12.029546 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-081777 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m47.223528166s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081777 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (107.76s)

TestMultiNode/serial/ValidateNameConflict (44.35s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-081777
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-081777-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-081777-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (67.44019ms)
-- stdout --
	* [multinode-081777-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-1236688/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1236688/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-081777-m02' is duplicated with machine name 'multinode-081777-m02' in profile 'multinode-081777'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-081777-m03 --driver=kvm2  --container-runtime=containerd
E0407 12:56:08.962451 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-081777-m03 --driver=kvm2  --container-runtime=containerd: (43.298038739s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-081777
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-081777: exit status 80 (222.516166ms)
-- stdout --
	* Adding node m03 to cluster multinode-081777 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-081777-m03 already exists in multinode-081777-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-081777-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.35s)
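
What the conflict check above enforces: a new profile name may not equal any machine name an existing profile already owns, which is why "-m02" is rejected with exit 14 while a standalone "-m03" profile starts fine. A hypothetical sketch of the rule, inferred from the messages above rather than taken from minikube's source:

    package main

    import "fmt"

    // nameConflicts re-states the assumed rule: a new profile name must not
    // collide with a machine name owned by an existing profile.
    func nameConflicts(newProfile string, existingMachines []string) bool {
        for _, m := range existingMachines {
            if newProfile == m {
                return true
            }
        }
        return false
    }

    func main() {
        // multinode-081777 owns these machines after the earlier node delete:
        machines := []string{"multinode-081777", "multinode-081777-m02"}
        fmt.Println(nameConflicts("multinode-081777-m02", machines)) // true: exit 14 above
        fmt.Println(nameConflicts("multinode-081777-m03", machines)) // false: start succeeds
    }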

TestPreload (252.36s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-808144 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0407 12:58:06.718635 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-808144 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m54.352349967s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-808144 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-808144 image pull gcr.io/k8s-minikube/busybox: (1.591692638s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-808144
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-808144: (1m30.856623892s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-808144 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-808144 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (44.346251998s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-808144 image list
helpers_test.go:175: Cleaning up "test-preload-808144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-808144
--- PASS: TestPreload (252.36s)

TestScheduledStopUnix (115.45s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-052670 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0407 13:01:08.964303 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-052670 --memory=2048 --driver=kvm2  --container-runtime=containerd: (43.775372063s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-052670 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-052670 -n scheduled-stop-052670
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-052670 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0407 13:01:36.653459 1243895 retry.go:31] will retry after 69.876µs: open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/scheduled-stop-052670/pid: no such file or directory
I0407 13:01:36.654662 1243895 retry.go:31] will retry after 76.653µs: open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/scheduled-stop-052670/pid: no such file or directory
I0407 13:01:36.655818 1243895 retry.go:31] will retry after 230.022µs: open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/scheduled-stop-052670/pid: no such file or directory
I0407 13:01:36.656988 1243895 retry.go:31] will retry after 465.654µs: open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/scheduled-stop-052670/pid: no such file or directory
I0407 13:01:36.658146 1243895 retry.go:31] will retry after 624.548µs: open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/scheduled-stop-052670/pid: no such file or directory
I0407 13:01:36.659313 1243895 retry.go:31] will retry after 530.096µs: open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/scheduled-stop-052670/pid: no such file or directory
I0407 13:01:36.660438 1243895 retry.go:31] will retry after 1.312088ms: open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/scheduled-stop-052670/pid: no such file or directory
I0407 13:01:36.662648 1243895 retry.go:31] will retry after 1.917988ms: open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/scheduled-stop-052670/pid: no such file or directory
I0407 13:01:36.664855 1243895 retry.go:31] will retry after 3.515081ms: open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/scheduled-stop-052670/pid: no such file or directory
I0407 13:01:36.669051 1243895 retry.go:31] will retry after 5.698797ms: open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/scheduled-stop-052670/pid: no such file or directory
I0407 13:01:36.675287 1243895 retry.go:31] will retry after 3.952569ms: open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/scheduled-stop-052670/pid: no such file or directory
I0407 13:01:36.679492 1243895 retry.go:31] will retry after 8.364255ms: open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/scheduled-stop-052670/pid: no such file or directory
I0407 13:01:36.688722 1243895 retry.go:31] will retry after 12.918158ms: open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/scheduled-stop-052670/pid: no such file or directory
I0407 13:01:36.701944 1243895 retry.go:31] will retry after 21.434445ms: open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/scheduled-stop-052670/pid: no such file or directory
I0407 13:01:36.723603 1243895 retry.go:31] will retry after 16.238337ms: open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/scheduled-stop-052670/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-052670 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-052670 -n scheduled-stop-052670
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-052670
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-052670 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-052670
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-052670: exit status 7 (68.992769ms)
-- stdout --
	scheduled-stop-052670
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-052670 -n scheduled-stop-052670
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-052670 -n scheduled-stop-052670: exit status 7 (63.59713ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-052670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-052670
--- PASS: TestScheduledStopUnix (115.45s)
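
The burst of retry.go lines above is the test polling for the scheduled-stop pid file with delays that grow from microseconds into milliseconds. A hypothetical re-creation of that wait loop (path and attempt count are illustrative; the real helper also jitters the delay):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // readWithRetry keeps re-reading until the file appears, roughly doubling
    // the delay between attempts, then gives up with the last error.
    func readWithRetry(path string, attempts int) ([]byte, error) {
        delay := 50 * time.Microsecond
        var lastErr error
        for i := 0; i < attempts; i++ {
            b, err := os.ReadFile(path)
            if err == nil {
                return b, nil
            }
            lastErr = err
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
        return nil, lastErr
    }

    func main() {
        // Path is illustrative; the test watches the profile's pid file.
        if _, err := readWithRetry("/tmp/scheduled-stop/pid", 15); err != nil {
            fmt.Println("gave up:", err)
        }
    }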

TestRunningBinaryUpgrade (194.24s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4194366845 start -p running-upgrade-065820 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0407 13:02:49.791180 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:03:06.717950 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4194366845 start -p running-upgrade-065820 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m10.802530725s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-065820 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-065820 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m2.176294805s)
helpers_test.go:175: Cleaning up "running-upgrade-065820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-065820
--- PASS: TestRunningBinaryUpgrade (194.24s)

TestKubernetesUpgrade (190.11s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-766275 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-766275 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m10.004718545s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-766275
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-766275: (1.38367825s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-766275 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-766275 status --format={{.Host}}: exit status 7 (76.170157ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-766275 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-766275 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m7.624076948s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-766275 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-766275 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-766275 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (90.682082ms)
-- stdout --
	* [kubernetes-upgrade-766275] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-1236688/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1236688/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-766275
	    minikube start -p kubernetes-upgrade-766275 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7662752 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-766275 --kubernetes-version=v1.32.2
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-766275 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0407 13:08:06.718332 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-766275 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (49.795825637s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-766275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-766275
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-766275: (1.079390672s)
--- PASS: TestKubernetesUpgrade (190.11s)
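
The exit status 106 above comes from a guard that compares the requested version against the cluster's current one before touching anything. A sketch of that comparison using golang.org/x/mod/semver; only the refusal behaviour is taken from the log, the guard's exact placement in minikube is an assumption:

    package main

    import (
        "fmt"

        "golang.org/x/mod/semver"
    )

    func main() {
        current, requested := "v1.32.2", "v1.20.0"
        if semver.Compare(requested, current) < 0 {
            // Corresponds to the K8S_DOWNGRADE_UNSUPPORTED exit above: the
            // safe options are delete-and-recreate or a second profile.
            fmt.Printf("cannot downgrade %s -> %s in place\n", current, requested)
            return
        }
        fmt.Println("same or newer version requested; proceeding is safe")
    }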

TestStartStop/group/old-k8s-version/serial/FirstStart (186.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-333676 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-333676 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m6.309655117s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (186.31s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-487738 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-487738 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (85.675543ms)
-- stdout --
	* [NoKubernetes-487738] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-1236688/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1236688/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
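
Exit status 14 (MK_USAGE) here is pure flag validation: --no-kubernetes and an explicit --kubernetes-version contradict each other. A hypothetical sketch of the check, inferred from the error text above:

    package main

    import (
        "errors"
        "fmt"
    )

    // validateFlags rejects the contradictory combination: an explicit
    // version makes no sense when Kubernetes is disabled.
    func validateFlags(noKubernetes bool, kubernetesVersion string) error {
        if noKubernetes && kubernetesVersion != "" {
            return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
        }
        return nil
    }

    func main() {
        fmt.Println(validateFlags(true, "1.20")) // the exit status 14 case above
        fmt.Println(validateFlags(true, ""))     // <nil>: plain --no-kubernetes is fine
    }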

TestNoKubernetes/serial/StartWithK8s (97.84s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-487738 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-487738 --driver=kvm2  --container-runtime=containerd: (1m37.53584419s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-487738 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (97.84s)

TestNoKubernetes/serial/StartWithStopK8s (52.14s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-487738 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-487738 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (51.032319986s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-487738 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-487738 status -o json: exit status 2 (302.220004ms)
-- stdout --
	{"Name":"NoKubernetes-487738","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-487738
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (52.14s)

TestNoKubernetes/serial/Start (27.29s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-487738 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-487738 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.291824812s)
--- PASS: TestNoKubernetes/serial/Start (27.29s)

TestNetworkPlugins/group/false (3.5s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-639347 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-639347 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (118.207645ms)
-- stdout --
	* [false-639347] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20602
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20602-1236688/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1236688/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0407 13:05:23.007299 1277849 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:05:23.007434 1277849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:05:23.007442 1277849 out.go:358] Setting ErrFile to fd 2...
	I0407 13:05:23.007449 1277849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:05:23.007690 1277849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20602-1236688/.minikube/bin
	I0407 13:05:23.008284 1277849 out.go:352] Setting JSON to false
	I0407 13:05:23.009410 1277849 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":31669,"bootTime":1743999454,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 13:05:23.009473 1277849 start.go:139] virtualization: kvm guest
	I0407 13:05:23.012054 1277849 out.go:177] * [false-639347] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 13:05:23.013341 1277849 out.go:177]   - MINIKUBE_LOCATION=20602
	I0407 13:05:23.013382 1277849 notify.go:220] Checking for updates...
	I0407 13:05:23.017294 1277849 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:05:23.019339 1277849 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20602-1236688/kubeconfig
	I0407 13:05:23.020629 1277849 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20602-1236688/.minikube
	I0407 13:05:23.021815 1277849 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 13:05:23.022983 1277849 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:05:23.024552 1277849 config.go:182] Loaded profile config "NoKubernetes-487738": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0407 13:05:23.024723 1277849 config.go:182] Loaded profile config "old-k8s-version-333676": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0407 13:05:23.024823 1277849 config.go:182] Loaded profile config "running-upgrade-065820": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0407 13:05:23.024922 1277849 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:05:23.065937 1277849 out.go:177] * Using the kvm2 driver based on user configuration
	I0407 13:05:23.067144 1277849 start.go:297] selected driver: kvm2
	I0407 13:05:23.067164 1277849 start.go:901] validating driver "kvm2" against <nil>
	I0407 13:05:23.067180 1277849 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:05:23.069364 1277849 out.go:201] 
	W0407 13:05:23.070692 1277849 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0407 13:05:23.071963 1277849 out.go:201] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-639347 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-639347

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-639347

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-639347

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-639347

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-639347

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-639347

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-639347

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-639347

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-639347

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-639347

>>> host: /etc/nsswitch.conf:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: /etc/hosts:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: /etc/resolv.conf:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-639347

>>> host: crictl pods:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: crictl containers:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> k8s: describe netcat deployment:
error: context "false-639347" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-639347" does not exist

>>> k8s: netcat logs:
error: context "false-639347" does not exist

>>> k8s: describe coredns deployment:
error: context "false-639347" does not exist

>>> k8s: describe coredns pods:
error: context "false-639347" does not exist

>>> k8s: coredns logs:
error: context "false-639347" does not exist

>>> k8s: describe api server pod(s):
error: context "false-639347" does not exist

>>> k8s: api server logs:
error: context "false-639347" does not exist

>>> host: /etc/cni:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: ip a s:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: ip r s:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: iptables-save:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: iptables table nat:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> k8s: describe kube-proxy daemon set:
error: context "false-639347" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-639347" does not exist

>>> k8s: kube-proxy logs:
error: context "false-639347" does not exist

>>> host: kubelet daemon status:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: kubelet daemon config:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> k8s: kubelet logs:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20602-1236688/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:04:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.204:8443
  name: old-k8s-version-333676
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20602-1236688/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:05:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.69:8443
  name: running-upgrade-065820
contexts:
- context:
    cluster: old-k8s-version-333676
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:04:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: old-k8s-version-333676
  name: old-k8s-version-333676
- context:
    cluster: running-upgrade-065820
    user: running-upgrade-065820
  name: running-upgrade-065820
current-context: running-upgrade-065820
kind: Config
preferences: {}
users:
- name: old-k8s-version-333676
  user:
    client-certificate: /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt
    client-key: /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.key
- name: running-upgrade-065820
  user:
    client-certificate: /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/running-upgrade-065820/client.crt
    client-key: /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/running-upgrade-065820/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-639347

>>> host: docker daemon status:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: docker daemon config:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: /etc/docker/daemon.json:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: docker system info:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: cri-docker daemon status:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: cri-docker daemon config:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: cri-dockerd version:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: containerd daemon status:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: containerd daemon config:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: /etc/containerd/config.toml:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: containerd config dump:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: crio daemon status:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: crio daemon config:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: /etc/crio:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

>>> host: crio config:
* Profile "false-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-639347"

----------------------- debugLogs end: false-639347 [took: 3.237785121s] --------------------------------
helpers_test.go:175: Cleaning up "false-639347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-639347
--- PASS: TestNetworkPlugins/group/false (3.50s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-487738 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-487738 "sudo systemctl is-active --quiet service kubelet": exit status 1 (227.509837ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
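
Note on the probe above: `systemctl is-active --quiet` exits 0 only when the unit is active, and the "status 3" surfaced through `minikube ssh` (reported as exit status 1 by the wrapper) is what the test accepts as proof that no kubelet is running; status 3 is what systemd typically returns for an inactive unit. The following is a minimal standalone sketch in Go, not part of the test suite, that reproduces the same check; the profile name is copied from the log above and is an assumption you would replace.

// verify_no_kubelet.go: illustrative sketch mirroring the probe logged above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	profile := "NoKubernetes-487738" // assumed existing profile, taken from the log; replace with your own
	cmd := exec.Command("minikube", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	if err == nil {
		fmt.Println("kubelet service is active inside the guest")
		return
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// minikube ssh exits non-zero when the remote command does; in the log the
		// remote systemctl returned 3, which systemd typically uses for inactive units.
		fmt.Printf("kubelet is not running (minikube ssh exit code %d)\n", exitErr.ExitCode())
		return
	}
	fmt.Println("probe failed to run:", err)
}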

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.61s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-487738
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-487738: (1.309248482s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (39.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-487738 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-487738 --driver=kvm2  --container-runtime=containerd: (39.332081769s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (39.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-333676 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [17866c7c-8c2c-437a-ad51-389ccfc6871e] Pending
helpers_test.go:344: "busybox" [17866c7c-8c2c-437a-ad51-389ccfc6871e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [17866c7c-8c2c-437a-ad51-389ccfc6871e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003871794s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-333676 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.57s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.40s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (135.67s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.67574969 start -p stopped-upgrade-700247 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.67574969 start -p stopped-upgrade-700247 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m17.717642654s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.67574969 -p stopped-upgrade-700247 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.67574969 -p stopped-upgrade-700247 stop: (2.02079476s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-700247 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-700247 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (55.931924245s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (135.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-333676 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-333676 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (90.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-333676 --alsologtostderr -v=3
E0407 13:06:08.963157 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-333676 --alsologtostderr -v=3: (1m30.913739509s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (90.91s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-487738 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-487738 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.116672ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestPause/serial/Start (110.29s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-243989 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-243989 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m50.294361224s)
--- PASS: TestPause/serial/Start (110.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-333676 -n old-k8s-version-333676
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-333676 -n old-k8s-version-333676: exit status 7 (79.973836ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-333676 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (186.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-333676 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-333676 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m5.799955138s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-333676 -n old-k8s-version-333676
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (186.08s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-700247
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-700247: (2.50140332s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.50s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (48.99s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-243989 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-243989 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (48.961095778s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (48.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (85.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-059552 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-059552 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (1m25.640317921s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.64s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (94.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-074365 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-074365 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (1m34.429484981s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (94.43s)

                                                
                                    
TestPause/serial/Pause (0.77s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-243989 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

                                                
                                    
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-243989 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-243989 --output=json --layout=cluster: exit status 2 (265.943417ms)

                                                
                                                
-- stdout --
	{"Name":"pause-243989","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-243989","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)
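
The stdout captured above shows the `--output=json --layout=cluster` schema: minikube reports HTTP-style codes per component (200/OK for healthy parts, 418/Paused for the paused apiserver, 405/Stopped for the kubelet), and the command itself exits 2 to flag a cluster that is not fully running, so the JSON has to be read even on a "failed" invocation. Below is a minimal Go decoding sketch; the struct names are illustrative rather than minikube's internal types, and only the fields visible in this log are modeled.

// parse_status.go: sketch for decoding the status payload shown above.
package main

import (
	"encoding/json"
	"fmt"
)

type componentState struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"` // 200 OK, 405 Stopped, 418 Paused (as seen above)
	StatusName string `json:"StatusName"`
}

type nodeState struct {
	Name       string                    `json:"Name"`
	StatusCode int                       `json:"StatusCode"`
	StatusName string                    `json:"StatusName"`
	Components map[string]componentState `json:"Components"`
}

type clusterState struct {
	Name       string      `json:"Name"`
	StatusCode int         `json:"StatusCode"`
	StatusName string      `json:"StatusName"`
	Nodes      []nodeState `json:"Nodes"`
}

func main() {
	// Trimmed-down copy of the stdout captured by the test above.
	payload := `{"Name":"pause-243989","StatusCode":418,"StatusName":"Paused",
	 "Nodes":[{"Name":"pause-243989","StatusCode":200,"StatusName":"OK",
	 "Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterState
	if err := json.Unmarshal([]byte(payload), &st); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Printf("cluster %s is %s; kubelet is %s\n",
		st.Name, st.StatusName, st.Nodes[0].Components["kubelet"].StatusName)
}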

                                                
                                    
TestPause/serial/Unpause (0.75s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-243989 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.75s)

                                                
                                    
TestPause/serial/PauseAgain (0.92s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-243989 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.92s)

                                                
                                    
TestPause/serial/DeletePaused (0.74s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-243989 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.74s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (1.54s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.534617887s)
--- PASS: TestPause/serial/VerifyDeletedResources (1.54s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (70.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-310575 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-310575 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (1m10.094247426s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-059552 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3496d9a1-a4e4-4db2-b463-ca70eaca3b83] Pending
helpers_test.go:344: "busybox" [3496d9a1-a4e4-4db2-b463-ca70eaca3b83] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005509561s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-059552 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-059552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-059552 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-059552 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-059552 --alsologtostderr -v=3: (1m31.457135193s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.46s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-074365 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a8f9c0c9-bf8d-4a06-a20f-81217be7bbdd] Pending
helpers_test.go:344: "busybox" [a8f9c0c9-bf8d-4a06-a20f-81217be7bbdd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a8f9c0c9-bf8d-4a06-a20f-81217be7bbdd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.006140667s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-074365 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-310575 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e15156fd-dfc3-464d-a268-12297343e290] Pending
helpers_test.go:344: "busybox" [e15156fd-dfc3-464d-a268-12297343e290] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e15156fd-dfc3-464d-a268-12297343e290] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003864401s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-310575 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-074365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-074365 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-074365 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-074365 --alsologtostderr -v=3: (1m31.309034343s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-310575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-310575 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (90.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-310575 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-310575 --alsologtostderr -v=3: (1m30.968896273s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.97s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xpzz9" [85276fb5-8ff2-439c-b36b-480333969f52] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004449325s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xpzz9" [85276fb5-8ff2-439c-b36b-480333969f52] Running
E0407 13:10:52.031142 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003285125s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-333676 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-333676 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-333676 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-333676 -n old-k8s-version-333676
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-333676 -n old-k8s-version-333676: exit status 2 (243.159348ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-333676 -n old-k8s-version-333676
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-333676 -n old-k8s-version-333676: exit status 2 (248.084931ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-333676 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-333676 -n old-k8s-version-333676
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-333676 -n old-k8s-version-333676
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (51.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-167674 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
E0407 13:11:08.962885 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/functional-233546/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-167674 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (51.285408911s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (51.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-059552 -n embed-certs-059552
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-059552 -n embed-certs-059552: exit status 7 (81.131683ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-059552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.66s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (314.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-059552 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-059552 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (5m14.092685829s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-059552 -n embed-certs-059552
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (314.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-167674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-167674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.174641644s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-167674 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-167674 --alsologtostderr -v=3: (2.318297694s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-167674 -n newest-cni-167674
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-167674 -n newest-cni-167674: exit status 7 (73.903338ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-167674 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (38.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-167674 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-167674 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (38.397387412s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-167674 -n newest-cni-167674
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.70s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-074365 -n default-k8s-diff-port-074365
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-074365 -n default-k8s-diff-port-074365: exit status 7 (76.977348ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-074365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (321.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-074365 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-074365 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (5m21.141283068s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-074365 -n default-k8s-diff-port-074365
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (321.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-310575 -n no-preload-310575
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-310575 -n no-preload-310575: exit status 7 (90.065499ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-310575 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (321.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-310575 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-310575 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.32.2: (5m21.286585638s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-310575 -n no-preload-310575
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (321.54s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-167674 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-167674 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-167674 -n newest-cni-167674
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-167674 -n newest-cni-167674: exit status 2 (251.236914ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-167674 -n newest-cni-167674
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-167674 -n newest-cni-167674: exit status 2 (252.579702ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-167674 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-167674 -n newest-cni-167674
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-167674 -n newest-cni-167674
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.71s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-9zbpm" [584c4f8f-5573-4170-8556-0634038b4690] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-9zbpm" [584c4f8f-5573-4170-8556-0634038b4690] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.003734609s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-9zbpm" [584c4f8f-5573-4170-8556-0634038b4690] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005122617s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-059552 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-059552 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-059552 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-059552 -n embed-certs-059552
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-059552 -n embed-certs-059552: exit status 2 (280.996798ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-059552 -n embed-certs-059552
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-059552 -n embed-certs-059552: exit status 2 (266.648478ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-059552 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-059552 -n embed-certs-059552
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-059552 -n embed-certs-059552
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-r5t9v" [488a7c68-08a9-41c1-9b49-a1ccee607194] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-r5t9v" [488a7c68-08a9-41c1-9b49-a1ccee607194] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.012024156s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5j9t6" [86719f31-0d71-4a81-af2c-1c0f0f76d76f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006897285s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-r5t9v" [488a7c68-08a9-41c1-9b49-a1ccee607194] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.158998968s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-074365 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.50s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5j9t6" [86719f31-0d71-4a81-af2c-1c0f0f76d76f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004112158s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-310575 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-310575 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.48s)

TestStartStop/group/no-preload/serial/Pause (3.4s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-310575 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-310575 --alsologtostderr -v=1: (1.088195308s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-310575 -n no-preload-310575
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-310575 -n no-preload-310575: exit status 2 (335.608665ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-310575 -n no-preload-310575
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-310575 -n no-preload-310575: exit status 2 (314.902733ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-310575 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-310575 -n no-preload-310575
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-310575 -n no-preload-310575
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.40s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-074365 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-074365 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-074365 --alsologtostderr -v=1: (1.026494148s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-074365 -n default-k8s-diff-port-074365
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-074365 -n default-k8s-diff-port-074365: exit status 2 (343.303106ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-074365 -n default-k8s-diff-port-074365
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-074365 -n default-k8s-diff-port-074365: exit status 2 (322.70742ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-074365 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-074365 -n default-k8s-diff-port-074365
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-074365 -n default-k8s-diff-port-074365
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.73s)
E0407 13:20:54.636109 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:57.142066 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/default-k8s-diff-port-074365/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/auto/Start (72.39s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-639347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-639347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m12.3928235s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.39s)

TestNetworkPlugins/group/kindnet/Start (104.52s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-639347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-639347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m44.51872098s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (104.52s)

TestNetworkPlugins/group/calico/Start (138.81s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-639347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
E0407 13:18:06.717938 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-639347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (2m18.810031352s)
--- PASS: TestNetworkPlugins/group/calico/Start (138.81s)

TestNetworkPlugins/group/custom-flannel/Start (124.56s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-639347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0407 13:18:38.497423 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-639347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (2m4.556102026s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (124.56s)

TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-639347 "pgrep -a kubelet"
I0407 13:18:55.610076 1243895 config.go:182] Loaded profile config "auto-639347": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

TestNetworkPlugins/group/auto/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-639347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context auto-639347 replace --force -f testdata/netcat-deployment.yaml: (1.212888252s)
I0407 13:18:56.844659 1243895 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0407 13:18:56.859394 1243895 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vt8kq" [2603f16f-1f02-496d-b4f8-a661f8926970] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vt8kq" [2603f16f-1f02-496d-b4f8-a661f8926970] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.006239919s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-639347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-639347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-639347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/Start (70.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-639347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-639347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m10.21855356s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.22s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-s5x7s" [0232396b-fb8a-4307-beba-842b1c8c5e2a] Running
E0407 13:19:29.793025 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/addons-160798/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003350823s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-639347 "pgrep -a kubelet"
I0407 13:19:34.878945 1243895 config.go:182] Loaded profile config "kindnet-639347": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-639347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vnrw8" [dbdb2340-ff68-43d4-aa42-1197c5f070ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vnrw8" [dbdb2340-ff68-43d4-aa42-1197c5f070ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00330199s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-639347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-639347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-639347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (75.32s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-639347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-639347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m15.322022351s)
--- PASS: TestNetworkPlugins/group/flannel/Start (75.32s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vz9sc" [26b4bff5-a170-4eb6-90c2-a0e6a659ace7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00445566s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-639347 "pgrep -a kubelet"
I0407 13:20:16.061473 1243895 config.go:182] Loaded profile config "calico-639347": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

TestNetworkPlugins/group/calico/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-639347 replace --force -f testdata/netcat-deployment.yaml
E0407 13:20:16.165031 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/default-k8s-diff-port-074365/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:16.171489 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/default-k8s-diff-port-074365/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:16.182834 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/default-k8s-diff-port-074365/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:16.204267 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/default-k8s-diff-port-074365/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:16.245723 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/default-k8s-diff-port-074365/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:16.327220 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/default-k8s-diff-port-074365/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-cqkqr" [6e218844-3a73-4050-a077-0f4af4f5c15e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0407 13:20:16.489597 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/default-k8s-diff-port-074365/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:16.811479 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/default-k8s-diff-port-074365/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:17.453266 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/default-k8s-diff-port-074365/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:18.735008 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/default-k8s-diff-port-074365/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-cqkqr" [6e218844-3a73-4050-a077-0f4af4f5c15e] Running
E0407 13:20:21.296714 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/default-k8s-diff-port-074365/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005807312s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.35s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-639347 "pgrep -a kubelet"
I0407 13:20:23.115633 1243895 config.go:182] Loaded profile config "custom-flannel-639347": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-639347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-rblks" [fa0ee9e3-e140-4b82-9501-502ff2a2d680] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0407 13:20:23.500314 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/no-preload-310575/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:23.506743 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/no-preload-310575/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:23.518117 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/no-preload-310575/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:23.539457 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/no-preload-310575/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:23.580923 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/no-preload-310575/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:23.662358 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/no-preload-310575/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:23.824616 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/no-preload-310575/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:24.146316 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/no-preload-310575/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:24.788061 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/no-preload-310575/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:20:26.069807 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/no-preload-310575/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-rblks" [fa0ee9e3-e140-4b82-9501-502ff2a2d680] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.005129985s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-639347 exec deployment/netcat -- nslookup kubernetes.default
E0407 13:20:26.418428 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/default-k8s-diff-port-074365/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-639347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-639347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-639347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-639347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-639347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-639347 "pgrep -a kubelet"
I0407 13:20:34.886849 1243895 config.go:182] Loaded profile config "enable-default-cni-639347": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-639347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-6r6ks" [7c9b706b-f805-443d-9853-854a9f9a7a27] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0407 13:20:36.660643 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/default-k8s-diff-port-074365/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-6r6ks" [7c9b706b-f805-443d-9853-854a9f9a7a27] Running
E0407 13:20:43.995949 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/no-preload-310575/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005237504s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

TestNetworkPlugins/group/bridge/Start (87.96s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-639347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-639347 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m27.964347281s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.96s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-639347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-639347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-639347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tp5m5" [df7dff55-b767-4607-af2b-3f7edd89d1f9] Running
E0407 13:21:22.339557 1243895 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00484854s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-639347 "pgrep -a kubelet"
I0407 13:21:23.777325 1243895 config.go:182] Loaded profile config "flannel-639347": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-639347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-v84zx" [30d253a8-a509-4dcc-ab6a-ab6eb4ab7c5e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-v84zx" [30d253a8-a509-4dcc-ab6a-ab6eb4ab7c5e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004549497s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-639347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-639347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-639347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-639347 "pgrep -a kubelet"
I0407 13:22:13.971371 1243895 config.go:182] Loaded profile config "bridge-639347": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

TestNetworkPlugins/group/bridge/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-639347 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-gpkh6" [0e0b72a9-f18e-4651-91ec-626c776838d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-gpkh6" [0e0b72a9-f18e-4651-91ec-626c776838d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003884244s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-639347 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-639347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-639347 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

Test skip (39/329)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.2/cached-images 0
15 TestDownloadOnly/v1.32.2/binaries 0
16 TestDownloadOnly/v1.32.2/kubectl 0
20 TestDownloadOnlyKic 0
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
258 TestStartStop/group/disable-driver-mounts 0.18
269 TestNetworkPlugins/group/kubenet 3.8
277 TestNetworkPlugins/group/cilium 3.71

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-415826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-415826
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (3.8s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-639347 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-639347

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-639347

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-639347

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-639347

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-639347

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-639347

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-639347

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-639347

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-639347

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-639347

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-639347

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-639347" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-639347" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20602-1236688/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:04:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.204:8443
  name: old-k8s-version-333676
contexts:
- context:
    cluster: old-k8s-version-333676
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:04:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: old-k8s-version-333676
  name: old-k8s-version-333676
current-context: ""
kind: Config
preferences: {}
users:
- name: old-k8s-version-333676
  user:
    client-certificate: /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt
    client-key: /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-639347

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-639347"

                                                
                                                
----------------------- debugLogs end: kubenet-639347 [took: 3.626023793s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-639347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-639347
--- SKIP: TestNetworkPlugins/group/kubenet (3.80s)

TestNetworkPlugins/group/cilium (3.71s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-639347 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-639347

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-639347

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-639347

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-639347

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-639347

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-639347

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-639347

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-639347

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-639347

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-639347

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-639347

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-639347" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-639347

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-639347

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-639347

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-639347

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-639347" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-639347" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20602-1236688/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:04:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.50.204:8443
  name: old-k8s-version-333676
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20602-1236688/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:05:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.69:8443
  name: running-upgrade-065820
contexts:
- context:
    cluster: old-k8s-version-333676
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:04:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: old-k8s-version-333676
  name: old-k8s-version-333676
- context:
    cluster: running-upgrade-065820
    user: running-upgrade-065820
  name: running-upgrade-065820
current-context: running-upgrade-065820
kind: Config
preferences: {}
users:
- name: old-k8s-version-333676
  user:
    client-certificate: /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.crt
    client-key: /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/old-k8s-version-333676/client.key
- name: running-upgrade-065820
  user:
    client-certificate: /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/running-upgrade-065820/client.crt
    client-key: /home/jenkins/minikube-integration/20602-1236688/.minikube/profiles/running-upgrade-065820/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-639347

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-639347" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-639347"

                                                
                                                
----------------------- debugLogs end: cilium-639347 [took: 3.540952274s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-639347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-639347
--- SKIP: TestNetworkPlugins/group/cilium (3.71s)