Test Report: KVM_Linux 15642

4cf467cecc4d49355139c24bc1420f3978a367dd:2023-01-14:27426

Failed tests (3/307)

| Order | Failed Test                               | Duration (s) |
|-------|-------------------------------------------|--------------|
| 76    | TestFunctional/parallel/DashboardCmd      | 4.74         |
| 199   | TestMultiNode/serial/ValidateNameConflict | 39.11        |
| 315   | TestNetworkPlugins/group/kubenet/HairPin  | 59.44        |
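
To re-run a single failure from this table against the same binary, the integration suite can be filtered by test name with the standard Go test runner, e.g. (a built out/minikube-linux-amd64 is assumed, and any driver or start-args flags depend on the local checkout):

	go test -v -timeout 30m ./test/integration -run "TestFunctional/parallel/DashboardCmd"
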
TestFunctional/parallel/DashboardCmd (4.74s)
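
For context, the assertion at functional_test.go:911 amounts to starting `minikube dashboard --url` as a daemon and waiting for a stdout line that parses as a URL; the 4.74s failure above is that wait giving up. A minimal Go sketch of the same shape (function name and timeout are illustrative, not the test's exact code):

	package dashboardcheck

	import (
		"bufio"
		"fmt"
		"net/url"
		"os/exec"
		"strings"
		"time"
	)

	// waitForURL starts cmd and returns the first stdout line that parses as a
	// URL with a non-empty host, or an error once the timeout expires.
	func waitForURL(cmd *exec.Cmd, timeout time.Duration) (*url.URL, error) {
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			return nil, err
		}
		if err := cmd.Start(); err != nil {
			return nil, err
		}
		found := make(chan *url.URL, 1)
		go func() {
			sc := bufio.NewScanner(stdout)
			for sc.Scan() {
				if u, err := url.Parse(strings.TrimSpace(sc.Text())); err == nil && u.Host != "" {
					found <- u
					return
				}
			}
		}()
		select {
		case u := <-found:
			return u, nil
		case <-time.After(timeout):
			return nil, fmt.Errorf("output didn't produce a URL")
		}
	}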

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-101929 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:911: output didn't produce a URL
functional_test.go:903: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-101929 --alsologtostderr -v=1] ...
functional_test.go:903: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-101929 --alsologtostderr -v=1] stdout:
functional_test.go:903: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-101929 --alsologtostderr -v=1] stderr:
I0114 10:22:48.366932   15909 out.go:296] Setting OutFile to fd 1 ...
I0114 10:22:48.367114   15909 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 10:22:48.367121   15909 out.go:309] Setting ErrFile to fd 2...
I0114 10:22:48.367133   15909 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 10:22:48.367361   15909 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-4002/.minikube/bin
I0114 10:22:48.368004   15909 mustload.go:65] Loading cluster: functional-101929
I0114 10:22:48.368489   15909 config.go:180] Loaded profile config "functional-101929": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 10:22:48.369043   15909 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0114 10:22:48.369098   15909 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:22:48.385348   15909 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:46591
I0114 10:22:48.385757   15909 main.go:134] libmachine: () Calling .GetVersion
I0114 10:22:48.386330   15909 main.go:134] libmachine: Using API Version  1
I0114 10:22:48.386355   15909 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:22:48.386702   15909 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:22:48.386896   15909 main.go:134] libmachine: (functional-101929) Calling .GetState
I0114 10:22:48.388486   15909 host.go:66] Checking if "functional-101929" exists ...
I0114 10:22:48.388750   15909 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0114 10:22:48.388789   15909 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:22:48.405149   15909 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:33789
I0114 10:22:48.405547   15909 main.go:134] libmachine: () Calling .GetVersion
I0114 10:22:48.406021   15909 main.go:134] libmachine: Using API Version  1
I0114 10:22:48.406049   15909 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:22:48.406414   15909 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:22:48.406573   15909 main.go:134] libmachine: (functional-101929) Calling .DriverName
I0114 10:22:48.406688   15909 api_server.go:165] Checking apiserver status ...
I0114 10:22:48.406727   15909 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 10:22:48.406752   15909 main.go:134] libmachine: (functional-101929) Calling .GetSSHHostname
I0114 10:22:48.409660   15909 main.go:134] libmachine: (functional-101929) DBG | domain functional-101929 has defined MAC address 52:54:00:1c:5a:e6 in network mk-functional-101929
I0114 10:22:48.410126   15909 main.go:134] libmachine: (functional-101929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:5a:e6", ip: ""} in network mk-functional-101929: {Iface:virbr1 ExpiryTime:2023-01-14 11:19:44 +0000 UTC Type:0 Mac:52:54:00:1c:5a:e6 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-101929 Clientid:01:52:54:00:1c:5a:e6}
I0114 10:22:48.410157   15909 main.go:134] libmachine: (functional-101929) DBG | domain functional-101929 has defined IP address 192.168.39.97 and MAC address 52:54:00:1c:5a:e6 in network mk-functional-101929
I0114 10:22:48.410257   15909 main.go:134] libmachine: (functional-101929) Calling .GetSSHPort
I0114 10:22:48.410419   15909 main.go:134] libmachine: (functional-101929) Calling .GetSSHKeyPath
I0114 10:22:48.410513   15909 main.go:134] libmachine: (functional-101929) Calling .GetSSHUsername
I0114 10:22:48.410599   15909 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-4002/.minikube/machines/functional-101929/id_rsa Username:docker}
I0114 10:22:48.521778   15909 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/8200/cgroup
I0114 10:22:48.537841   15909 api_server.go:181] apiserver freezer: "9:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea9a85bf2ce5621968ebde74c119e86b.slice/docker-662249b9b6d3dfdde8f9e1885635babde59c608e53d06fde669650ea7da5d0bf.scope"
I0114 10:22:48.537907   15909 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea9a85bf2ce5621968ebde74c119e86b.slice/docker-662249b9b6d3dfdde8f9e1885635babde59c608e53d06fde669650ea7da5d0bf.scope/freezer.state
I0114 10:22:48.551246   15909 api_server.go:203] freezer state: "THAWED"
I0114 10:22:48.551276   15909 api_server.go:252] Checking apiserver healthz at https://192.168.39.97:8441/healthz ...
I0114 10:22:48.558777   15909 api_server.go:278] https://192.168.39.97:8441/healthz returned 200:
ok
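
The three steps above are minikube's apiserver health check: locate the kube-apiserver pid with pgrep, confirm its freezer cgroup is THAWED, then GET /healthz over TLS. A rough Go equivalent of the final probe, assuming the profile CA at the logged path signs the apiserver's serving certificate (the real code builds its TLS config from the kubeconfig rather than from hard-coded paths):

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		// CA path and host:port are this run's values; hypothetical elsewhere.
		ca, err := os.ReadFile("/home/jenkins/minikube-integration/15642-4002/.minikube/ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(ca)
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
		}
		resp, err := client.Get("https://192.168.39.97:8441/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status) // the run above got 200
	}
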
W0114 10:22:48.558826   15909 out.go:239] * Enabling dashboard ...
* Enabling dashboard ...
I0114 10:22:48.559042   15909 config.go:180] Loaded profile config "functional-101929": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 10:22:48.559052   15909 addons.go:65] Setting dashboard=true in profile "functional-101929"
I0114 10:22:48.559060   15909 addons.go:227] Setting addon dashboard=true in "functional-101929"
I0114 10:22:48.559086   15909 host.go:66] Checking if "functional-101929" exists ...
I0114 10:22:48.559447   15909 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0114 10:22:48.559484   15909 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:22:48.581964   15909 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:37381
I0114 10:22:48.582412   15909 main.go:134] libmachine: () Calling .GetVersion
I0114 10:22:48.582923   15909 main.go:134] libmachine: Using API Version  1
I0114 10:22:48.582952   15909 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:22:48.583259   15909 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:22:48.583796   15909 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0114 10:22:48.583834   15909 main.go:134] libmachine: Launching plugin server for driver kvm2
I0114 10:22:48.599320   15909 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:45619
I0114 10:22:48.599745   15909 main.go:134] libmachine: () Calling .GetVersion
I0114 10:22:48.600239   15909 main.go:134] libmachine: Using API Version  1
I0114 10:22:48.600266   15909 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 10:22:48.600606   15909 main.go:134] libmachine: () Calling .GetMachineName
I0114 10:22:48.600758   15909 main.go:134] libmachine: (functional-101929) Calling .GetState
I0114 10:22:48.602600   15909 main.go:134] libmachine: (functional-101929) Calling .DriverName
I0114 10:22:48.607036   15909 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0114 10:22:48.608599   15909 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0114 10:22:48.609924   15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0114 10:22:48.609949   15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0114 10:22:48.609970   15909 main.go:134] libmachine: (functional-101929) Calling .GetSSHHostname
I0114 10:22:48.613382   15909 main.go:134] libmachine: (functional-101929) DBG | domain functional-101929 has defined MAC address 52:54:00:1c:5a:e6 in network mk-functional-101929
I0114 10:22:48.613778   15909 main.go:134] libmachine: (functional-101929) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:5a:e6", ip: ""} in network mk-functional-101929: {Iface:virbr1 ExpiryTime:2023-01-14 11:19:44 +0000 UTC Type:0 Mac:52:54:00:1c:5a:e6 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:functional-101929 Clientid:01:52:54:00:1c:5a:e6}
I0114 10:22:48.613803   15909 main.go:134] libmachine: (functional-101929) DBG | domain functional-101929 has defined IP address 192.168.39.97 and MAC address 52:54:00:1c:5a:e6 in network mk-functional-101929
I0114 10:22:48.614021   15909 main.go:134] libmachine: (functional-101929) Calling .GetSSHPort
I0114 10:22:48.614175   15909 main.go:134] libmachine: (functional-101929) Calling .GetSSHKeyPath
I0114 10:22:48.614304   15909 main.go:134] libmachine: (functional-101929) Calling .GetSSHUsername
I0114 10:22:48.614412   15909 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-4002/.minikube/machines/functional-101929/id_rsa Username:docker}
I0114 10:22:48.732976   15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0114 10:22:48.732997   15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0114 10:22:48.753988   15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0114 10:22:48.754011   15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0114 10:22:48.791400   15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0114 10:22:48.791426   15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0114 10:22:48.818734   15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0114 10:22:48.818755   15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0114 10:22:48.849586   15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
I0114 10:22:48.849610   15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0114 10:22:48.879509   15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0114 10:22:48.879529   15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0114 10:22:48.909810   15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0114 10:22:48.909837   15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0114 10:22:48.937225   15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0114 10:22:48.937250   15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0114 10:22:48.966707   15909 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0114 10:22:48.966732   15909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0114 10:22:48.985882   15909 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0114 10:22:50.276180   15909 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.290252341s)
I0114 10:22:50.276250   15909 main.go:134] libmachine: Making call to close driver server
I0114 10:22:50.276270   15909 main.go:134] libmachine: (functional-101929) Calling .Close
I0114 10:22:50.276530   15909 main.go:134] libmachine: Successfully made call to close driver server
I0114 10:22:50.276553   15909 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 10:22:50.276563   15909 main.go:134] libmachine: Making call to close driver server
I0114 10:22:50.276572   15909 main.go:134] libmachine: (functional-101929) Calling .Close
I0114 10:22:50.276768   15909 main.go:134] libmachine: (functional-101929) DBG | Closing plugin on server side
I0114 10:22:50.276809   15909 main.go:134] libmachine: Successfully made call to close driver server
I0114 10:22:50.276821   15909 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 10:22:50.278924   15909 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-101929 addons enable metrics-server

I0114 10:22:50.280259   15909 addons.go:190] Writing out "functional-101929" config to set dashboard=true...
W0114 10:22:50.280511   15909 out.go:239] * Verifying dashboard health ...
* Verifying dashboard health ...
I0114 10:22:50.281205   15909 kapi.go:59] client config for functional-101929: &rest.Config{Host:"https://192.168.39.97:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.key", CAFile:"/home/jenkins/minikube-integration/15642-4002/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0114 10:22:50.290184   15909 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  0042f998-ead6-4020-94cd-3f5903597aa0 767 0 2023-01-14 10:22:50 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2023-01-14 10:22:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.103.22.190,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.103.22.190],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0114 10:22:50.290316   15909 out.go:239] * Launching proxy ...
* Launching proxy ...
I0114 10:22:50.290370   15909 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-101929 proxy --port 36195]
I0114 10:22:50.290615   15909 dashboard.go:157] Waiting for kubectl to output host:port ...
I0114 10:22:50.340031   15909 out.go:177] 
W0114 10:22:50.341841   15909 out.go:239] X Exiting due to HOST_KUBECTL_PROXY: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: readByteWithTimeout: EOF
W0114 10:22:50.341865   15909 out.go:239] * 
* 
W0114 10:22:50.343987   15909 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0114 10:22:50.345583   15909 out.go:177] 
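
The fatal step is the proxy launch: per dashboard.go:152-157 above, minikube executes `kubectl --context functional-101929 proxy --port 36195` and reads the child's stdout byte by byte, waiting for kubectl's usual `Starting to serve on 127.0.0.1:36195` line. `readByteWithTimeout: EOF` therefore means kubectl closed stdout (i.e., exited) before printing a host:port. A sketch of that style of read loop (an assumption about the shape of the code, not minikube's actual implementation; the reader goroutine leaked on timeout is tolerated for brevity):

	package proxywait

	import (
		"fmt"
		"io"
		"time"
	)

	// readLineWithTimeout reads r one byte at a time and returns the line read
	// so far together with a timeout error or the reader's own error. An EOF
	// from the reader is exactly the failure reported above.
	func readLineWithTimeout(r io.Reader, timeout time.Duration) (string, error) {
		type res struct {
			b   byte
			err error
		}
		ch := make(chan res, 1)
		buf := make([]byte, 1)
		var line []byte
		deadline := time.After(timeout)
		for {
			go func() {
				_, err := io.ReadFull(r, buf)
				ch <- res{buf[0], err}
			}()
			select {
			case out := <-ch:
				if out.err != nil {
					return string(line), fmt.Errorf("readByteWithTimeout: %v", out.err)
				}
				if out.b == '\n' {
					return string(line), nil
				}
				line = append(line, out.b)
			case <-deadline:
				return string(line), fmt.Errorf("timed out after %v", timeout)
			}
		}
	}
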
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-101929 -n functional-101929

=== CONT  TestFunctional/parallel/DashboardCmd
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 logs -n 25

=== CONT  TestFunctional/parallel/DashboardCmd
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-101929 logs -n 25: (1.751241702s)

=== CONT  TestFunctional/parallel/DashboardCmd
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command   |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh        | functional-101929 ssh findmnt                                            | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	|            | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh        | functional-101929 ssh -- ls                                              | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	|            | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh        | functional-101929 ssh cat                                                | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	|            | /mount-9p/test-1673691756250419096                                       |                   |         |         |                     |                     |
	| ssh        | functional-101929 ssh stat                                               | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	|            | /mount-9p/created-by-test                                                |                   |         |         |                     |                     |
	| ssh        | functional-101929 ssh stat                                               | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	|            | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh        | functional-101929 ssh sudo                                               | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	|            | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| service    | functional-101929 service                                                | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	|            | hello-node-connect --url                                                 |                   |         |         |                     |                     |
	| ssh        | functional-101929 ssh findmnt                                            | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC |                     |
	|            | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| mount      | -p functional-101929                                                     | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC |                     |
	|            | /tmp/TestFunctionalparallelMountCmdspecific-port3360609635/001:/mount-9p |                   |         |         |                     |                     |
	|            | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| service    | functional-101929 service list                                           | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	| service    | functional-101929 service                                                | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	|            | --namespace=default --https                                              |                   |         |         |                     |                     |
	|            | --url hello-node                                                         |                   |         |         |                     |                     |
	| service    | functional-101929                                                        | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	|            | service hello-node --url                                                 |                   |         |         |                     |                     |
	|            | --format={{.IP}}                                                         |                   |         |         |                     |                     |
	| ssh        | functional-101929 ssh findmnt                                            | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	|            | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| service    | functional-101929 service                                                | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	|            | hello-node --url                                                         |                   |         |         |                     |                     |
	| ssh        | functional-101929 ssh -- ls                                              | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	|            | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| start      | -p functional-101929                                                     | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC |                     |
	|            | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|            | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|            | --driver=kvm2                                                            |                   |         |         |                     |                     |
	| start      | -p functional-101929                                                     | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC |                     |
	|            | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|            | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|            | --driver=kvm2                                                            |                   |         |         |                     |                     |
	| dashboard  | --url --port 36195                                                       | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC |                     |
	|            | -p functional-101929                                                     |                   |         |         |                     |                     |
	|            | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| start      | -p functional-101929 --dry-run                                           | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC |                     |
	|            | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	|            | --driver=kvm2                                                            |                   |         |         |                     |                     |
	| ssh        | functional-101929 ssh sudo                                               | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC |                     |
	|            | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| ssh        | functional-101929 ssh sudo                                               | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC |                     |
	|            | systemctl is-active crio                                                 |                   |         |         |                     |                     |
	| license    |                                                                          | minikube          | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	| ssh        | functional-101929 ssh sudo cat                                           | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	|            | /etc/test/nested/copy/10851/hosts                                        |                   |         |         |                     |                     |
	| docker-env | functional-101929 docker-env                                             | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	| docker-env | functional-101929 docker-env                                             | functional-101929 | jenkins | v1.28.0 | 14 Jan 23 10:22 UTC | 14 Jan 23 10:22 UTC |
	|------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:22:48
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:22:48.449461   15946 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:22:48.449569   15946 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:22:48.449578   15946 out.go:309] Setting ErrFile to fd 2...
	I0114 10:22:48.449585   15946 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:22:48.449698   15946 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-4002/.minikube/bin
	I0114 10:22:48.450241   15946 out.go:303] Setting JSON to false
	I0114 10:22:48.451133   15946 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3915,"bootTime":1673687854,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:22:48.451193   15946 start.go:135] virtualization: kvm guest
	I0114 10:22:48.453545   15946 out.go:177] * [functional-101929] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:22:48.455106   15946 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:22:48.454992   15946 notify.go:220] Checking for updates...
	I0114 10:22:48.456581   15946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:22:48.458165   15946 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-4002/kubeconfig
	I0114 10:22:48.459759   15946 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-4002/.minikube
	I0114 10:22:48.461218   15946 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:22:48.463149   15946 config.go:180] Loaded profile config "functional-101929": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 10:22:48.463718   15946 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0114 10:22:48.463782   15946 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:22:48.482743   15946 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:44223
	I0114 10:22:48.483079   15946 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:22:48.483588   15946 main.go:134] libmachine: Using API Version  1
	I0114 10:22:48.483614   15946 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:22:48.483986   15946 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:22:48.484201   15946 main.go:134] libmachine: (functional-101929) Calling .DriverName
	I0114 10:22:48.484397   15946 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:22:48.484713   15946 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0114 10:22:48.484736   15946 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:22:48.499088   15946 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0114 10:22:48.499465   15946 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:22:48.499895   15946 main.go:134] libmachine: Using API Version  1
	I0114 10:22:48.499917   15946 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:22:48.500331   15946 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:22:48.500508   15946 main.go:134] libmachine: (functional-101929) Calling .DriverName
	I0114 10:22:48.534393   15946 out.go:177] * Using the kvm2 driver based on existing profile
	I0114 10:22:48.535652   15946 start.go:294] selected driver: kvm2
	I0114 10:22:48.535675   15946 start.go:838] validating driver "kvm2" against &{Name:functional-101929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-101929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:22:48.535843   15946 start.go:849] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:22:48.537009   15946 cni.go:95] Creating CNI manager for ""
	I0114 10:22:48.537031   15946 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 10:22:48.537045   15946 start_flags.go:319] config:
	{Name:functional-101929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-101929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:22:48.538642   15946 out.go:177] * dry-run validation complete!
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-01-14 10:19:40 UTC, ends at Sat 2023-01-14 10:22:51 UTC. --
	Jan 14 10:22:42 functional-101929 dockerd[7031]: time="2023-01-14T10:22:42.990206598Z" level=info msg="shim disconnected" id=4fe6f4288a44c734a242b9c9120e6c7f2c8665fdaff3e87a560f62a660dd2492
	Jan 14 10:22:42 functional-101929 dockerd[7031]: time="2023-01-14T10:22:42.990274218Z" level=warning msg="cleaning up after shim disconnected" id=4fe6f4288a44c734a242b9c9120e6c7f2c8665fdaff3e87a560f62a660dd2492 namespace=moby
	Jan 14 10:22:42 functional-101929 dockerd[7031]: time="2023-01-14T10:22:42.990292245Z" level=info msg="cleaning up dead shim"
	Jan 14 10:22:43 functional-101929 dockerd[7031]: time="2023-01-14T10:22:43.012006286Z" level=warning msg="cleanup warnings time=\"2023-01-14T10:22:42Z\" level=info msg=\"starting signal loop\" namespace=moby pid=10179 runtime=io.containerd.runc.v2\n"
	Jan 14 10:22:44 functional-101929 dockerd[7025]: time="2023-01-14T10:22:44.150920177Z" level=info msg="ignoring event" container=0c897e21caec7104cb976b08f41f4d4c391aa7b7d6bc56b4566d69244e7ccc53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:22:44 functional-101929 dockerd[7031]: time="2023-01-14T10:22:44.151476621Z" level=info msg="shim disconnected" id=0c897e21caec7104cb976b08f41f4d4c391aa7b7d6bc56b4566d69244e7ccc53
	Jan 14 10:22:44 functional-101929 dockerd[7031]: time="2023-01-14T10:22:44.151546439Z" level=warning msg="cleaning up after shim disconnected" id=0c897e21caec7104cb976b08f41f4d4c391aa7b7d6bc56b4566d69244e7ccc53 namespace=moby
	Jan 14 10:22:44 functional-101929 dockerd[7031]: time="2023-01-14T10:22:44.151558182Z" level=info msg="cleaning up dead shim"
	Jan 14 10:22:44 functional-101929 dockerd[7031]: time="2023-01-14T10:22:44.164342160Z" level=warning msg="cleanup warnings time=\"2023-01-14T10:22:44Z\" level=info msg=\"starting signal loop\" namespace=moby pid=10213 runtime=io.containerd.runc.v2\n"
	Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.385741640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.385786393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.385968521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.386885951Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0b8bb602074f1626e9e9a014202a6ae03103f6b5159563cb7fa523a9fc5b9bfc pid=10522 runtime=io.containerd.runc.v2
	Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.408494138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.408574821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.408586431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.409002604Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8ab6f34547d848a8c5c277663f96e24242ca90d5f651a13e2650b30bd6766a77 pid=10540 runtime=io.containerd.runc.v2
	Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.729260096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.729347707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.729360600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.729490994Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6ab99194b24a7f64a71ac8e0a0ce5a9df1ebfdbcb5a8bdde0e3a449847c16ca8 pid=10655 runtime=io.containerd.runc.v2
	Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.885027826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.885131476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.885149877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 10:22:50 functional-101929 dockerd[7031]: time="2023-01-14T10:22:50.887896325Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5dfca06ae9d25542b8dee48710f9fd7270d1f275fb90607ab6ea226aa258ee67 pid=10694 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	0b8bb602074f1       nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e                         1 second ago         Running             myfrontend                0                   d270042e693bf
	4fe6f4288a44c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 seconds ago        Exited              mount-munger              0                   0c897e21caec7
	ee51d49becfd8       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969         11 seconds ago       Running             echoserver                0                   2fa22299385e8
	cec9dbe775c94       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969         11 seconds ago       Running             echoserver                0                   19e664be1ffc8
	57fd1ef65853c       6e38f40d628db                                                                                         36 seconds ago       Running             storage-provisioner       3                   2edd71218fb3a
	f4cf41b7a789b       5185b96f0becf                                                                                         36 seconds ago       Running             coredns                   3                   b33e8d57bcf3a
	346374e1a8105       beaaf00edd38a                                                                                         36 seconds ago       Running             kube-proxy                2                   2d16d6c469917
	d2acb554058ad       6d23ec0e8b87e                                                                                         43 seconds ago       Running             kube-scheduler            3                   b5a05bd4b5bfd
	63d23c5f3319d       a8a176a5d5d69                                                                                         43 seconds ago       Running             etcd                      3                   fb2246303985c
	d6ff8dba06d14       6039992312758                                                                                         44 seconds ago       Running             kube-controller-manager   3                   9853a039e0174
	662249b9b6d3d       0346dbd74bcb9                                                                                         44 seconds ago       Running             kube-apiserver            0                   30a60d02f02ef
	fe58c94596efc       0346dbd74bcb9                                                                                         About a minute ago   Exited              kube-apiserver            2                   5f7c1ec799c65
	354e7efc780ef       5185b96f0becf                                                                                         About a minute ago   Exited              coredns                   2                   65716b73ad53a
	a526f6daec052       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       2                   a4b749c848639
	0efe7d2376321       a8a176a5d5d69                                                                                         About a minute ago   Exited              etcd                      2                   331e4edc23c3f
	667c195cb4078       6039992312758                                                                                         About a minute ago   Exited              kube-controller-manager   2                   50dbedfb01b78
	c161ff402f218       6d23ec0e8b87e                                                                                         About a minute ago   Exited              kube-scheduler            2                   e0a64980087e7
	b00430b089413       beaaf00edd38a                                                                                         About a minute ago   Exited              kube-proxy                1                   579b275b3256b
	
	* 
	* ==> coredns [354e7efc780e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 9a34f9264402cb585a9f45fa2022f72259f38c0069ff0551404dff6d373c3318d40dccb7d57503b326f0f19faa2110be407c171bae22df1ef9dd2930a017b6e6
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [f4cf41b7a789] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 9a34f9264402cb585a9f45fa2022f72259f38c0069ff0551404dff6d373c3318d40dccb7d57503b326f0f19faa2110be407c171bae22df1ef9dd2930a017b6e6
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> describe nodes <==
	* Name:               functional-101929
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-101929
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
	                    minikube.k8s.io/name=functional-101929
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_14T10_20_24_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 10:20:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-101929
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Jan 2023 10:22:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Jan 2023 10:22:13 +0000   Sat, 14 Jan 2023 10:20:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Jan 2023 10:22:13 +0000   Sat, 14 Jan 2023 10:20:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Jan 2023 10:22:13 +0000   Sat, 14 Jan 2023 10:20:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Jan 2023 10:22:13 +0000   Sat, 14 Jan 2023 10:20:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    functional-101929
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 f8d25365c3fc43349fafa995aab525ca
	  System UUID:                f8d25365-c3fc-4334-9faf-a995aab525ca
	  Boot ID:                    e6324964-2a4d-4979-8db2-1d6e2da96aae
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.21
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5fcdfb5cc4-p2jf4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  default                     hello-node-connect-6458c8fb6f-qmp48           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  default                     mysql-596b7fcdbf-mphb5                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    2s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-565d847f94-prqt2                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m15s
	  kube-system                 etcd-functional-101929                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m27s
	  kube-system                 kube-apiserver-functional-101929              250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-functional-101929     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-wjfgl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-scheduler-functional-101929              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f5c79dd8f-qvjjj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-f87d45d87-2qxk5          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m12s              kube-proxy       
	  Normal  Starting                 36s                kube-proxy       
	  Normal  Starting                 90s                kube-proxy       
	  Normal  Starting                 2m27s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m27s              kubelet          Node functional-101929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m27s              kubelet          Node functional-101929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s              kubelet          Node functional-101929 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m27s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m16s              kubelet          Node functional-101929 status is now: NodeReady
	  Normal  RegisteredNode           2m16s              node-controller  Node functional-101929 event: Registered Node functional-101929 in Controller
	  Normal  NodeNotReady             116s               kubelet          Node functional-101929 status is now: NodeNotReady
	  Normal  RegisteredNode           78s                node-controller  Node functional-101929 event: Registered Node functional-101929 in Controller
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node functional-101929 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node functional-101929 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x7 over 45s)  kubelet          Node functional-101929 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  45s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           25s                node-controller  Node functional-101929 event: Registered Node functional-101929 in Controller
	
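The node description above can be re-queried for post-mortem work; a minimal sketch, assuming the functional-101929 profile from this run is still up:

  # Same dump the log collector captured, straight from kubectl
  kubectl --context functional-101929 describe node functional-101929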
	* 
	* ==> dmesg <==
	* [Jan14 10:20] systemd-fstab-generator[735]: Ignoring "noauto" for root device
	[  +3.877665] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.298388] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +0.112483] systemd-fstab-generator[908]: Ignoring "noauto" for root device
	[  +0.101344] systemd-fstab-generator[919]: Ignoring "noauto" for root device
	[  +1.443152] systemd-fstab-generator[1070]: Ignoring "noauto" for root device
	[  +0.104745] systemd-fstab-generator[1081]: Ignoring "noauto" for root device
	[  +4.873231] systemd-fstab-generator[1346]: Ignoring "noauto" for root device
	[  +0.451045] kauditd_printk_skb: 68 callbacks suppressed
	[ +11.256737] systemd-fstab-generator[2011]: Ignoring "noauto" for root device
	[ +12.967335] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.050358] kauditd_printk_skb: 20 callbacks suppressed
	[  +3.879450] systemd-fstab-generator[3187]: Ignoring "noauto" for root device
	[  +0.148623] systemd-fstab-generator[3198]: Ignoring "noauto" for root device
	[  +0.155205] systemd-fstab-generator[3209]: Ignoring "noauto" for root device
	[Jan14 10:21] systemd-fstab-generator[4597]: Ignoring "noauto" for root device
	[  +0.145470] systemd-fstab-generator[4620]: Ignoring "noauto" for root device
	[ +10.516240] kauditd_printk_skb: 31 callbacks suppressed
	[ +24.395470] systemd-fstab-generator[6229]: Ignoring "noauto" for root device
	[  +0.177527] systemd-fstab-generator[6315]: Ignoring "noauto" for root device
	[  +0.174314] systemd-fstab-generator[6350]: Ignoring "noauto" for root device
	[Jan14 10:22] systemd-fstab-generator[7435]: Ignoring "noauto" for root device
	[  +0.134951] systemd-fstab-generator[7471]: Ignoring "noauto" for root device
	[  +2.116400] systemd-fstab-generator[7809]: Ignoring "noauto" for root device
	[  +8.181523] kauditd_printk_skb: 31 callbacks suppressed
	
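The dmesg excerpt is taken from inside the KVM guest; a sketch for pulling the full ring buffer, assuming the VM is still running:

  # Run dmesg inside the minikube VM over ssh
  minikube -p functional-101929 ssh -- dmesg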
	* 
	* ==> etcd [0efe7d237632] <==
	* {"level":"info","ts":"2023-01-14T10:21:16.761Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-14T10:21:16.761Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2023-01-14T10:21:16.761Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2023-01-14T10:21:18.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 is starting a new election at term 3"}
	{"level":"info","ts":"2023-01-14T10:21:18.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-01-14T10:21:18.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgPreVoteResp from f61fae125a956d36 at term 3"}
	{"level":"info","ts":"2023-01-14T10:21:18.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became candidate at term 4"}
	{"level":"info","ts":"2023-01-14T10:21:18.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgVoteResp from f61fae125a956d36 at term 4"}
	{"level":"info","ts":"2023-01-14T10:21:18.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became leader at term 4"}
	{"level":"info","ts":"2023-01-14T10:21:18.030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f61fae125a956d36 elected leader f61fae125a956d36 at term 4"}
	{"level":"info","ts":"2023-01-14T10:21:18.036Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f61fae125a956d36","local-member-attributes":"{Name:functional-101929 ClientURLs:[https://192.168.39.97:2379]}","request-path":"/0/members/f61fae125a956d36/attributes","cluster-id":"6e56e32a1e97f390","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-14T10:21:18.036Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:21:18.037Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:21:18.038Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.97:2379"}
	{"level":"info","ts":"2023-01-14T10:21:18.038Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-14T10:21:18.039Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T10:21:18.039Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-14T10:21:46.619Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-01-14T10:21:46.619Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"functional-101929","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"]}
	WARNING: 2023/01/14 10:21:46 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2023/01/14 10:21:46 [core] grpc: addrConn.createTransport failed to connect to {192.168.39.97:2379 192.168.39.97:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.39.97:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2023-01-14T10:21:46.646Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f61fae125a956d36","current-leader-member-id":"f61fae125a956d36"}
	{"level":"info","ts":"2023-01-14T10:21:46.649Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2023-01-14T10:21:46.650Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2023-01-14T10:21:46.650Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"functional-101929","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"]}
	
	* 
	* ==> etcd [63d23c5f3319] <==
	* {"level":"info","ts":"2023-01-14T10:22:09.776Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-14T10:22:09.777Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2023-01-14T10:22:09.777Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.97:2380"}
	{"level":"info","ts":"2023-01-14T10:22:09.777Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-14T10:22:09.777Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f61fae125a956d36","initial-advertise-peer-urls":["https://192.168.39.97:2380"],"listen-peer-urls":["https://192.168.39.97:2380"],"advertise-client-urls":["https://192.168.39.97:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.97:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-14T10:22:11.119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 is starting a new election at term 4"}
	{"level":"info","ts":"2023-01-14T10:22:11.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became pre-candidate at term 4"}
	{"level":"info","ts":"2023-01-14T10:22:11.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgPreVoteResp from f61fae125a956d36 at term 4"}
	{"level":"info","ts":"2023-01-14T10:22:11.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became candidate at term 5"}
	{"level":"info","ts":"2023-01-14T10:22:11.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 received MsgVoteResp from f61fae125a956d36 at term 5"}
	{"level":"info","ts":"2023-01-14T10:22:11.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f61fae125a956d36 became leader at term 5"}
	{"level":"info","ts":"2023-01-14T10:22:11.120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f61fae125a956d36 elected leader f61fae125a956d36 at term 5"}
	{"level":"info","ts":"2023-01-14T10:22:11.122Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f61fae125a956d36","local-member-attributes":"{Name:functional-101929 ClientURLs:[https://192.168.39.97:2379]}","request-path":"/0/members/f61fae125a956d36/attributes","cluster-id":"6e56e32a1e97f390","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-14T10:22:11.122Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:22:11.123Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T10:22:11.123Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-14T10:22:11.123Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:22:11.123Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-14T10:22:11.125Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.97:2379"}
	{"level":"warn","ts":"2023-01-14T10:22:50.088Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"335.019796ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7869624388982618964 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-f87d45d87-2qxk5\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-f87d45d87-2qxk5\" value_size:2687 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-01-14T10:22:50.089Z","caller":"traceutil/trace.go:171","msg":"trace[489555664] transaction","detail":"{read_only:false; response_revision:750; number_of_response:1; }","duration":"336.463864ms","start":"2023-01-14T10:22:49.752Z","end":"2023-01-14T10:22:50.089Z","steps":["trace[489555664] 'compare'  (duration: 334.675735ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-14T10:22:50.089Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-14T10:22:49.752Z","time spent":"336.802641ms","remote":"127.0.0.1:39558","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2767,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-f87d45d87-2qxk5\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-f87d45d87-2qxk5\" value_size:2687 >> failure:<>"}
	{"level":"info","ts":"2023-01-14T10:22:50.095Z","caller":"traceutil/trace.go:171","msg":"trace[936770606] transaction","detail":"{read_only:false; response_revision:751; number_of_response:1; }","duration":"342.428132ms","start":"2023-01-14T10:22:49.752Z","end":"2023-01-14T10:22:50.095Z","steps":["trace[936770606] 'process raft request'  (duration: 342.080307ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-14T10:22:50.095Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-14T10:22:49.752Z","time spent":"342.488803ms","remote":"127.0.0.1:39544","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":987,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-key-holder\" mod_revision:0 > success:<request_put:<key:\"/registry/secrets/kubernetes-dashboard/kubernetes-dashboard-key-holder\" value_size:909 >> failure:<>"}
	{"level":"info","ts":"2023-01-14T10:22:50.095Z","caller":"traceutil/trace.go:171","msg":"trace[1524391402] transaction","detail":"{read_only:false; response_revision:752; number_of_response:1; }","duration":"288.055657ms","start":"2023-01-14T10:22:49.807Z","end":"2023-01-14T10:22:50.095Z","steps":["trace[1524391402] 'process raft request'  (duration: 287.550274ms)"],"step_count":1}
	
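The "apply request took too long" warnings above (roughly 335ms against a 100ms expectation) land exactly when the dashboard objects are written, which points at slow disk I/O on the CI host rather than an etcd fault. A sketch for checking the member directly, assuming container 63d23c5f3319 is still running; the certificate paths are the ones etcd logged at startup:

  # Ask the running etcd container for endpoint status (latency, db size, leader)
  minikube -p functional-101929 ssh -- sudo docker exec 63d23c5f3319 \
    etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
    --cert=/var/lib/minikube/certs/etcd/server.crt \
    --key=/var/lib/minikube/certs/etcd/server.key \
    endpoint status -w table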
	* 
	* ==> kernel <==
	*  10:22:52 up 3 min,  0 users,  load average: 2.14, 1.11, 0.44
	Linux functional-101929 5.10.57 #1 SMP Thu Nov 17 20:18:45 UTC 2022 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [662249b9b6d3] <==
	* I0114 10:22:13.304178       1 controller.go:85] Starting OpenAPI controller
	I0114 10:22:13.304191       1 controller.go:85] Starting OpenAPI V3 controller
	I0114 10:22:13.304203       1 naming_controller.go:291] Starting NamingConditionController
	I0114 10:22:13.304213       1 establishing_controller.go:76] Starting EstablishingController
	I0114 10:22:13.304219       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0114 10:22:13.304226       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0114 10:22:13.304232       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0114 10:22:13.443302       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0114 10:22:13.447121       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0114 10:22:14.005194       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0114 10:22:14.274787       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0114 10:22:14.989468       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0114 10:22:14.998121       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0114 10:22:15.040536       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0114 10:22:15.089206       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0114 10:22:15.097871       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0114 10:22:33.118628       1 controller.go:616] quota admission added evaluator for: endpoints
	I0114 10:22:34.460621       1 controller.go:616] quota admission added evaluator for: replicasets.apps
	I0114 10:22:34.579008       1 alloc.go:327] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.99.79.1]
	I0114 10:22:34.598815       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0114 10:22:34.885546       1 alloc.go:327] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.106.14.29]
	I0114 10:22:49.311896       1 alloc.go:327] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.105.193.71]
	I0114 10:22:49.422241       1 controller.go:616] quota admission added evaluator for: namespaces
	I0114 10:22:50.202735       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.103.22.190]
	I0114 10:22:50.243584       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.106.48.159]
	
	* 
	* ==> kube-apiserver [fe58c94596ef] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0114 10:22:00.161619       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0114 10:22:00.814178       1 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0114 10:22:01.062119       1 logging.go:59] [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
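The connection-refused spam above comes from the previous apiserver losing its etcd while the control plane restarted, not from the instance serving this test. A quick health probe against the current apiserver, as a sketch:

  # Verbose readiness check of the live apiserver
  kubectl --context functional-101929 get --raw='/readyz?verbose'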
	* 
	* ==> kube-controller-manager [667c195cb407] <==
	* I0114 10:21:33.739022       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0114 10:21:33.744248       1 shared_informer.go:262] Caches are synced for namespace
	I0114 10:21:33.746544       1 shared_informer.go:262] Caches are synced for deployment
	I0114 10:21:33.755004       1 shared_informer.go:262] Caches are synced for ephemeral
	I0114 10:21:33.758510       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0114 10:21:33.760014       1 shared_informer.go:262] Caches are synced for stateful set
	I0114 10:21:33.761554       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0114 10:21:33.768094       1 shared_informer.go:262] Caches are synced for node
	I0114 10:21:33.768328       1 range_allocator.go:166] Starting range CIDR allocator
	I0114 10:21:33.768468       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0114 10:21:33.768751       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0114 10:21:33.769850       1 shared_informer.go:262] Caches are synced for expand
	I0114 10:21:33.773075       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0114 10:21:33.778299       1 shared_informer.go:262] Caches are synced for endpoint
	I0114 10:21:33.789045       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0114 10:21:33.791655       1 shared_informer.go:262] Caches are synced for PVC protection
	I0114 10:21:33.791752       1 shared_informer.go:262] Caches are synced for HPA
	I0114 10:21:33.812292       1 shared_informer.go:262] Caches are synced for disruption
	I0114 10:21:33.817900       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0114 10:21:33.871011       1 shared_informer.go:262] Caches are synced for resource quota
	I0114 10:21:33.878451       1 shared_informer.go:262] Caches are synced for resource quota
	I0114 10:21:33.918824       1 shared_informer.go:262] Caches are synced for attach detach
	I0114 10:21:34.307035       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 10:21:34.329122       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 10:21:34.329140       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [d6ff8dba06d1] <==
	* I0114 10:22:49.518439       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5f5c79dd8f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0114 10:22:49.526593       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" failed with pods "dashboard-metrics-scraper-5f5c79dd8f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0114 10:22:49.541388       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" failed with pods "dashboard-metrics-scraper-5f5c79dd8f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0114 10:22:49.542300       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5f5c79dd8f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0114 10:22:49.542312       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-f87d45d87 to 1"
	E0114 10:22:49.561250       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" failed with pods "dashboard-metrics-scraper-5f5c79dd8f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0114 10:22:49.561822       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5f5c79dd8f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0114 10:22:49.561836       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-f87d45d87-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0114 10:22:49.578600       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-f87d45d87" failed with pods "kubernetes-dashboard-f87d45d87-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0114 10:22:49.583274       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" failed with pods "dashboard-metrics-scraper-5f5c79dd8f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0114 10:22:49.583527       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5f5c79dd8f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0114 10:22:49.590159       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-f87d45d87" failed with pods "kubernetes-dashboard-f87d45d87-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0114 10:22:49.590462       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" failed with pods "dashboard-metrics-scraper-5f5c79dd8f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0114 10:22:49.590489       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-f87d45d87-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0114 10:22:49.590502       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5f5c79dd8f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0114 10:22:49.615048       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-f87d45d87" failed with pods "kubernetes-dashboard-f87d45d87-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0114 10:22:49.615272       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-f87d45d87-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0114 10:22:49.633985       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-f87d45d87" failed with pods "kubernetes-dashboard-f87d45d87-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0114 10:22:49.634059       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-f87d45d87-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0114 10:22:49.644361       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" failed with pods "dashboard-metrics-scraper-5f5c79dd8f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0114 10:22:49.644408       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5f5c79dd8f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0114 10:22:49.647659       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-f87d45d87-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0114 10:22:49.647844       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-f87d45d87" failed with pods "kubernetes-dashboard-f87d45d87-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0114 10:22:50.091052       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-f87d45d87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-f87d45d87-2qxk5"
	I0114 10:22:50.099661       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f5c79dd8f-qvjjj"
	
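The FailedCreate burst above is the usual startup race: the dashboard ReplicaSets reconcile before the kubernetes-dashboard ServiceAccount exists, and it self-resolves at 10:22:50 with the two SuccessfulCreate events. A sketch to confirm the namespace settled:

  # Service account, deployments and pods should all exist once the race clears
  kubectl --context functional-101929 -n kubernetes-dashboard get serviceaccounts,deployments,pods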
	* 
	* ==> kube-proxy [346374e1a810] <==
	* I0114 10:22:15.826197       1 node.go:163] Successfully retrieved node IP: 192.168.39.97
	I0114 10:22:15.826264       1 server_others.go:138] "Detected node IP" address="192.168.39.97"
	I0114 10:22:15.826287       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0114 10:22:15.877550       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0114 10:22:15.877567       1 server_others.go:206] "Using iptables Proxier"
	I0114 10:22:15.877585       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:22:15.877960       1 server.go:661] "Version info" version="v1.25.3"
	I0114 10:22:15.877992       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:22:15.886201       1 config.go:317] "Starting service config controller"
	I0114 10:22:15.886236       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0114 10:22:15.886314       1 config.go:226] "Starting endpoint slice config controller"
	I0114 10:22:15.886341       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0114 10:22:15.886931       1 config.go:444] "Starting node config controller"
	I0114 10:22:15.886962       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0114 10:22:15.986758       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0114 10:22:15.986794       1 shared_informer.go:262] Caches are synced for service config
	I0114 10:22:15.987047       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [b00430b08941] <==
	* I0114 10:21:21.363403       1 node.go:163] Successfully retrieved node IP: 192.168.39.97
	I0114 10:21:21.363940       1 server_others.go:138] "Detected node IP" address="192.168.39.97"
	I0114 10:21:21.364224       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0114 10:21:21.479608       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0114 10:21:21.479645       1 server_others.go:206] "Using iptables Proxier"
	I0114 10:21:21.479786       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:21:21.480267       1 server.go:661] "Version info" version="v1.25.3"
	I0114 10:21:21.480302       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:21:21.484201       1 config.go:317] "Starting service config controller"
	I0114 10:21:21.484212       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0114 10:21:21.484234       1 config.go:226] "Starting endpoint slice config controller"
	I0114 10:21:21.484237       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0114 10:21:21.484869       1 config.go:444] "Starting node config controller"
	I0114 10:21:21.484904       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0114 10:21:21.584488       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0114 10:21:21.584547       1 shared_informer.go:262] Caches are synced for service config
	I0114 10:21:21.585180       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [c161ff402f21] <==
	* E0114 10:21:21.272987       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0114 10:21:21.273036       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0114 10:21:21.273067       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0114 10:21:21.273117       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0114 10:21:21.273124       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0114 10:21:21.273159       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0114 10:21:21.273167       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0114 10:21:21.273230       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0114 10:21:21.273238       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0114 10:21:21.273282       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0114 10:21:21.273289       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0114 10:21:21.273324       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0114 10:21:21.273331       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0114 10:21:21.273363       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0114 10:21:21.273371       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0114 10:21:21.273416       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0114 10:21:21.273425       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0114 10:21:21.273657       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0114 10:21:21.273734       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0114 10:21:22.638787       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 10:21:46.816227       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0114 10:21:46.816386       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0114 10:21:46.816656       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0114 10:21:46.816745       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0114 10:21:46.816790       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [d2acb554058a] <==
	* I0114 10:22:10.492634       1 serving.go:348] Generated self-signed cert in-memory
	W0114 10:22:13.321482       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0114 10:22:13.321877       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0114 10:22:13.322176       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0114 10:22:13.322202       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0114 10:22:13.373767       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I0114 10:22:13.373960       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:22:13.375917       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0114 10:22:13.379159       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0114 10:22:13.379361       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 10:22:13.379181       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0114 10:22:13.480218       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
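The first scheduler (c161ff402f21) exited with "finished without leader elect" when the control plane was restarted; this replacement synced its caches at 10:22:13. A sketch for checking which instances currently hold the control-plane leases:

  # Leader-election leases for scheduler and controller-manager live in kube-system
  kubectl --context functional-101929 -n kube-system get leases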
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-01-14 10:19:40 UTC, ends at Sat 2023-01-14 10:22:52 UTC. --
	Jan 14 10:22:37 functional-101929 kubelet[7815]: I0114 10:22:37.754205    7815 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 10:22:37 functional-101929 kubelet[7815]: I0114 10:22:37.870803    7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/79f32661-00fa-4f08-8bdf-e3fccba88898-test-volume\") pod \"busybox-mount\" (UID: \"79f32661-00fa-4f08-8bdf-e3fccba88898\") " pod="default/busybox-mount"
	Jan 14 10:22:37 functional-101929 kubelet[7815]: I0114 10:22:37.870846    7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbd5m\" (UniqueName: \"kubernetes.io/projected/79f32661-00fa-4f08-8bdf-e3fccba88898-kube-api-access-kbd5m\") pod \"busybox-mount\" (UID: \"79f32661-00fa-4f08-8bdf-e3fccba88898\") " pod="default/busybox-mount"
	Jan 14 10:22:38 functional-101929 kubelet[7815]: I0114 10:22:38.971878    7815 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0c897e21caec7104cb976b08f41f4d4c391aa7b7d6bc56b4566d69244e7ccc53"
	Jan 14 10:22:39 functional-101929 kubelet[7815]: I0114 10:22:39.663727    7815 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 10:22:39 functional-101929 kubelet[7815]: I0114 10:22:39.782620    7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-04455ae2-8b2e-481a-b409-9519898adf8f\" (UniqueName: \"kubernetes.io/host-path/db28c053-2b62-4cfe-9a29-f19ea84f3788-pvc-04455ae2-8b2e-481a-b409-9519898adf8f\") pod \"sp-pod\" (UID: \"db28c053-2b62-4cfe-9a29-f19ea84f3788\") " pod="default/sp-pod"
	Jan 14 10:22:39 functional-101929 kubelet[7815]: I0114 10:22:39.782718    7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czm5t\" (UniqueName: \"kubernetes.io/projected/db28c053-2b62-4cfe-9a29-f19ea84f3788-kube-api-access-czm5t\") pod \"sp-pod\" (UID: \"db28c053-2b62-4cfe-9a29-f19ea84f3788\") " pod="default/sp-pod"
	Jan 14 10:22:44 functional-101929 kubelet[7815]: I0114 10:22:44.316212    7815 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbd5m\" (UniqueName: \"kubernetes.io/projected/79f32661-00fa-4f08-8bdf-e3fccba88898-kube-api-access-kbd5m\") pod \"79f32661-00fa-4f08-8bdf-e3fccba88898\" (UID: \"79f32661-00fa-4f08-8bdf-e3fccba88898\") "
	Jan 14 10:22:44 functional-101929 kubelet[7815]: I0114 10:22:44.316285    7815 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/79f32661-00fa-4f08-8bdf-e3fccba88898-test-volume\") pod \"79f32661-00fa-4f08-8bdf-e3fccba88898\" (UID: \"79f32661-00fa-4f08-8bdf-e3fccba88898\") "
	Jan 14 10:22:44 functional-101929 kubelet[7815]: I0114 10:22:44.316374    7815 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79f32661-00fa-4f08-8bdf-e3fccba88898-test-volume" (OuterVolumeSpecName: "test-volume") pod "79f32661-00fa-4f08-8bdf-e3fccba88898" (UID: "79f32661-00fa-4f08-8bdf-e3fccba88898"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 14 10:22:44 functional-101929 kubelet[7815]: I0114 10:22:44.320924    7815 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f32661-00fa-4f08-8bdf-e3fccba88898-kube-api-access-kbd5m" (OuterVolumeSpecName: "kube-api-access-kbd5m") pod "79f32661-00fa-4f08-8bdf-e3fccba88898" (UID: "79f32661-00fa-4f08-8bdf-e3fccba88898"). InnerVolumeSpecName "kube-api-access-kbd5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 14 10:22:44 functional-101929 kubelet[7815]: I0114 10:22:44.417284    7815 reconciler.go:399] "Volume detached for volume \"kube-api-access-kbd5m\" (UniqueName: \"kubernetes.io/projected/79f32661-00fa-4f08-8bdf-e3fccba88898-kube-api-access-kbd5m\") on node \"functional-101929\" DevicePath \"\""
	Jan 14 10:22:44 functional-101929 kubelet[7815]: I0114 10:22:44.417311    7815 reconciler.go:399] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/79f32661-00fa-4f08-8bdf-e3fccba88898-test-volume\") on node \"functional-101929\" DevicePath \"\""
	Jan 14 10:22:45 functional-101929 kubelet[7815]: I0114 10:22:45.112820    7815 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0c897e21caec7104cb976b08f41f4d4c391aa7b7d6bc56b4566d69244e7ccc53"
	Jan 14 10:22:49 functional-101929 kubelet[7815]: I0114 10:22:49.352607    7815 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 10:22:49 functional-101929 kubelet[7815]: E0114 10:22:49.352755    7815 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="79f32661-00fa-4f08-8bdf-e3fccba88898" containerName="mount-munger"
	Jan 14 10:22:49 functional-101929 kubelet[7815]: I0114 10:22:49.352801    7815 memory_manager.go:345] "RemoveStaleState removing state" podUID="79f32661-00fa-4f08-8bdf-e3fccba88898" containerName="mount-munger"
	Jan 14 10:22:49 functional-101929 kubelet[7815]: I0114 10:22:49.461952    7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq992\" (UniqueName: \"kubernetes.io/projected/d2b9a27b-e145-480f-ab22-9370cbc49fe6-kube-api-access-pq992\") pod \"mysql-596b7fcdbf-mphb5\" (UID: \"d2b9a27b-e145-480f-ab22-9370cbc49fe6\") " pod="default/mysql-596b7fcdbf-mphb5"
	Jan 14 10:22:50 functional-101929 kubelet[7815]: I0114 10:22:50.109746    7815 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 10:22:50 functional-101929 kubelet[7815]: I0114 10:22:50.112637    7815 topology_manager.go:205] "Topology Admit Handler"
	Jan 14 10:22:50 functional-101929 kubelet[7815]: I0114 10:22:50.181283    7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v824q\" (UniqueName: \"kubernetes.io/projected/f6e06a1c-11fe-4224-aa32-b29a20116240-kube-api-access-v824q\") pod \"kubernetes-dashboard-f87d45d87-2qxk5\" (UID: \"f6e06a1c-11fe-4224-aa32-b29a20116240\") " pod="kubernetes-dashboard/kubernetes-dashboard-f87d45d87-2qxk5"
	Jan 14 10:22:50 functional-101929 kubelet[7815]: I0114 10:22:50.181539    7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkv8n\" (UniqueName: \"kubernetes.io/projected/ca82315e-15db-4b1b-a5b1-8697cd50e03a-kube-api-access-pkv8n\") pod \"dashboard-metrics-scraper-5f5c79dd8f-qvjjj\" (UID: \"ca82315e-15db-4b1b-a5b1-8697cd50e03a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f-qvjjj"
	Jan 14 10:22:50 functional-101929 kubelet[7815]: I0114 10:22:50.181753    7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f6e06a1c-11fe-4224-aa32-b29a20116240-tmp-volume\") pod \"kubernetes-dashboard-f87d45d87-2qxk5\" (UID: \"f6e06a1c-11fe-4224-aa32-b29a20116240\") " pod="kubernetes-dashboard/kubernetes-dashboard-f87d45d87-2qxk5"
	Jan 14 10:22:50 functional-101929 kubelet[7815]: I0114 10:22:50.182006    7815 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ca82315e-15db-4b1b-a5b1-8697cd50e03a-tmp-volume\") pod \"dashboard-metrics-scraper-5f5c79dd8f-qvjjj\" (UID: \"ca82315e-15db-4b1b-a5b1-8697cd50e03a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f5c79dd8f-qvjjj"
	Jan 14 10:22:51 functional-101929 kubelet[7815]: I0114 10:22:51.282159    7815 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="8ab6f34547d848a8c5c277663f96e24242ca90d5f651a13e2650b30bd6766a77"
	
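Only the tail of the kubelet journal is shown; a sketch for retrieving the whole unit log for the window above, assuming the VM is still up:

  # Full kubelet journal from inside the guest
  minikube -p functional-101929 ssh -- sudo journalctl -u kubelet --no-pager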
	* 
	* ==> storage-provisioner [57fd1ef65853] <==
	* I0114 10:22:15.704086       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0114 10:22:15.719908       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0114 10:22:15.719962       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0114 10:22:33.121301       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0114 10:22:33.121452       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-101929_2e8e6ea1-4f97-4fdb-b546-e0223842f9b1!
	I0114 10:22:33.123567       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1274b56a-b9dc-4a69-9faa-d5d7b21cd8f1", APIVersion:"v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-101929_2e8e6ea1-4f97-4fdb-b546-e0223842f9b1 became leader
	I0114 10:22:33.221885       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-101929_2e8e6ea1-4f97-4fdb-b546-e0223842f9b1!
	I0114 10:22:39.455852       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0114 10:22:39.455896       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    16f5c87b-7dd8-46fa-aec7-c11544d5ac2b 366 0 2023-01-14 10:20:39 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-01-14 10:20:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-04455ae2-8b2e-481a-b409-9519898adf8f &PersistentVolumeClaim{ObjectMeta:{myclaim  default  04455ae2-8b2e-481a-b409-9519898adf8f 657 0 2023-01-14 10:22:39 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-01-14 10:22:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-01-14 10:22:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0114 10:22:39.456281       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-04455ae2-8b2e-481a-b409-9519898adf8f" provisioned
	I0114 10:22:39.456294       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0114 10:22:39.456302       1 volume_store.go:212] Trying to save persistentvolume "pvc-04455ae2-8b2e-481a-b409-9519898adf8f"
	I0114 10:22:39.458124       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"04455ae2-8b2e-481a-b409-9519898adf8f", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0114 10:22:39.500526       1 volume_store.go:219] persistentvolume "pvc-04455ae2-8b2e-481a-b409-9519898adf8f" saved
	I0114 10:22:39.500624       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"04455ae2-8b2e-481a-b409-9519898adf8f", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-04455ae2-8b2e-481a-b409-9519898adf8f
	
	* 
	* ==> storage-provisioner [a526f6daec05] <==
	* I0114 10:21:16.243186       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0114 10:21:21.359613       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0114 10:21:21.359950       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0114 10:21:38.767181       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0114 10:21:38.767626       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-101929_72dc03ae-36b7-46e4-b44b-047bf21362e6!
	I0114 10:21:38.768243       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1274b56a-b9dc-4a69-9faa-d5d7b21cd8f1", APIVersion:"v1", ResourceVersion:"513", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-101929_72dc03ae-36b7-46e4-b44b-047bf21362e6 became leader
	I0114 10:21:38.868626       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-101929_72dc03ae-36b7-46e4-b44b-047bf21362e6!
	

-- /stdout --
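For context, the leaderelection.go lines in the storage-provisioner output above come from client-go's leader-election helper: each provisioner instance competes for the kube-system/k8s.io-minikube-hostpath lock (an Endpoints-based lock, per the events), and only the holder starts the provisioner controller; the two blocks appear to be consecutive instances of the same provisioner re-acquiring the lock after a restart. A minimal sketch of the same pattern, assuming a Lease-based lock and illustrative names:

    package main

    import (
    	"context"
    	"log"
    	"os"
    	"time"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/tools/leaderelection"
    	"k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
    	// Assumes KUBECONFIG points at a reachable cluster.
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// The identity must be unique per replica, mirroring the
    	// functional-101929_<uuid> identities in the log above.
    	lock, err := resourcelock.New(
    		resourcelock.LeasesResourceLock,
    		"kube-system", "k8s.io-minikube-hostpath",
    		client.CoreV1(), client.CoordinationV1(),
    		resourcelock.ResourceLockConfig{Identity: "example-identity"},
    	)
    	if err != nil {
    		log.Fatal(err)
    	}

    	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
    		Lock:          lock,
    		LeaseDuration: 15 * time.Second,
    		RenewDeadline: 10 * time.Second,
    		RetryPeriod:   2 * time.Second,
    		Callbacks: leaderelection.LeaderCallbacks{
    			OnStartedLeading: func(ctx context.Context) {
    				// Corresponds to "successfully acquired lease ..." above:
    				// only the leader runs the controller.
    				log.Println("acquired lease; starting controller")
    			},
    			OnStoppedLeading: func() {
    				log.Println("lost lease; stopping")
    			},
    		},
    	})
    }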
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-101929 -n functional-101929

=== CONT  TestFunctional/parallel/DashboardCmd
helpers_test.go:261: (dbg) Run:  kubectl --context functional-101929 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-mount mysql-596b7fcdbf-mphb5 dashboard-metrics-scraper-5f5c79dd8f-qvjjj kubernetes-dashboard-f87d45d87-2qxk5
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-101929 describe pod busybox-mount mysql-596b7fcdbf-mphb5 dashboard-metrics-scraper-5f5c79dd8f-qvjjj kubernetes-dashboard-f87d45d87-2qxk5
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-101929 describe pod busybox-mount mysql-596b7fcdbf-mphb5 dashboard-metrics-scraper-5f5c79dd8f-qvjjj kubernetes-dashboard-f87d45d87-2qxk5: exit status 1 (94.312942ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-101929/192.168.39.97
	Start Time:       Sat, 14 Jan 2023 10:22:37 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               172.17.0.5
	IPs:
	  IP:  172.17.0.5
	Containers:
	  mount-munger:
	    Container ID:  docker://4fe6f4288a44c734a242b9c9120e6c7f2c8665fdaff3e87a560f62a660dd2492
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 14 Jan 2023 10:22:42 +0000
	      Finished:     Sat, 14 Jan 2023 10:22:42 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kbd5m (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-kbd5m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  15s   default-scheduler  Successfully assigned default/busybox-mount to functional-101929
	  Normal  Pulling    15s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     11s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.802070875s
	  Normal  Created    11s   kubelet            Created container mount-munger
	  Normal  Started    11s   kubelet            Started container mount-munger
	
	
	Name:             mysql-596b7fcdbf-mphb5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-101929/192.168.39.97
	Start Time:       Sat, 14 Jan 2023 10:22:49 +0000
	Labels:           app=mysql
	                  pod-template-hash=596b7fcdbf
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-596b7fcdbf
	Containers:
	  mysql:
	    Container ID:   
	    Image:          mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pq992 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-pq992:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/mysql-596b7fcdbf-mphb5 to functional-101929
	  Normal  Pulling    2s    kubelet            Pulling image "mysql:5.7"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5f5c79dd8f-qvjjj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-f87d45d87-2qxk5" not found

** /stderr **
helpers_test.go:277: kubectl --context functional-101929 describe pod busybox-mount mysql-596b7fcdbf-mphb5 dashboard-metrics-scraper-5f5c79dd8f-qvjjj kubernetes-dashboard-f87d45d87-2qxk5: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (4.74s)
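The post-mortem above gathers non-running pods with a field selector and then describes them; kubectl describe exits non-zero because the two dashboard pods (presumably cleaned up once the dashboard daemon was stopped) no longer existed, even though busybox-mount and mysql-596b7fcdbf-mphb5 printed fine. The equivalent of that field-selector query through client-go, as a sketch (the kubeconfig location is an assumption):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumes a kubeconfig at the default location (~/.kube/config).
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		log.Fatal(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// List pods in all namespaces whose phase is not Running, mirroring:
    	//   kubectl get po -A --field-selector=status.phase!=Running
    	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
    		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
    	}
    }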

TestMultiNode/serial/ValidateNameConflict (39.11s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-103057
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-103057-m02 --driver=kvm2 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-103057-m02 --driver=kvm2 : exit status 14 (88.747105ms)

-- stdout --
	* [multinode-103057-m02] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-4002/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-4002/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-103057-m02' is duplicated with machine name 'multinode-103057-m02' in profile 'multinode-103057'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
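This first failure is the expected half of the test: minikube rejects the profile name because the existing profile multinode-103057 already owns a machine named multinode-103057-m02. A rough sketch of that kind of uniqueness check, with illustrative types rather than minikube's actual code:

    package main

    import "fmt"

    // profile is a pared-down stand-in for a minikube profile:
    // a name plus the machine names of its nodes.
    type profile struct {
    	Name  string
    	Nodes []string
    }

    // validateName returns an error if the requested profile name is already
    // taken, either as a profile name or as a node/machine name inside an
    // existing profile (e.g. "multinode-103057-m02").
    func validateName(requested string, existing []profile) error {
    	for _, p := range existing {
    		if p.Name == requested {
    			return fmt.Errorf("profile name %q already exists", requested)
    		}
    		for _, n := range p.Nodes {
    			if n == requested {
    				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
    					requested, n, p.Name)
    			}
    		}
    	}
    	return nil
    }

    func main() {
    	existing := []profile{{
    		Name:  "multinode-103057",
    		Nodes: []string{"multinode-103057", "multinode-103057-m02"},
    	}}
    	fmt.Println(validateName("multinode-103057-m02", existing)) // collides with a machine name
    	fmt.Println(validateName("multinode-103057-m03", existing)) // nil: name is free
    }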
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-103057-m03 --driver=kvm2 
E0114 11:00:37.242952   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 11:00:40.543637   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
multinode_test.go:458: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-103057-m03 --driver=kvm2 : signal: killed (37.039174796s)

-- stdout --
	* [multinode-103057-m03] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-4002/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-4002/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the kvm2 driver based on user configuration
	* Starting control plane node multinode-103057-m03 in cluster multinode-103057-m03
	* Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...

-- /stdout --
multinode_test.go:460: failed to start profile. args "out/minikube-linux-amd64 start -p multinode-103057-m03 --driver=kvm2 " : signal: killed
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-103057
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-103057: context deadline exceeded (1.116µs)
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-103057-m03
multinode_test.go:470: (dbg) Non-zero exit: out/minikube-linux-amd64 delete -p multinode-103057-m03: context deadline exceeded (158ns)
multinode_test.go:472: failed to clean temporary profile. args "out/minikube-linux-amd64 delete -p multinode-103057-m03" : context deadline exceeded
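The "signal: killed" exit and the sub-microsecond "context deadline exceeded" failures above are consistent with the harness running each command through exec.CommandContext under the test's deadline: the in-flight minikube start is killed when the deadline fires, and every later command fails in Start before it can run. A minimal sketch of that behavior (the timeout and the sleep commands are illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// A short deadline stands in for the test's remaining time budget.
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    	defer cancel()

    	// The running child is killed when the deadline fires,
    	// so Run reports "signal: killed".
    	first := exec.CommandContext(ctx, "sleep", "10")
    	fmt.Println(first.Run())

    	// With the context already expired, Start fails immediately with
    	// context.DeadlineExceeded -- hence the 1.116µs / 158ns exits above.
    	second := exec.CommandContext(ctx, "sleep", "1")
    	fmt.Println(second.Run())
    }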
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-103057 -n multinode-103057
helpers_test.go:244: <<< TestMultiNode/serial/ValidateNameConflict FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/ValidateNameConflict]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-103057 logs -n 25: (1.182013125s)
helpers_test.go:252: TestMultiNode/serial/ValidateNameConflict logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-103057 cp multinode-103057-m02:/home/docker/cp-test.txt                       | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	|         | multinode-103057-m03:/home/docker/cp-test_multinode-103057-m02_multinode-103057-m03.txt |                      |         |         |                     |                     |
	| ssh     | multinode-103057 ssh -n                                                                 | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	|         | multinode-103057-m02 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-103057 ssh -n multinode-103057-m03 sudo cat                                   | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	|         | /home/docker/cp-test_multinode-103057-m02_multinode-103057-m03.txt                      |                      |         |         |                     |                     |
	| cp      | multinode-103057 cp testdata/cp-test.txt                                                | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	|         | multinode-103057-m03:/home/docker/cp-test.txt                                           |                      |         |         |                     |                     |
	| ssh     | multinode-103057 ssh -n                                                                 | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	|         | multinode-103057-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| cp      | multinode-103057 cp multinode-103057-m03:/home/docker/cp-test.txt                       | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile122000101/001/cp-test_multinode-103057-m03.txt          |                      |         |         |                     |                     |
	| ssh     | multinode-103057 ssh -n                                                                 | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	|         | multinode-103057-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| cp      | multinode-103057 cp multinode-103057-m03:/home/docker/cp-test.txt                       | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	|         | multinode-103057:/home/docker/cp-test_multinode-103057-m03_multinode-103057.txt         |                      |         |         |                     |                     |
	| ssh     | multinode-103057 ssh -n                                                                 | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	|         | multinode-103057-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-103057 ssh -n multinode-103057 sudo cat                                       | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	|         | /home/docker/cp-test_multinode-103057-m03_multinode-103057.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-103057 cp multinode-103057-m03:/home/docker/cp-test.txt                       | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	|         | multinode-103057-m02:/home/docker/cp-test_multinode-103057-m03_multinode-103057-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-103057 ssh -n                                                                 | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	|         | multinode-103057-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-103057 ssh -n multinode-103057-m02 sudo cat                                   | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	|         | /home/docker/cp-test_multinode-103057-m03_multinode-103057-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-103057 node stop m03                                                          | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:34 UTC |
	| node    | multinode-103057 node start                                                             | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:34 UTC | 14 Jan 23 10:35 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-103057                                                                | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:35 UTC |                     |
	| stop    | -p multinode-103057                                                                     | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:35 UTC | 14 Jan 23 10:35 UTC |
	| start   | -p multinode-103057                                                                     | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:35 UTC | 14 Jan 23 10:50 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-103057                                                                | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:50 UTC |                     |
	| node    | multinode-103057 node delete                                                            | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:50 UTC | 14 Jan 23 10:50 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-103057 stop                                                                   | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:50 UTC | 14 Jan 23 10:50 UTC |
	| start   | -p multinode-103057                                                                     | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 10:50 UTC | 14 Jan 23 11:00 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	| node    | list -p multinode-103057                                                                | multinode-103057     | jenkins | v1.28.0 | 14 Jan 23 11:00 UTC |                     |
	| start   | -p multinode-103057-m02                                                                 | multinode-103057-m02 | jenkins | v1.28.0 | 14 Jan 23 11:00 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	| start   | -p multinode-103057-m03                                                                 | multinode-103057-m03 | jenkins | v1.28.0 | 14 Jan 23 11:00 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 11:00:20
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 11:00:20.544053   23697 out.go:296] Setting OutFile to fd 1 ...
	I0114 11:00:20.544150   23697 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 11:00:20.544154   23697 out.go:309] Setting ErrFile to fd 2...
	I0114 11:00:20.544157   23697 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 11:00:20.544251   23697 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-4002/.minikube/bin
	I0114 11:00:20.544791   23697 out.go:303] Setting JSON to false
	I0114 11:00:20.545572   23697 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6167,"bootTime":1673687854,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 11:00:20.545617   23697 start.go:135] virtualization: kvm guest
	I0114 11:00:20.548088   23697 out.go:177] * [multinode-103057-m03] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 11:00:20.549698   23697 notify.go:220] Checking for updates...
	I0114 11:00:20.549718   23697 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 11:00:20.551460   23697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 11:00:20.553007   23697 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-4002/kubeconfig
	I0114 11:00:20.554516   23697 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-4002/.minikube
	I0114 11:00:20.555980   23697 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 11:00:20.557810   23697 config.go:180] Loaded profile config "multinode-103057": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 11:00:20.557854   23697 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 11:00:20.594443   23697 out.go:177] * Using the kvm2 driver based on user configuration
	I0114 11:00:20.595928   23697 start.go:294] selected driver: kvm2
	I0114 11:00:20.595939   23697 start.go:838] validating driver "kvm2" against <nil>
	I0114 11:00:20.595954   23697 start.go:849] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 11:00:20.596209   23697 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 11:00:20.596453   23697 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15642-4002/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0114 11:00:20.611550   23697 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.28.0
	I0114 11:00:20.611610   23697 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 11:00:20.612008   23697 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
	I0114 11:00:20.612099   23697 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0114 11:00:20.612111   23697 cni.go:95] Creating CNI manager for ""
	I0114 11:00:20.612115   23697 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 11:00:20.612119   23697 start_flags.go:319] config:
	{Name:multinode-103057-m03 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-103057-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 11:00:20.612201   23697 iso.go:125] acquiring lock: {Name:mkc2d7f29725a7214ea1a3adcbd594f3dbbcd423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 11:00:20.614145   23697 out.go:177] * Starting control plane node multinode-103057-m03 in cluster multinode-103057-m03
	I0114 11:00:20.615412   23697 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 11:00:20.615454   23697 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15642-4002/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 11:00:20.615462   23697 cache.go:57] Caching tarball of preloaded images
	I0114 11:00:20.615547   23697 preload.go:174] Found /home/jenkins/minikube-integration/15642-4002/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0114 11:00:20.615559   23697 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I0114 11:00:20.615636   23697 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/multinode-103057-m03/config.json ...
	I0114 11:00:20.615647   23697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/multinode-103057-m03/config.json: {Name:mkde780066d21edaf7cce3edd7b9f709e210aec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 11:00:20.615772   23697 cache.go:193] Successfully downloaded all kic artifacts
	I0114 11:00:20.615790   23697 start.go:364] acquiring machines lock for multinode-103057-m03: {Name:mkb642da8b535e95c9c2973423d696df46349e3f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0114 11:00:20.615824   23697 start.go:368] acquired machines lock for "multinode-103057-m03" in 26.343µs
	I0114 11:00:20.615832   23697 start.go:93] Provisioning new machine with config: &{Name:multinode-103057-m03 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-103057-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0114 11:00:20.615878   23697 start.go:125] createHost starting for "" (driver="kvm2")
	I0114 11:00:20.617649   23697 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0114 11:00:20.617788   23697 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0114 11:00:20.617821   23697 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 11:00:20.632297   23697 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:37765
	I0114 11:00:20.632660   23697 main.go:134] libmachine: () Calling .GetVersion
	I0114 11:00:20.633289   23697 main.go:134] libmachine: Using API Version  1
	I0114 11:00:20.633304   23697 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 11:00:20.633597   23697 main.go:134] libmachine: () Calling .GetMachineName
	I0114 11:00:20.633793   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetMachineName
	I0114 11:00:20.633977   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .DriverName
	I0114 11:00:20.634100   23697 start.go:159] libmachine.API.Create for "multinode-103057-m03" (driver="kvm2")
	I0114 11:00:20.634127   23697 client.go:168] LocalClient.Create starting
	I0114 11:00:20.634156   23697 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15642-4002/.minikube/certs/ca.pem
	I0114 11:00:20.634186   23697 main.go:134] libmachine: Decoding PEM data...
	I0114 11:00:20.634199   23697 main.go:134] libmachine: Parsing certificate...
	I0114 11:00:20.634258   23697 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15642-4002/.minikube/certs/cert.pem
	I0114 11:00:20.634271   23697 main.go:134] libmachine: Decoding PEM data...
	I0114 11:00:20.634282   23697 main.go:134] libmachine: Parsing certificate...
	I0114 11:00:20.634303   23697 main.go:134] libmachine: Running pre-create checks...
	I0114 11:00:20.634309   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .PreCreateCheck
	I0114 11:00:20.634633   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetConfigRaw
	I0114 11:00:20.635014   23697 main.go:134] libmachine: Creating machine...
	I0114 11:00:20.635022   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .Create
	I0114 11:00:20.635141   23697 main.go:134] libmachine: (multinode-103057-m03) Creating KVM machine...
	I0114 11:00:20.636389   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found existing default KVM network
	I0114 11:00:20.637291   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:20.637162   23721 network.go:215] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:47:5d:cc}}
	I0114 11:00:20.638057   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:20.637978   23721 network.go:277] reserving subnet 192.168.50.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.50.0:0xc000014600] misses:0}
	I0114 11:00:20.638087   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:20.638015   23721 network.go:210] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0114 11:00:20.643253   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | trying to create private KVM network mk-multinode-103057-m03 192.168.50.0/24...
	I0114 11:00:20.712241   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | private KVM network mk-multinode-103057-m03 192.168.50.0/24 created
	I0114 11:00:20.712275   23697 main.go:134] libmachine: (multinode-103057-m03) Setting up store path in /home/jenkins/minikube-integration/15642-4002/.minikube/machines/multinode-103057-m03 ...
	I0114 11:00:20.712298   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:20.712202   23721 common.go:116] Making disk image using store path: /home/jenkins/minikube-integration/15642-4002/.minikube
	I0114 11:00:20.712320   23697 main.go:134] libmachine: (multinode-103057-m03) Building disk image from file:///home/jenkins/minikube-integration/15642-4002/.minikube/cache/iso/amd64/minikube-v1.28.0-1668700269-15235-amd64.iso
	I0114 11:00:20.712397   23697 main.go:134] libmachine: (multinode-103057-m03) Downloading /home/jenkins/minikube-integration/15642-4002/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/15642-4002/.minikube/cache/iso/amd64/minikube-v1.28.0-1668700269-15235-amd64.iso...
	I0114 11:00:20.903077   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:20.902944   23721 common.go:123] Creating ssh key: /home/jenkins/minikube-integration/15642-4002/.minikube/machines/multinode-103057-m03/id_rsa...
	I0114 11:00:21.103166   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:21.103037   23721 common.go:129] Creating raw disk image: /home/jenkins/minikube-integration/15642-4002/.minikube/machines/multinode-103057-m03/multinode-103057-m03.rawdisk...
	I0114 11:00:21.103187   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | Writing magic tar header
	I0114 11:00:21.103199   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | Writing SSH key tar header
	I0114 11:00:21.103217   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:21.103141   23721 common.go:143] Fixing permissions on /home/jenkins/minikube-integration/15642-4002/.minikube/machines/multinode-103057-m03 ...
	I0114 11:00:21.103236   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15642-4002/.minikube/machines/multinode-103057-m03
	I0114 11:00:21.103284   23697 main.go:134] libmachine: (multinode-103057-m03) Setting executable bit set on /home/jenkins/minikube-integration/15642-4002/.minikube/machines/multinode-103057-m03 (perms=drwx------)
	I0114 11:00:21.103304   23697 main.go:134] libmachine: (multinode-103057-m03) Setting executable bit set on /home/jenkins/minikube-integration/15642-4002/.minikube/machines (perms=drwxrwxr-x)
	I0114 11:00:21.103312   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15642-4002/.minikube/machines
	I0114 11:00:21.103333   23697 main.go:134] libmachine: (multinode-103057-m03) Setting executable bit set on /home/jenkins/minikube-integration/15642-4002/.minikube (perms=drwxr-xr-x)
	I0114 11:00:21.103344   23697 main.go:134] libmachine: (multinode-103057-m03) Setting executable bit set on /home/jenkins/minikube-integration/15642-4002 (perms=drwxrwxr-x)
	I0114 11:00:21.103355   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15642-4002/.minikube
	I0114 11:00:21.103367   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/15642-4002
	I0114 11:00:21.103376   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0114 11:00:21.103386   23697 main.go:134] libmachine: (multinode-103057-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0114 11:00:21.103397   23697 main.go:134] libmachine: (multinode-103057-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0114 11:00:21.103404   23697 main.go:134] libmachine: (multinode-103057-m03) Creating domain...
	I0114 11:00:21.103412   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | Checking permissions on dir: /home/jenkins
	I0114 11:00:21.103422   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | Checking permissions on dir: /home
	I0114 11:00:21.103430   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | Skipping /home - not owner
	I0114 11:00:21.104501   23697 main.go:134] libmachine: (multinode-103057-m03) define libvirt domain using xml: 
	I0114 11:00:21.104520   23697 main.go:134] libmachine: (multinode-103057-m03) <domain type='kvm'>
	I0114 11:00:21.104527   23697 main.go:134] libmachine: (multinode-103057-m03)   <name>multinode-103057-m03</name>
	I0114 11:00:21.104531   23697 main.go:134] libmachine: (multinode-103057-m03)   <memory unit='MiB'>6000</memory>
	I0114 11:00:21.104538   23697 main.go:134] libmachine: (multinode-103057-m03)   <vcpu>2</vcpu>
	I0114 11:00:21.104543   23697 main.go:134] libmachine: (multinode-103057-m03)   <features>
	I0114 11:00:21.104548   23697 main.go:134] libmachine: (multinode-103057-m03)     <acpi/>
	I0114 11:00:21.104552   23697 main.go:134] libmachine: (multinode-103057-m03)     <apic/>
	I0114 11:00:21.104557   23697 main.go:134] libmachine: (multinode-103057-m03)     <pae/>
	I0114 11:00:21.104562   23697 main.go:134] libmachine: (multinode-103057-m03)     
	I0114 11:00:21.104566   23697 main.go:134] libmachine: (multinode-103057-m03)   </features>
	I0114 11:00:21.104571   23697 main.go:134] libmachine: (multinode-103057-m03)   <cpu mode='host-passthrough'>
	I0114 11:00:21.104576   23697 main.go:134] libmachine: (multinode-103057-m03)   
	I0114 11:00:21.104587   23697 main.go:134] libmachine: (multinode-103057-m03)   </cpu>
	I0114 11:00:21.104592   23697 main.go:134] libmachine: (multinode-103057-m03)   <os>
	I0114 11:00:21.104600   23697 main.go:134] libmachine: (multinode-103057-m03)     <type>hvm</type>
	I0114 11:00:21.104605   23697 main.go:134] libmachine: (multinode-103057-m03)     <boot dev='cdrom'/>
	I0114 11:00:21.104609   23697 main.go:134] libmachine: (multinode-103057-m03)     <boot dev='hd'/>
	I0114 11:00:21.104615   23697 main.go:134] libmachine: (multinode-103057-m03)     <bootmenu enable='no'/>
	I0114 11:00:21.104619   23697 main.go:134] libmachine: (multinode-103057-m03)   </os>
	I0114 11:00:21.104624   23697 main.go:134] libmachine: (multinode-103057-m03)   <devices>
	I0114 11:00:21.104628   23697 main.go:134] libmachine: (multinode-103057-m03)     <disk type='file' device='cdrom'>
	I0114 11:00:21.104636   23697 main.go:134] libmachine: (multinode-103057-m03)       <source file='/home/jenkins/minikube-integration/15642-4002/.minikube/machines/multinode-103057-m03/boot2docker.iso'/>
	I0114 11:00:21.104641   23697 main.go:134] libmachine: (multinode-103057-m03)       <target dev='hdc' bus='scsi'/>
	I0114 11:00:21.104646   23697 main.go:134] libmachine: (multinode-103057-m03)       <readonly/>
	I0114 11:00:21.104650   23697 main.go:134] libmachine: (multinode-103057-m03)     </disk>
	I0114 11:00:21.104655   23697 main.go:134] libmachine: (multinode-103057-m03)     <disk type='file' device='disk'>
	I0114 11:00:21.104661   23697 main.go:134] libmachine: (multinode-103057-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0114 11:00:21.104669   23697 main.go:134] libmachine: (multinode-103057-m03)       <source file='/home/jenkins/minikube-integration/15642-4002/.minikube/machines/multinode-103057-m03/multinode-103057-m03.rawdisk'/>
	I0114 11:00:21.104678   23697 main.go:134] libmachine: (multinode-103057-m03)       <target dev='hda' bus='virtio'/>
	I0114 11:00:21.104686   23697 main.go:134] libmachine: (multinode-103057-m03)     </disk>
	I0114 11:00:21.104690   23697 main.go:134] libmachine: (multinode-103057-m03)     <interface type='network'>
	I0114 11:00:21.104696   23697 main.go:134] libmachine: (multinode-103057-m03)       <source network='mk-multinode-103057-m03'/>
	I0114 11:00:21.104701   23697 main.go:134] libmachine: (multinode-103057-m03)       <model type='virtio'/>
	I0114 11:00:21.104706   23697 main.go:134] libmachine: (multinode-103057-m03)     </interface>
	I0114 11:00:21.104710   23697 main.go:134] libmachine: (multinode-103057-m03)     <interface type='network'>
	I0114 11:00:21.104716   23697 main.go:134] libmachine: (multinode-103057-m03)       <source network='default'/>
	I0114 11:00:21.104720   23697 main.go:134] libmachine: (multinode-103057-m03)       <model type='virtio'/>
	I0114 11:00:21.104725   23697 main.go:134] libmachine: (multinode-103057-m03)     </interface>
	I0114 11:00:21.104732   23697 main.go:134] libmachine: (multinode-103057-m03)     <serial type='pty'>
	I0114 11:00:21.104737   23697 main.go:134] libmachine: (multinode-103057-m03)       <target port='0'/>
	I0114 11:00:21.104741   23697 main.go:134] libmachine: (multinode-103057-m03)     </serial>
	I0114 11:00:21.104746   23697 main.go:134] libmachine: (multinode-103057-m03)     <console type='pty'>
	I0114 11:00:21.104751   23697 main.go:134] libmachine: (multinode-103057-m03)       <target type='serial' port='0'/>
	I0114 11:00:21.104756   23697 main.go:134] libmachine: (multinode-103057-m03)     </console>
	I0114 11:00:21.104760   23697 main.go:134] libmachine: (multinode-103057-m03)     <rng model='virtio'>
	I0114 11:00:21.104766   23697 main.go:134] libmachine: (multinode-103057-m03)       <backend model='random'>/dev/random</backend>
	I0114 11:00:21.104770   23697 main.go:134] libmachine: (multinode-103057-m03)     </rng>
	I0114 11:00:21.104775   23697 main.go:134] libmachine: (multinode-103057-m03)     
	I0114 11:00:21.104778   23697 main.go:134] libmachine: (multinode-103057-m03)     
	I0114 11:00:21.104784   23697 main.go:134] libmachine: (multinode-103057-m03)   </devices>
	I0114 11:00:21.104788   23697 main.go:134] libmachine: (multinode-103057-m03) </domain>
	I0114 11:00:21.104794   23697 main.go:134] libmachine: (multinode-103057-m03) 
	I0114 11:00:21.109482   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:5d:7c:34 in network default
	I0114 11:00:21.110020   23697 main.go:134] libmachine: (multinode-103057-m03) Ensuring networks are active...
	I0114 11:00:21.110031   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:21.110682   23697 main.go:134] libmachine: (multinode-103057-m03) Ensuring network default is active
	I0114 11:00:21.110935   23697 main.go:134] libmachine: (multinode-103057-m03) Ensuring network mk-multinode-103057-m03 is active
	I0114 11:00:21.111366   23697 main.go:134] libmachine: (multinode-103057-m03) Getting domain xml...
	I0114 11:00:21.112026   23697 main.go:134] libmachine: (multinode-103057-m03) Creating domain...
	I0114 11:00:22.361012   23697 main.go:134] libmachine: (multinode-103057-m03) Waiting to get IP...
	I0114 11:00:22.361841   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:22.362246   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | unable to find current IP address of domain multinode-103057-m03 in network mk-multinode-103057-m03
	I0114 11:00:22.362294   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:22.362217   23721 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0114 11:00:22.626656   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:22.627046   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | unable to find current IP address of domain multinode-103057-m03 in network mk-multinode-103057-m03
	I0114 11:00:22.627068   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:22.626991   23721 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0114 11:00:23.009591   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:23.009993   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | unable to find current IP address of domain multinode-103057-m03 in network mk-multinode-103057-m03
	I0114 11:00:23.010043   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:23.009968   23721 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0114 11:00:23.434496   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:23.434955   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | unable to find current IP address of domain multinode-103057-m03 in network mk-multinode-103057-m03
	I0114 11:00:23.434971   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:23.434923   23721 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0114 11:00:23.909336   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:23.909843   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | unable to find current IP address of domain multinode-103057-m03 in network mk-multinode-103057-m03
	I0114 11:00:23.909867   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:23.909783   23721 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0114 11:00:24.499031   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:24.499525   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | unable to find current IP address of domain multinode-103057-m03 in network mk-multinode-103057-m03
	I0114 11:00:24.499551   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:24.499463   23721 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0114 11:00:25.335739   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:25.336210   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | unable to find current IP address of domain multinode-103057-m03 in network mk-multinode-103057-m03
	I0114 11:00:25.336236   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:25.336160   23721 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0114 11:00:26.084539   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:26.084927   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | unable to find current IP address of domain multinode-103057-m03 in network mk-multinode-103057-m03
	I0114 11:00:26.084950   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:26.084881   23721 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0114 11:00:27.074095   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:27.074545   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | unable to find current IP address of domain multinode-103057-m03 in network mk-multinode-103057-m03
	I0114 11:00:27.074563   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:27.074483   23721 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0114 11:00:28.265824   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:28.266289   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | unable to find current IP address of domain multinode-103057-m03 in network mk-multinode-103057-m03
	I0114 11:00:28.266306   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:28.266239   23721 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0114 11:00:29.946215   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:29.946648   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | unable to find current IP address of domain multinode-103057-m03 in network mk-multinode-103057-m03
	I0114 11:00:29.946668   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:29.946600   23721 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0114 11:00:32.295091   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:32.295584   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | unable to find current IP address of domain multinode-103057-m03 in network mk-multinode-103057-m03
	I0114 11:00:32.295611   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:32.295541   23721 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0114 11:00:35.663392   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:35.663777   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | unable to find current IP address of domain multinode-103057-m03 in network mk-multinode-103057-m03
	I0114 11:00:35.663811   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:35.663710   23721 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0114 11:00:38.785069   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:38.785455   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | unable to find current IP address of domain multinode-103057-m03 in network mk-multinode-103057-m03
	I0114 11:00:38.785471   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | I0114 11:00:38.785413   23721 retry.go:31] will retry after 4.276119362s: waiting for machine to come up
	I0114 11:00:43.063550   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:43.063957   23697 main.go:134] libmachine: (multinode-103057-m03) Found IP for machine: 192.168.50.34
	I0114 11:00:43.063976   23697 main.go:134] libmachine: (multinode-103057-m03) Reserving static IP address...
	I0114 11:00:43.063989   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has current primary IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:43.064353   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | unable to find host DHCP lease matching {name: "multinode-103057-m03", mac: "52:54:00:67:c8:9a", ip: "192.168.50.34"} in network mk-multinode-103057-m03
	I0114 11:00:43.135928   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | Getting to WaitForSSH function...
	I0114 11:00:43.135952   23697 main.go:134] libmachine: (multinode-103057-m03) Reserved static IP address: 192.168.50.34
	I0114 11:00:43.135963   23697 main.go:134] libmachine: (multinode-103057-m03) Waiting for SSH to be available...
	I0114 11:00:43.138422   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:43.138718   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:minikube Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:43.138744   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:43.138823   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | Using SSH client type: external
	I0114 11:00:43.138846   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/15642-4002/.minikube/machines/multinode-103057-m03/id_rsa (-rw-------)
	I0114 11:00:43.138882   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15642-4002/.minikube/machines/multinode-103057-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0114 11:00:43.138894   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | About to run SSH command:
	I0114 11:00:43.138914   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | exit 0
	I0114 11:00:43.231057   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | SSH cmd err, output: <nil>: 
	I0114 11:00:43.231312   23697 main.go:134] libmachine: (multinode-103057-m03) KVM machine creation complete!
	I0114 11:00:43.231652   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetConfigRaw
	I0114 11:00:43.232150   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .DriverName
	I0114 11:00:43.232341   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .DriverName
	I0114 11:00:43.232507   23697 main.go:134] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0114 11:00:43.232520   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetState
	I0114 11:00:43.233811   23697 main.go:134] libmachine: Detecting operating system of created instance...
	I0114 11:00:43.233819   23697 main.go:134] libmachine: Waiting for SSH to be available...
	I0114 11:00:43.233834   23697 main.go:134] libmachine: Getting to WaitForSSH function...
	I0114 11:00:43.233839   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHHostname
	I0114 11:00:43.236327   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:43.236683   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:43.236702   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:43.236832   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHPort
	I0114 11:00:43.236993   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:43.237159   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:43.237268   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHUsername
	I0114 11:00:43.237408   23697 main.go:134] libmachine: Using SSH client type: native
	I0114 11:00:43.237622   23697 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0114 11:00:43.237631   23697 main.go:134] libmachine: About to run SSH command:
	exit 0
	I0114 11:00:43.346354   23697 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 11:00:43.346369   23697 main.go:134] libmachine: Detecting the provisioner...
	I0114 11:00:43.346378   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHHostname
	I0114 11:00:43.349155   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:43.349510   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:43.349537   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:43.349660   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHPort
	I0114 11:00:43.349851   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:43.349990   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:43.350158   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHUsername
	I0114 11:00:43.350295   23697 main.go:134] libmachine: Using SSH client type: native
	I0114 11:00:43.350448   23697 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0114 11:00:43.350458   23697 main.go:134] libmachine: About to run SSH command:
	cat /etc/os-release
	I0114 11:00:43.459825   23697 main.go:134] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g5c46c87-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0114 11:00:43.459909   23697 main.go:134] libmachine: found compatible host: buildroot
	I0114 11:00:43.459919   23697 main.go:134] libmachine: Provisioning with buildroot...
	I0114 11:00:43.459931   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetMachineName
	I0114 11:00:43.460194   23697 buildroot.go:166] provisioning hostname "multinode-103057-m03"
	I0114 11:00:43.460213   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetMachineName
	I0114 11:00:43.460386   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHHostname
	I0114 11:00:43.462859   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:43.463132   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:43.463157   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:43.463313   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHPort
	I0114 11:00:43.463477   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:43.463639   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:43.463759   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHUsername
	I0114 11:00:43.463907   23697 main.go:134] libmachine: Using SSH client type: native
	I0114 11:00:43.464043   23697 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0114 11:00:43.464051   23697 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-103057-m03 && echo "multinode-103057-m03" | sudo tee /etc/hostname
	I0114 11:00:43.582529   23697 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-103057-m03
	
	I0114 11:00:43.582546   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHHostname
	I0114 11:00:43.585207   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:43.585553   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:43.585582   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:43.585723   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHPort
	I0114 11:00:43.585909   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:43.586056   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:43.586187   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHUsername
	I0114 11:00:43.586343   23697 main.go:134] libmachine: Using SSH client type: native
	I0114 11:00:43.586492   23697 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0114 11:00:43.586504   23697 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-103057-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-103057-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-103057-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 11:00:43.701864   23697 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 11:00:43.701879   23697 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15642-4002/.minikube CaCertPath:/home/jenkins/minikube-integration/15642-4002/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15642-4002/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15642-4002/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15642-4002/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15642-4002/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15642-4002/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15642-4002/.minikube}
	I0114 11:00:43.701893   23697 buildroot.go:174] setting up certificates
	I0114 11:00:43.701899   23697 provision.go:83] configureAuth start
	I0114 11:00:43.701905   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetMachineName
	I0114 11:00:43.702169   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetIP
	I0114 11:00:43.704791   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:43.705109   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:43.705128   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:43.705235   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHHostname
	I0114 11:00:43.707360   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:43.707620   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:43.707644   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:43.707739   23697 provision.go:138] copyHostCerts
	I0114 11:00:43.707783   23697 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-4002/.minikube/ca.pem, removing ...
	I0114 11:00:43.707789   23697 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-4002/.minikube/ca.pem
	I0114 11:00:43.707852   23697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-4002/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15642-4002/.minikube/ca.pem (1078 bytes)
	I0114 11:00:43.707939   23697 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-4002/.minikube/cert.pem, removing ...
	I0114 11:00:43.707942   23697 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-4002/.minikube/cert.pem
	I0114 11:00:43.707964   23697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-4002/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15642-4002/.minikube/cert.pem (1123 bytes)
	I0114 11:00:43.708000   23697 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-4002/.minikube/key.pem, removing ...
	I0114 11:00:43.708003   23697 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-4002/.minikube/key.pem
	I0114 11:00:43.708020   23697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-4002/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15642-4002/.minikube/key.pem (1675 bytes)
	I0114 11:00:43.708075   23697 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15642-4002/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15642-4002/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15642-4002/.minikube/certs/ca-key.pem org=jenkins.multinode-103057-m03 san=[192.168.50.34 192.168.50.34 localhost 127.0.0.1 minikube multinode-103057-m03]
	I0114 11:00:44.051801   23697 provision.go:172] copyRemoteCerts
	I0114 11:00:44.051845   23697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 11:00:44.051866   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHHostname
	I0114 11:00:44.054486   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:44.054816   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:44.054847   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:44.055021   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHPort
	I0114 11:00:44.055198   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:44.055327   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHUsername
	I0114 11:00:44.055467   23697 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-4002/.minikube/machines/multinode-103057-m03/id_rsa Username:docker}
	I0114 11:00:44.140637   23697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4002/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0114 11:00:44.166006   23697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4002/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0114 11:00:44.186944   23697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4002/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 11:00:44.207991   23697 provision.go:86] duration metric: configureAuth took 506.081276ms
	I0114 11:00:44.208010   23697 buildroot.go:189] setting minikube options for container-runtime
	I0114 11:00:44.208201   23697 config.go:180] Loaded profile config "multinode-103057-m03": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 11:00:44.208220   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .DriverName
	I0114 11:00:44.208486   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHHostname
	I0114 11:00:44.210852   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:44.211164   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:44.211180   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:44.211387   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHPort
	I0114 11:00:44.211579   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:44.211720   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:44.211848   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHUsername
	I0114 11:00:44.211980   23697 main.go:134] libmachine: Using SSH client type: native
	I0114 11:00:44.212120   23697 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0114 11:00:44.212127   23697 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0114 11:00:44.324762   23697 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0114 11:00:44.324773   23697 buildroot.go:70] root file system type: tmpfs
	I0114 11:00:44.324926   23697 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0114 11:00:44.324943   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHHostname
	I0114 11:00:44.327725   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:44.328026   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:44.328063   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:44.328203   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHPort
	I0114 11:00:44.328374   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:44.328528   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:44.328659   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHUsername
	I0114 11:00:44.328775   23697 main.go:134] libmachine: Using SSH client type: native
	I0114 11:00:44.328922   23697 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0114 11:00:44.328973   23697 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0114 11:00:44.451387   23697 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0114 11:00:44.451406   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHHostname
	I0114 11:00:44.454058   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:44.454408   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:44.454424   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:44.454582   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHPort
	I0114 11:00:44.454767   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:44.454923   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:44.455013   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHUsername
	I0114 11:00:44.455157   23697 main.go:134] libmachine: Using SSH client type: native
	I0114 11:00:44.455272   23697 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0114 11:00:44.455283   23697 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0114 11:00:45.184034   23697 main.go:134] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0114 11:00:45.184054   23697 main.go:134] libmachine: Checking connection to Docker...
	I0114 11:00:45.184065   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetURL
	I0114 11:00:45.185462   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | Using libvirt version 6000000
	I0114 11:00:45.187894   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:45.188259   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:45.188283   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:45.188479   23697 main.go:134] libmachine: Docker is up and running!
	I0114 11:00:45.188486   23697 main.go:134] libmachine: Reticulating splines...
	I0114 11:00:45.188490   23697 client.go:171] LocalClient.Create took 24.554358524s
	I0114 11:00:45.188508   23697 start.go:167] duration metric: libmachine.API.Create for "multinode-103057-m03" took 24.554406833s
	I0114 11:00:45.188516   23697 start.go:300] post-start starting for "multinode-103057-m03" (driver="kvm2")
	I0114 11:00:45.188522   23697 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 11:00:45.188535   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .DriverName
	I0114 11:00:45.188783   23697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 11:00:45.188802   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHHostname
	I0114 11:00:45.191117   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:45.191454   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:45.191480   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:45.191644   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHPort
	I0114 11:00:45.191846   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:45.192018   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHUsername
	I0114 11:00:45.192170   23697 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-4002/.minikube/machines/multinode-103057-m03/id_rsa Username:docker}
	I0114 11:00:45.277280   23697 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 11:00:45.281429   23697 info.go:137] Remote host: Buildroot 2021.02.12
	I0114 11:00:45.281461   23697 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-4002/.minikube/addons for local assets ...
	I0114 11:00:45.281529   23697 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-4002/.minikube/files for local assets ...
	I0114 11:00:45.281615   23697 filesync.go:149] local asset: /home/jenkins/minikube-integration/15642-4002/.minikube/files/etc/ssl/certs/108512.pem -> 108512.pem in /etc/ssl/certs
	I0114 11:00:45.281696   23697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 11:00:45.290964   23697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4002/.minikube/files/etc/ssl/certs/108512.pem --> /etc/ssl/certs/108512.pem (1708 bytes)
	I0114 11:00:45.312083   23697 start.go:303] post-start completed in 123.551673ms
	I0114 11:00:45.312126   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetConfigRaw
	I0114 11:00:45.312730   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetIP
	I0114 11:00:45.315454   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:45.315804   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:45.315823   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:45.316032   23697 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/multinode-103057-m03/config.json ...
	I0114 11:00:45.316202   23697 start.go:128] duration metric: createHost completed in 24.700318034s
	I0114 11:00:45.316217   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHHostname
	I0114 11:00:45.318318   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:45.318691   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:45.318715   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:45.318855   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHPort
	I0114 11:00:45.319059   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:45.319194   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:45.319299   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHUsername
	I0114 11:00:45.319421   23697 main.go:134] libmachine: Using SSH client type: native
	I0114 11:00:45.319564   23697 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I0114 11:00:45.319570   23697 main.go:134] libmachine: About to run SSH command:
	date +%s.%N
	I0114 11:00:45.432665   23697 main.go:134] libmachine: SSH cmd err, output: <nil>: 1673694045.408082123
	
	I0114 11:00:45.432677   23697 fix.go:207] guest clock: 1673694045.408082123
	I0114 11:00:45.432687   23697 fix.go:220] Guest: 2023-01-14 11:00:45.408082123 +0000 UTC Remote: 2023-01-14 11:00:45.316206885 +0000 UTC m=+24.832570786 (delta=91.875238ms)
	I0114 11:00:45.432708   23697 fix.go:191] guest clock delta is within tolerance: 91.875238ms
	I0114 11:00:45.432734   23697 start.go:83] releasing machines lock for "multinode-103057-m03", held for 24.816904674s
	I0114 11:00:45.432769   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .DriverName
	I0114 11:00:45.432988   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetIP
	I0114 11:00:45.435355   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:45.435753   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:45.435774   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:45.435927   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .DriverName
	I0114 11:00:45.436420   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .DriverName
	I0114 11:00:45.436566   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .DriverName
	I0114 11:00:45.436636   23697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0114 11:00:45.436669   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHHostname
	I0114 11:00:45.436733   23697 ssh_runner.go:195] Run: cat /version.json
	I0114 11:00:45.436753   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHHostname
	I0114 11:00:45.439067   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:45.439891   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:45.439913   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:45.440055   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:45.440244   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHPort
	I0114 11:00:45.440428   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:45.440487   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:45.440514   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:45.440574   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHUsername
	I0114 11:00:45.440686   23697 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-4002/.minikube/machines/multinode-103057-m03/id_rsa Username:docker}
	I0114 11:00:45.440736   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHPort
	I0114 11:00:45.440829   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHKeyPath
	I0114 11:00:45.440961   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetSSHUsername
	I0114 11:00:45.441077   23697 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-4002/.minikube/machines/multinode-103057-m03/id_rsa Username:docker}
	I0114 11:00:45.547410   23697 ssh_runner.go:195] Run: systemctl --version
	I0114 11:00:45.552843   23697 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 11:00:45.552917   23697 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 11:00:45.578701   23697 docker.go:613] Got preloaded images: 
	I0114 11:00:45.578710   23697 docker.go:619] registry.k8s.io/kube-apiserver:v1.25.3 wasn't preloaded
	I0114 11:00:45.578750   23697 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0114 11:00:45.587641   23697 ssh_runner.go:195] Run: which lz4
	I0114 11:00:45.590992   23697 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0114 11:00:45.595245   23697 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0114 11:00:45.595262   23697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-4002/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (404166592 bytes)
	I0114 11:00:47.032792   23697 docker.go:577] Took 1.441815 seconds to copy over tarball
	I0114 11:00:47.032857   23697 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0114 11:00:49.353003   23697 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.320127402s)
	I0114 11:00:49.353017   23697 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0114 11:00:49.389200   23697 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0114 11:00:49.398414   23697 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0114 11:00:49.413295   23697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 11:00:49.522016   23697 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 11:00:53.103653   23697 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.581608221s)
	I0114 11:00:53.103725   23697 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0114 11:00:53.122121   23697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0114 11:00:53.135571   23697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 11:00:53.147452   23697 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0114 11:00:53.177880   23697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 11:00:53.189871   23697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 11:00:53.209322   23697 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0114 11:00:53.309079   23697 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0114 11:00:53.420424   23697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 11:00:53.538449   23697 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0114 11:00:54.896545   23697 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.358072625s)
	I0114 11:00:54.896592   23697 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0114 11:00:55.010414   23697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 11:00:55.106291   23697 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0114 11:00:55.122418   23697 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0114 11:00:55.122464   23697 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0114 11:00:55.127701   23697 start.go:472] Will wait 60s for crictl version
	I0114 11:00:55.127761   23697 ssh_runner.go:195] Run: which crictl
	I0114 11:00:55.130998   23697 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 11:00:55.256352   23697 start.go:488] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.21
	RuntimeApiVersion:  1.41.0
	I0114 11:00:55.256404   23697 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 11:00:55.283929   23697 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0114 11:00:55.313212   23697 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
	I0114 11:00:55.313248   23697 main.go:134] libmachine: (multinode-103057-m03) Calling .GetIP
	I0114 11:00:55.316152   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:55.316666   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:c8:9a", ip: ""} in network mk-multinode-103057-m03: {Iface:virbr2 ExpiryTime:2023-01-14 12:00:34 +0000 UTC Type:0 Mac:52:54:00:67:c8:9a Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:multinode-103057-m03 Clientid:01:52:54:00:67:c8:9a}
	I0114 11:00:55.316690   23697 main.go:134] libmachine: (multinode-103057-m03) DBG | domain multinode-103057-m03 has defined IP address 192.168.50.34 and MAC address 52:54:00:67:c8:9a in network mk-multinode-103057-m03
	I0114 11:00:55.316887   23697 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0114 11:00:55.320568   23697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 11:00:55.331614   23697 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 11:00:55.331654   23697 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 11:00:55.353505   23697 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0114 11:00:55.353517   23697 docker.go:543] Images already preloaded, skipping extraction
	I0114 11:00:55.353564   23697 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0114 11:00:55.374936   23697 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0114 11:00:55.374951   23697 cache_images.go:84] Images are preloaded, skipping loading
	I0114 11:00:55.374996   23697 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0114 11:00:55.407531   23697 cni.go:95] Creating CNI manager for ""
	I0114 11:00:55.407541   23697 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 11:00:55.407550   23697 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 11:00:55.407561   23697 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.34 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-103057-m03 NodeName:multinode-103057-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.34 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 11:00:55.407669   23697 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-103057-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 11:00:55.407754   23697 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-103057-m03 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.34 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-103057-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 11:00:55.407794   23697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I0114 11:00:55.417742   23697 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 11:00:55.417798   23697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 11:00:55.426906   23697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (483 bytes)
	I0114 11:00:55.442467   23697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 11:00:55.458287   23697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2045 bytes)
	I0114 11:00:55.474229   23697 ssh_runner.go:195] Run: grep 192.168.50.34	control-plane.minikube.internal$ /etc/hosts
	I0114 11:00:55.477522   23697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0114 11:00:55.487933   23697 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/multinode-103057-m03 for IP: 192.168.50.34
	I0114 11:00:55.488051   23697 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-4002/.minikube/ca.key
	I0114 11:00:55.488101   23697 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-4002/.minikube/proxy-client-ca.key
	I0114 11:00:55.488152   23697 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/multinode-103057-m03/client.key
	I0114 11:00:55.488162   23697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/multinode-103057-m03/client.crt with IP's: []
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-01-14 10:50:35 UTC, ends at Sat 2023-01-14 11:00:58 UTC. --
	Jan 14 10:51:02 multinode-103057 dockerd[839]: time="2023-01-14T10:51:02.671235632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 10:51:02 multinode-103057 dockerd[839]: time="2023-01-14T10:51:02.671407987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 10:51:02 multinode-103057 dockerd[839]: time="2023-01-14T10:51:02.671539533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 10:51:02 multinode-103057 dockerd[839]: time="2023-01-14T10:51:02.672020777Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8682b3485a4681ea38d05cb473e484afa5b245f875b129c7d5c3fe0a8b0ea4e0 pid=1984 runtime=io.containerd.runc.v2
	Jan 14 10:51:03 multinode-103057 dockerd[839]: time="2023-01-14T10:51:03.350086344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 10:51:03 multinode-103057 dockerd[839]: time="2023-01-14T10:51:03.350232761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 10:51:03 multinode-103057 dockerd[839]: time="2023-01-14T10:51:03.350260094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 10:51:03 multinode-103057 dockerd[839]: time="2023-01-14T10:51:03.352576066Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e99958f04e2d13deeff952929a4d27ec73ca8a36b9b996e980024f553c2e9fbb pid=2084 runtime=io.containerd.runc.v2
	Jan 14 10:51:03 multinode-103057 dockerd[839]: time="2023-01-14T10:51:03.781021504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 10:51:03 multinode-103057 dockerd[839]: time="2023-01-14T10:51:03.781095458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 10:51:03 multinode-103057 dockerd[839]: time="2023-01-14T10:51:03.781107759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 10:51:03 multinode-103057 dockerd[839]: time="2023-01-14T10:51:03.781352986Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b732794175f41f73c5e49c2da8ce6a9c2f00972a2462644fdd9a81ff447a9b11 pid=2228 runtime=io.containerd.runc.v2
	Jan 14 10:51:06 multinode-103057 dockerd[839]: time="2023-01-14T10:51:06.917424976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 10:51:06 multinode-103057 dockerd[839]: time="2023-01-14T10:51:06.917903681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 10:51:06 multinode-103057 dockerd[839]: time="2023-01-14T10:51:06.917970479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 10:51:06 multinode-103057 dockerd[839]: time="2023-01-14T10:51:06.918266996Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9b73ac41fcbda95a9d0aba528ea09ba243f95978b754ba3887ee000b0da414a2 pid=2351 runtime=io.containerd.runc.v2
	Jan 14 10:51:34 multinode-103057 dockerd[832]: time="2023-01-14T10:51:34.000875842Z" level=info msg="ignoring event" container=b732794175f41f73c5e49c2da8ce6a9c2f00972a2462644fdd9a81ff447a9b11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 14 10:51:34 multinode-103057 dockerd[839]: time="2023-01-14T10:51:34.001440053Z" level=info msg="shim disconnected" id=b732794175f41f73c5e49c2da8ce6a9c2f00972a2462644fdd9a81ff447a9b11
	Jan 14 10:51:34 multinode-103057 dockerd[839]: time="2023-01-14T10:51:34.001509714Z" level=warning msg="cleaning up after shim disconnected" id=b732794175f41f73c5e49c2da8ce6a9c2f00972a2462644fdd9a81ff447a9b11 namespace=moby
	Jan 14 10:51:34 multinode-103057 dockerd[839]: time="2023-01-14T10:51:34.001523246Z" level=info msg="cleaning up dead shim"
	Jan 14 10:51:34 multinode-103057 dockerd[839]: time="2023-01-14T10:51:34.012351232Z" level=warning msg="cleanup warnings time=\"2023-01-14T10:51:34Z\" level=info msg=\"starting signal loop\" namespace=moby pid=2764 runtime=io.containerd.runc.v2\n"
	Jan 14 10:51:49 multinode-103057 dockerd[839]: time="2023-01-14T10:51:49.868954973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 10:51:49 multinode-103057 dockerd[839]: time="2023-01-14T10:51:49.869064186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 10:51:49 multinode-103057 dockerd[839]: time="2023-01-14T10:51:49.869080745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 10:51:49 multinode-103057 dockerd[839]: time="2023-01-14T10:51:49.869610475Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/adeb8272cf5887ac992cef1d3bb4a466eebae94d2a5b251f46be0f15163d4cef pid=2939 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	adeb8272cf588       6e38f40d628db                                                                                         9 minutes ago       Running             storage-provisioner       4                   8682b3485a468
	9b73ac41fcbda       d6e3e26021b60                                                                                         9 minutes ago       Running             kindnet-cni               2                   e99958f04e2d1
	b732794175f41       6e38f40d628db                                                                                         9 minutes ago       Exited              storage-provisioner       3                   8682b3485a468
	bc96886d2695f       beaaf00edd38a                                                                                         9 minutes ago       Running             kube-proxy                2                   076c1db5ffed9
	42b52464b2baf       6039992312758                                                                                         10 minutes ago      Running             kube-controller-manager   2                   c9d146a517b29
	7a33768a46f3b       6d23ec0e8b87e                                                                                         10 minutes ago      Running             kube-scheduler            2                   170fc9743d2fb
	4e2ab35f1a7fa       a8a176a5d5d69                                                                                         10 minutes ago      Running             etcd                      2                   b99c9d67987a1
	c97addd47810d       0346dbd74bcb9                                                                                         10 minutes ago      Running             kube-apiserver            2                   ec8ea9403de46
	5d211af76877e       d6e3e26021b60                                                                                         24 minutes ago      Exited              kindnet-cni               1                   170d7923cde80
	976ce90631cd1       beaaf00edd38a                                                                                         24 minutes ago      Exited              kube-proxy                1                   574f85f1b033a
	2fda1c35a9668       6039992312758                                                                                         24 minutes ago      Exited              kube-controller-manager   1                   460b0083fafa0
	03572ccf16f37       a8a176a5d5d69                                                                                         24 minutes ago      Exited              etcd                      1                   b0dcb489a29f5
	edd0a3582db7f       6d23ec0e8b87e                                                                                         24 minutes ago      Exited              kube-scheduler            1                   d70788b0fa748
	0de6b5d8b8012       0346dbd74bcb9                                                                                         24 minutes ago      Exited              kube-apiserver            1                   3eda0858e9acb
	3b74777309491       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   27 minutes ago      Exited              busybox                   0                   57ad8979867fb
	c0cf3b78b3dcf       5185b96f0becf                                                                                         28 minutes ago      Exited              coredns                   0                   1bafa9987048d
	
	* 
	* ==> coredns [c0cf3b78b3dc] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 9a34f9264402cb585a9f45fa2022f72259f38c0069ff0551404dff6d373c3318d40dccb7d57503b326f0f19faa2110be407c171bae22df1ef9dd2930a017b6e6
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-103057
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-103057
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
	                    minikube.k8s.io/name=multinode-103057
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_14T10_31_51_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 10:31:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-103057
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Jan 2023 11:00:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Jan 2023 10:56:27 +0000   Sat, 14 Jan 2023 10:31:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Jan 2023 10:56:27 +0000   Sat, 14 Jan 2023 10:31:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Jan 2023 10:56:27 +0000   Sat, 14 Jan 2023 10:31:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Jan 2023 10:56:27 +0000   Sat, 14 Jan 2023 10:51:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.24
	  Hostname:    multinode-103057
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b21a41e9add4bc59a8067b780e629bf
	  System UUID:                7b21a41e-9add-4bc5-9a80-67b780e629bf
	  Boot ID:                    b4eae075-3a86-4bb1-a8f3-541eeab7754c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.21
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-kllnh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-565d847f94-hsdq9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-multinode-103057                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kindnet-gcqpb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-multinode-103057             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-multinode-103057    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-tfbrx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-multinode-103057             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 9m54s              kube-proxy       
	  Normal  Starting                 24m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node multinode-103057 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node multinode-103057 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     29m                kubelet          Node multinode-103057 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node multinode-103057 event: Registered Node multinode-103057 in Controller
	  Normal  NodeReady                28m                kubelet          Node multinode-103057 status is now: NodeReady
	  Normal  Starting                 24m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-103057 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-103057 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-103057 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                node-controller  Node multinode-103057 event: Registered Node multinode-103057 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node multinode-103057 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node multinode-103057 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node multinode-103057 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m45s              node-controller  Node multinode-103057 event: Registered Node multinode-103057 in Controller
	
	
	Name:               multinode-103057-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-103057-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 10:55:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-103057-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Jan 2023 11:00:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Jan 2023 10:56:17 +0000   Sat, 14 Jan 2023 10:55:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Jan 2023 10:56:17 +0000   Sat, 14 Jan 2023 10:55:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Jan 2023 10:56:17 +0000   Sat, 14 Jan 2023 10:55:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Jan 2023 10:56:17 +0000   Sat, 14 Jan 2023 10:56:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.160
	  Hostname:    multinode-103057-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 528f880c267040c4866119444f0ae438
	  System UUID:                528f880c-2670-40c4-8661-19444f0ae438
	  Boot ID:                    2092369e-ee1e-4476-96b9-4f81b3eaa6da
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.21
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-pr2rn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kindnet-65hvf               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-proxy-rf4rl            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 19m                  kube-proxy  
	  Normal  Starting                 27m                  kube-proxy  
	  Normal  Starting                 4m58s                kube-proxy  
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)    kubelet     Node multinode-103057-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)    kubelet     Node multinode-103057-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)    kubelet     Node multinode-103057-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)    kubelet     Node multinode-103057-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)    kubelet     Node multinode-103057-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                  kubelet     Starting kubelet.
	  Normal  NodeReady                19m                  kubelet     Node multinode-103057-m02 status is now: NodeReady
	  Normal  Starting                 5m1s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m1s (x2 over 5m1s)  kubelet     Node multinode-103057-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m1s (x2 over 5m1s)  kubelet     Node multinode-103057-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m1s (x2 over 5m1s)  kubelet     Node multinode-103057-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m1s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m41s                kubelet     Node multinode-103057-m02 status is now: NodeReady
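	The request/limit percentages in the two node tables read as request over allocatable with integer truncation; a quick check of two rows from the first table under that assumption:
	
	package main
	
	import "fmt"
	
	func main() {
		// 850m CPU requested of 2 cores (2000m) -> 42%.
		fmt.Println(850 * 100 / 2000) // 42
		// 220Mi (225280Ki) requested of 2165900Ki allocatable -> 10%.
		fmt.Println(220 * 1024 * 100 / 2165900) // 10
	}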
	
	* 
	* ==> dmesg <==
	* [Jan14 10:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.065394] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.799885] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.435870] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.125168] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.549065] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.506893] systemd-fstab-generator[513]: Ignoring "noauto" for root device
	[  +0.101054] systemd-fstab-generator[524]: Ignoring "noauto" for root device
	[  +1.054247] systemd-fstab-generator[747]: Ignoring "noauto" for root device
	[  +0.283782] systemd-fstab-generator[801]: Ignoring "noauto" for root device
	[  +0.108273] systemd-fstab-generator[812]: Ignoring "noauto" for root device
	[  +0.102645] systemd-fstab-generator[823]: Ignoring "noauto" for root device
	[  +1.554517] systemd-fstab-generator[1000]: Ignoring "noauto" for root device
	[  +0.097890] systemd-fstab-generator[1011]: Ignoring "noauto" for root device
	[  +4.836388] systemd-fstab-generator[1211]: Ignoring "noauto" for root device
	[  +0.322455] kauditd_printk_skb: 67 callbacks suppressed
	[Jan14 10:51] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [03572ccf16f3] <==
	* {"level":"info","ts":"2023-01-14T10:36:15.031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 switched to configuration voters=(6927141977540794101)"}
	{"level":"info","ts":"2023-01-14T10:36:15.031Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6c3e0d5efc74209","local-member-id":"602226ed500416f5","added-peer-id":"602226ed500416f5","added-peer-peer-urls":["https://192.168.39.24:2380"]}
	{"level":"info","ts":"2023-01-14T10:36:15.032Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6c3e0d5efc74209","local-member-id":"602226ed500416f5","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:36:15.032Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:36:15.039Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-14T10:36:15.042Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"602226ed500416f5","initial-advertise-peer-urls":["https://192.168.39.24:2380"],"listen-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-14T10:36:15.042Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-14T10:36:15.048Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2023-01-14T10:36:15.048Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2023-01-14T10:36:16.068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 is starting a new election at term 2"}
	{"level":"info","ts":"2023-01-14T10:36:16.068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-01-14T10:36:16.068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 received MsgPreVoteResp from 602226ed500416f5 at term 2"}
	{"level":"info","ts":"2023-01-14T10:36:16.068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became candidate at term 3"}
	{"level":"info","ts":"2023-01-14T10:36:16.068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 received MsgVoteResp from 602226ed500416f5 at term 3"}
	{"level":"info","ts":"2023-01-14T10:36:16.068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became leader at term 3"}
	{"level":"info","ts":"2023-01-14T10:36:16.068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 602226ed500416f5 elected leader 602226ed500416f5 at term 3"}
	{"level":"info","ts":"2023-01-14T10:36:16.070Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"602226ed500416f5","local-member-attributes":"{Name:multinode-103057 ClientURLs:[https://192.168.39.24:2379]}","request-path":"/0/members/602226ed500416f5/attributes","cluster-id":"6c3e0d5efc74209","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-14T10:36:16.070Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:36:16.071Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:36:16.073Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.24:2379"}
	{"level":"info","ts":"2023-01-14T10:36:16.074Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-14T10:36:16.074Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T10:36:16.074Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-14T10:46:16.100Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1113}
	{"level":"info","ts":"2023-01-14T10:46:16.122Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1113,"took":"20.9449ms"}
	
	* 
	* ==> etcd [4e2ab35f1a7f] <==
	* {"level":"info","ts":"2023-01-14T10:50:58.615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 is starting a new election at term 3"}
	{"level":"info","ts":"2023-01-14T10:50:58.616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-01-14T10:50:58.616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 received MsgPreVoteResp from 602226ed500416f5 at term 3"}
	{"level":"info","ts":"2023-01-14T10:50:58.616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became candidate at term 4"}
	{"level":"info","ts":"2023-01-14T10:50:58.616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 received MsgVoteResp from 602226ed500416f5 at term 4"}
	{"level":"info","ts":"2023-01-14T10:50:58.616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"602226ed500416f5 became leader at term 4"}
	{"level":"info","ts":"2023-01-14T10:50:58.616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 602226ed500416f5 elected leader 602226ed500416f5 at term 4"}
	{"level":"info","ts":"2023-01-14T10:50:58.616Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"602226ed500416f5","local-member-attributes":"{Name:multinode-103057 ClientURLs:[https://192.168.39.24:2379]}","request-path":"/0/members/602226ed500416f5/attributes","cluster-id":"6c3e0d5efc74209","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-14T10:50:58.617Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:50:58.618Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.24:2379"}
	{"level":"info","ts":"2023-01-14T10:50:58.618Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:50:58.619Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T10:50:58.619Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-14T10:50:58.621Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-14T11:00:51.809Z","caller":"traceutil/trace.go:171","msg":"trace[1821521562] linearizableReadLoop","detail":"{readStateIndex:2757; appliedIndex:2757; }","duration":"222.257777ms","start":"2023-01-14T11:00:51.587Z","end":"2023-01-14T11:00:51.809Z","steps":["trace[1821521562] 'read index received'  (duration: 222.251033ms)","trace[1821521562] 'applied index is now lower than readState.Index'  (duration: 5.991µs)"],"step_count":2}
	{"level":"warn","ts":"2023-01-14T11:00:51.938Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"350.489252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-01-14T11:00:51.938Z","caller":"traceutil/trace.go:171","msg":"trace[1343562505] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2352; }","duration":"350.834955ms","start":"2023-01-14T11:00:51.587Z","end":"2023-01-14T11:00:51.938Z","steps":["trace[1343562505] 'agreement among raft nodes before linearized reading'  (duration: 222.73508ms)","trace[1343562505] 'range keys from in-memory index tree'  (duration: 127.719815ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-14T11:00:51.938Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-14T11:00:51.587Z","time spent":"351.115297ms","remote":"127.0.0.1:51776","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-01-14T11:00:52.476Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"293.981842ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1654375428676765151 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.24\" mod_revision:2345 > success:<request_put:<key:\"/registry/masterleases/192.168.39.24\" value_size:66 lease:1654375428676765149 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.24\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-01-14T11:00:52.477Z","caller":"traceutil/trace.go:171","msg":"trace[2070048910] transaction","detail":"{read_only:false; response_revision:2354; number_of_response:1; }","duration":"388.378836ms","start":"2023-01-14T11:00:52.088Z","end":"2023-01-14T11:00:52.477Z","steps":["trace[2070048910] 'process raft request'  (duration: 93.123063ms)","trace[2070048910] 'compare'  (duration: 293.766545ms)"],"step_count":2}
	{"level":"warn","ts":"2023-01-14T11:00:52.477Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-01-14T11:00:52.088Z","time spent":"388.651364ms","remote":"127.0.0.1:51788","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.24\" mod_revision:2345 > success:<request_put:<key:\"/registry/masterleases/192.168.39.24\" value_size:66 lease:1654375428676765149 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.24\" > >"}
	{"level":"warn","ts":"2023-01-14T11:00:52.827Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"128.022165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-01-14T11:00:52.827Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"239.863044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-01-14T11:00:52.827Z","caller":"traceutil/trace.go:171","msg":"trace[932196694] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2354; }","duration":"239.988915ms","start":"2023-01-14T11:00:52.587Z","end":"2023-01-14T11:00:52.827Z","steps":["trace[932196694] 'range keys from in-memory index tree'  (duration: 239.720407ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-14T11:00:52.827Z","caller":"traceutil/trace.go:171","msg":"trace[53563789] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:2354; }","duration":"128.078521ms","start":"2023-01-14T11:00:52.699Z","end":"2023-01-14T11:00:52.827Z","steps":["trace[53563789] 'count revisions from in-memory index tree'  (duration: 127.952951ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  11:00:58 up 10 min,  0 users,  load average: 0.77, 0.61, 0.32
	Linux multinode-103057 5.10.57 #1 SMP Thu Nov 17 20:18:45 UTC 2022 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [0de6b5d8b801] <==
	* I0114 10:36:18.222158       1 naming_controller.go:291] Starting NamingConditionController
	I0114 10:36:18.222197       1 establishing_controller.go:76] Starting EstablishingController
	I0114 10:36:18.222209       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0114 10:36:18.222220       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0114 10:36:18.222233       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0114 10:36:18.222294       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0114 10:36:18.246133       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0114 10:36:18.319572       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E0114 10:36:18.342033       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0114 10:36:18.384349       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0114 10:36:18.384407       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0114 10:36:18.398018       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0114 10:36:18.398418       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I0114 10:36:18.399646       1 cache.go:39] Caches are synced for autoregister controller
	I0114 10:36:18.399677       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0114 10:36:18.400017       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0114 10:36:18.953336       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0114 10:36:19.203668       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0114 10:36:20.963317       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0114 10:36:21.118233       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0114 10:36:21.136600       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0114 10:36:21.246623       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0114 10:36:21.252882       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0114 10:36:31.317207       1 controller.go:616] quota admission added evaluator for: endpoints
	I0114 10:36:31.349521       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [c97addd47810] <==
	* I0114 10:51:00.778031       1 establishing_controller.go:76] Starting EstablishingController
	I0114 10:51:00.778097       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0114 10:51:00.778133       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0114 10:51:00.778145       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0114 10:51:00.778163       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0114 10:51:00.778188       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0114 10:51:00.778242       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0114 10:51:00.778531       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0114 10:51:00.866771       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0114 10:51:00.870338       1 cache.go:39] Caches are synced for autoregister controller
	I0114 10:51:00.872309       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0114 10:51:00.873110       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I0114 10:51:00.878225       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0114 10:51:00.880174       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0114 10:51:00.885415       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0114 10:51:00.895939       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	E0114 10:51:00.897367       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0114 10:51:01.511792       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0114 10:51:01.779229       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0114 10:51:03.287570       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I0114 10:51:03.579002       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I0114 10:51:03.594957       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I0114 10:51:03.694939       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0114 10:51:03.761448       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0114 10:52:07.361250       1 controller.go:616] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [2fda1c35a966] <==
	* I0114 10:36:31.806035       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 10:36:31.806128       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0114 10:36:31.832420       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 10:37:11.397733       1 event.go:294] "Event occurred" object="multinode-103057-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-103057-m02 status is now: NodeNotReady"
	W0114 10:37:11.398142       1 topologycache.go:199] Can't get CPU or zone information for multinode-103057-m03 node
	I0114 10:37:11.414423       1 event.go:294] "Event occurred" object="kube-system/kindnet-65hvf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:37:11.424916       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-pr2rn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:37:11.440001       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-rf4rl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:37:11.458730       1 event.go:294] "Event occurred" object="multinode-103057-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-103057-m03 status is now: NodeNotReady"
	I0114 10:37:11.466240       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-vddcv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:37:11.482036       1 event.go:294] "Event occurred" object="kube-system/kindnet-j78n2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:41:03.888617       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-jkbv6"
	W0114 10:41:07.703563       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-103057-m02" does not exist
	I0114 10:41:07.706104       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-pr2rn" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-pr2rn"
	I0114 10:41:07.715412       1 range_allocator.go:367] Set node multinode-103057-m02 PodCIDR to [10.244.1.0/24]
	W0114 10:41:17.854873       1 topologycache.go:199] Can't get CPU or zone information for multinode-103057-m02 node
	I0114 10:41:21.542558       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-pr2rn" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-pr2rn"
	W0114 10:45:42.785463       1 topologycache.go:199] Can't get CPU or zone information for multinode-103057-m02 node
	W0114 10:45:43.624937       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-103057-m03" does not exist
	W0114 10:45:43.625117       1 topologycache.go:199] Can't get CPU or zone information for multinode-103057-m02 node
	I0114 10:45:43.632111       1 range_allocator.go:367] Set node multinode-103057-m03 PodCIDR to [10.244.2.0/24]
	W0114 10:46:03.961192       1 topologycache.go:199] Can't get CPU or zone information for multinode-103057-m02 node
	I0114 10:46:06.597122       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-jkbv6" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-jkbv6"
	I0114 10:50:06.444722       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-d6ckl"
	W0114 10:50:08.455432       1 topologycache.go:199] Can't get CPU or zone information for multinode-103057-m02 node
	
	* 
	* ==> kube-controller-manager [42b52464b2ba] <==
	* I0114 10:51:13.464610       1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:51:13.467626       1 event.go:294] "Event occurred" object="kube-system/kindnet-gcqpb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:51:13.516635       1 shared_informer.go:262] Caches are synced for PV protection
	I0114 10:51:13.529380       1 shared_informer.go:262] Caches are synced for attach detach
	I0114 10:51:13.588008       1 shared_informer.go:262] Caches are synced for persistent volume
	I0114 10:51:13.588981       1 shared_informer.go:262] Caches are synced for expand
	I0114 10:51:13.936323       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 10:51:13.936344       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0114 10:51:13.963581       1 shared_informer.go:262] Caches are synced for garbage collector
	W0114 10:51:21.206759       1 topologycache.go:199] Can't get CPU or zone information for multinode-103057-m02 node
	I0114 10:51:23.426778       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94-hsdq9" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-565d847f94-hsdq9"
	I0114 10:51:23.427800       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-kllnh" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-kllnh"
	I0114 10:51:23.429297       1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0114 10:51:53.444947       1 event.go:294] "Event occurred" object="multinode-103057-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-103057-m02 status is now: NodeNotReady"
	I0114 10:51:53.452757       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-rf4rl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:51:53.467605       1 event.go:294] "Event occurred" object="kube-system/kindnet-65hvf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0114 10:51:53.479007       1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kindnet-j78n2"
	I0114 10:51:53.487753       1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-j78n2"
	I0114 10:51:53.487895       1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-vddcv"
	I0114 10:51:53.501552       1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-vddcv"
	W0114 10:55:57.432578       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-103057-m02" does not exist
	I0114 10:55:57.433077       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-pr2rn" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-pr2rn"
	I0114 10:55:57.441629       1 range_allocator.go:367] Set node multinode-103057-m02 PodCIDR to [10.244.1.0/24]
	W0114 10:56:17.944394       1 topologycache.go:199] Can't get CPU or zone information for multinode-103057-m02 node
	I0114 10:56:18.519538       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-pr2rn" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-pr2rn"
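	The gc_controller lines above show PodGC force-deleting pods orphaned by the removed m03 node. A client-go sketch of the equivalent one-off call: grace period zero mirrors the forced deletion, the pod name is taken from the log, and resolving the default kubeconfig path is an assumption:
	
	package main
	
	import (
		"context"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		grace := int64(0)
		// Same effect as PodGC's forced deletion of the orphaned pod above.
		if err := cs.CoreV1().Pods("kube-system").Delete(context.Background(),
			"kindnet-j78n2", metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
			panic(err)
		}
	}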
	
	* 
	* ==> kube-proxy [976ce90631cd] <==
	* I0114 10:36:23.651466       1 node.go:163] Successfully retrieved node IP: 192.168.39.24
	I0114 10:36:23.651990       1 server_others.go:138] "Detected node IP" address="192.168.39.24"
	I0114 10:36:23.652146       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0114 10:36:23.722652       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0114 10:36:23.722750       1 server_others.go:206] "Using iptables Proxier"
	I0114 10:36:23.723669       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:36:23.724618       1 server.go:661] "Version info" version="v1.25.3"
	I0114 10:36:23.724649       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:36:23.726851       1 config.go:317] "Starting service config controller"
	I0114 10:36:23.726868       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0114 10:36:23.726900       1 config.go:226] "Starting endpoint slice config controller"
	I0114 10:36:23.726904       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0114 10:36:23.729320       1 config.go:444] "Starting node config controller"
	I0114 10:36:23.729407       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0114 10:36:23.827534       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0114 10:36:23.827614       1 shared_informer.go:262] Caches are synced for service config
	I0114 10:36:23.830133       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [bc96886d2695] <==
	* I0114 10:51:03.558773       1 node.go:163] Successfully retrieved node IP: 192.168.39.24
	I0114 10:51:03.558848       1 server_others.go:138] "Detected node IP" address="192.168.39.24"
	I0114 10:51:03.558874       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0114 10:51:03.723624       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0114 10:51:03.723647       1 server_others.go:206] "Using iptables Proxier"
	I0114 10:51:03.724170       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:51:03.726291       1 server.go:661] "Version info" version="v1.25.3"
	I0114 10:51:03.726484       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:51:03.729813       1 config.go:317] "Starting service config controller"
	I0114 10:51:03.730031       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0114 10:51:03.730159       1 config.go:226] "Starting endpoint slice config controller"
	I0114 10:51:03.730303       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0114 10:51:03.736275       1 config.go:444] "Starting node config controller"
	I0114 10:51:03.737534       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0114 10:51:03.836705       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0114 10:51:03.840770       1 shared_informer.go:262] Caches are synced for service config
	I0114 10:51:03.841783       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [7a33768a46f3] <==
	* I0114 10:50:56.694965       1 serving.go:348] Generated self-signed cert in-memory
	W0114 10:51:00.835242       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0114 10:51:00.835643       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0114 10:51:00.835884       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0114 10:51:00.836041       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0114 10:51:00.899656       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I0114 10:51:00.903309       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:51:00.910314       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0114 10:51:00.910631       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0114 10:51:00.911247       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 10:51:00.910820       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0114 10:51:01.011445       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0114 10:56:17.963743       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"busybox-65db55d5d6-d6ckl.173a27629f107ac8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4660feb7-916f-40a9-b193-3553101b0274", ResourceVersion:"1862", Generation:0, CreationTimestamp:time.Date(2023, time.January, 14, 10, 51, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"events.k8s.io/v1", Time:time.Date(2023, time.January, 14, 10, 51, 56, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0008131a0), Subresource:""}}}, EventTime:time.Date(2023, time.January, 14, 10, 51, 56, 50491609, time.Local), Series:(*v1.EventSeries)(0xc000465500), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-multinode-103057", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"busybox-65db55d5d6-d6ckl", UID:"fd17a010-80f1-40b0-9968-585999639569", APIVersion:"v1", ResourceVersion:"1860", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeprecatedLastTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeprecatedCount:0}': 'Event "busybox-65db55d5d6-d6ckl.173a27629f107ac8" is invalid: series.count: Invalid value: "": should be at least 2' (will not retry!)
	
	* 
	* ==> kube-scheduler [edd0a3582db7] <==
	* W0114 10:36:18.339036       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0114 10:36:18.339143       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0114 10:36:18.339250       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0114 10:36:18.339359       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0114 10:36:18.339611       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0114 10:36:18.340474       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0114 10:36:18.340517       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0114 10:36:18.342976       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0114 10:36:18.343133       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0114 10:36:18.343247       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0114 10:36:18.347626       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0114 10:36:18.347999       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0114 10:36:18.349481       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0114 10:36:18.349960       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0114 10:36:18.350167       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0114 10:36:18.350329       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0114 10:36:18.351425       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0114 10:36:18.351607       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0114 10:36:18.351893       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0114 10:36:18.351999       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0114 10:36:18.352031       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0114 10:36:18.352183       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0114 10:36:18.352292       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0114 10:36:18.352429       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0114 10:36:19.619742       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-01-14 10:50:35 UTC, ends at Sat 2023-01-14 11:00:58 UTC. --
	Jan 14 11:00:13 multinode-103057 kubelet[1217]: E0114 11:00:13.800113    1217 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a2ad92b0-65a3-49bd-a1fd-03d949b65d98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-kllnh_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-kllnh" podUID=a2ad92b0-65a3-49bd-a1fd-03d949b65d98
	Jan 14 11:00:22 multinode-103057 kubelet[1217]: E0114 11:00:22.801161    1217 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-565d847f94-hsdq9_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="1bafa9987048d3abf5e0a98ac9d54d979b89e519cf311fa83415f7730530a59c"
	Jan 14 11:00:22 multinode-103057 kubelet[1217]: E0114 11:00:22.801314    1217 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:1bafa9987048d3abf5e0a98ac9d54d979b89e519cf311fa83415f7730530a59c}
	Jan 14 11:00:22 multinode-103057 kubelet[1217]: E0114 11:00:22.801418    1217 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d91fba7-20f5-424e-82d2-642355f95d9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-hsdq9_kube-system\\\" network: could not retrieve port mappings: key is not found\""
	Jan 14 11:00:22 multinode-103057 kubelet[1217]: E0114 11:00:22.802233    1217 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d91fba7-20f5-424e-82d2-642355f95d9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-hsdq9_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-565d847f94-hsdq9" podUID=8d91fba7-20f5-424e-82d2-642355f95d9e
	Jan 14 11:00:24 multinode-103057 kubelet[1217]: E0114 11:00:24.799635    1217 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-65db55d5d6-kllnh_default\" network: could not retrieve port mappings: key is not found" podSandboxID="57ad8979867fbaa8afc0f15b9484616a3ea5c38ad3434eaee78eb9bc01683354"
	Jan 14 11:00:24 multinode-103057 kubelet[1217]: E0114 11:00:24.799737    1217 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:57ad8979867fbaa8afc0f15b9484616a3ea5c38ad3434eaee78eb9bc01683354}
	Jan 14 11:00:24 multinode-103057 kubelet[1217]: E0114 11:00:24.799776    1217 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a2ad92b0-65a3-49bd-a1fd-03d949b65d98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-kllnh_default\\\" network: could not retrieve port mappings: key is not found\""
	Jan 14 11:00:24 multinode-103057 kubelet[1217]: E0114 11:00:24.799798    1217 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a2ad92b0-65a3-49bd-a1fd-03d949b65d98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-kllnh_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-kllnh" podUID=a2ad92b0-65a3-49bd-a1fd-03d949b65d98
	Jan 14 11:00:37 multinode-103057 kubelet[1217]: E0114 11:00:37.800391    1217 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-565d847f94-hsdq9_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="1bafa9987048d3abf5e0a98ac9d54d979b89e519cf311fa83415f7730530a59c"
	Jan 14 11:00:37 multinode-103057 kubelet[1217]: E0114 11:00:37.800450    1217 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:1bafa9987048d3abf5e0a98ac9d54d979b89e519cf311fa83415f7730530a59c}
	Jan 14 11:00:37 multinode-103057 kubelet[1217]: E0114 11:00:37.800479    1217 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d91fba7-20f5-424e-82d2-642355f95d9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-hsdq9_kube-system\\\" network: could not retrieve port mappings: key is not found\""
	Jan 14 11:00:37 multinode-103057 kubelet[1217]: E0114 11:00:37.800501    1217 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d91fba7-20f5-424e-82d2-642355f95d9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-hsdq9_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-565d847f94-hsdq9" podUID=8d91fba7-20f5-424e-82d2-642355f95d9e
	Jan 14 11:00:37 multinode-103057 kubelet[1217]: E0114 11:00:37.801142    1217 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-65db55d5d6-kllnh_default\" network: could not retrieve port mappings: key is not found" podSandboxID="57ad8979867fbaa8afc0f15b9484616a3ea5c38ad3434eaee78eb9bc01683354"
	Jan 14 11:00:37 multinode-103057 kubelet[1217]: E0114 11:00:37.801187    1217 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:57ad8979867fbaa8afc0f15b9484616a3ea5c38ad3434eaee78eb9bc01683354}
	Jan 14 11:00:37 multinode-103057 kubelet[1217]: E0114 11:00:37.801213    1217 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a2ad92b0-65a3-49bd-a1fd-03d949b65d98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-kllnh_default\\\" network: could not retrieve port mappings: key is not found\""
	Jan 14 11:00:37 multinode-103057 kubelet[1217]: E0114 11:00:37.801253    1217 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a2ad92b0-65a3-49bd-a1fd-03d949b65d98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-kllnh_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-kllnh" podUID=a2ad92b0-65a3-49bd-a1fd-03d949b65d98
	Jan 14 11:00:50 multinode-103057 kubelet[1217]: E0114 11:00:50.802902    1217 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-565d847f94-hsdq9_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="1bafa9987048d3abf5e0a98ac9d54d979b89e519cf311fa83415f7730530a59c"
	Jan 14 11:00:50 multinode-103057 kubelet[1217]: E0114 11:00:50.803650    1217 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:1bafa9987048d3abf5e0a98ac9d54d979b89e519cf311fa83415f7730530a59c}
	Jan 14 11:00:50 multinode-103057 kubelet[1217]: E0114 11:00:50.803906    1217 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d91fba7-20f5-424e-82d2-642355f95d9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-hsdq9_kube-system\\\" network: could not retrieve port mappings: key is not found\""
	Jan 14 11:00:50 multinode-103057 kubelet[1217]: E0114 11:00:50.804042    1217 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d91fba7-20f5-424e-82d2-642355f95d9e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-hsdq9_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-565d847f94-hsdq9" podUID=8d91fba7-20f5-424e-82d2-642355f95d9e
	Jan 14 11:00:50 multinode-103057 kubelet[1217]: E0114 11:00:50.806398    1217 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-65db55d5d6-kllnh_default\" network: could not retrieve port mappings: key is not found" podSandboxID="57ad8979867fbaa8afc0f15b9484616a3ea5c38ad3434eaee78eb9bc01683354"
	Jan 14 11:00:50 multinode-103057 kubelet[1217]: E0114 11:00:50.806513    1217 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:57ad8979867fbaa8afc0f15b9484616a3ea5c38ad3434eaee78eb9bc01683354}
	Jan 14 11:00:50 multinode-103057 kubelet[1217]: E0114 11:00:50.806605    1217 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a2ad92b0-65a3-49bd-a1fd-03d949b65d98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-kllnh_default\\\" network: could not retrieve port mappings: key is not found\""
	Jan 14 11:00:50 multinode-103057 kubelet[1217]: E0114 11:00:50.806742    1217 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a2ad92b0-65a3-49bd-a1fd-03d949b65d98\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-kllnh_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-kllnh" podUID=a2ad92b0-65a3-49bd-a1fd-03d949b65d98
	
	* 
	* ==> storage-provisioner [adeb8272cf58] <==
	* I0114 10:51:49.946384       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0114 10:51:49.964411       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0114 10:51:49.964654       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0114 10:52:07.364596       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0114 10:52:07.365465       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5de7f63f-1bf9-452f-b4f1-74e045336838", APIVersion:"v1", ResourceVersion:"1868", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-103057_302fe8b5-c2dd-4338-a066-971507a5d824 became leader
	I0114 10:52:07.365657       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-103057_302fe8b5-c2dd-4338-a066-971507a5d824!
	I0114 10:52:07.468038       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-103057_302fe8b5-c2dd-4338-a066-971507a5d824!
	
	* 
	* ==> storage-provisioner [b732794175f4] <==
	* I0114 10:51:03.977228       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0114 10:51:33.981981       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-103057 -n multinode-103057
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-103057 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-65db55d5d6-d6ckl
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/ValidateNameConflict]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context multinode-103057 describe pod busybox-65db55d5d6-d6ckl
helpers_test.go:280: (dbg) kubectl --context multinode-103057 describe pod busybox-65db55d5d6-d6ckl:

-- stdout --
	Name:             busybox-65db55d5d6-d6ckl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=65db55d5d6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-65db55d5d6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tpk52 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-tpk52:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age    From               Message
	  ----     ------            ----   ----               -------
	  Warning  FailedScheduling  9m58s  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  9m5s   default-scheduler  0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  9m3s   default-scheduler  0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  10m    default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  10m    default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:283: <<< TestMultiNode/serial/ValidateNameConflict FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/ValidateNameConflict (39.11s)
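
The repeated FailedScheduling events in the post-mortem above come from required pod anti-affinity on the busybox Deployment: the replicas repel each other per node, so once one node carries the untolerated unreachable taint there is no schedulable node left for the pending replica. Below is a minimal sketch of that kind of constraint using the k8s.io/api types and the app=busybox label visible in the describe output; the test's actual manifest may differ.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Required anti-affinity: the scheduler must place each replica on a
	// node that does not already run a pod with the same app label.
	affinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				// Scope the repulsion to a single node.
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	fmt.Printf("%+v\n", affinity)
}

With preferred (soft) anti-affinity instead, the pending replica could have doubled up on the healthy node rather than staying Pending.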

x
+
TestNetworkPlugins/group/kubenet/HairPin (59.44s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0114 11:22:41.039480   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
E0114 11:22:41.601697   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.253574427s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0114 11:22:51.279834   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.277677689s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.157865833s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.159298314s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
E0114 11:23:08.889052   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0114 11:23:11.760941   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.15219818s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.182722584s)

** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.15452159s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (59.44s)
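
HairPin exercises a pod dialing itself through its own Service name, which requires the node's bridge to loop ("hairpin") the traffic back to the originating pod; every nc attempt above timed out, so hairpin traffic never returned under kubenet. A minimal sketch of the same probe the test runs, assuming the kubenet-110752 context and the netcat Deployment/Service from the log are still present:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// From inside the netcat pod, attempt a TCP connect to the pod's own
	// Service ("netcat") on port 8080, mirroring net_test.go:238 above.
	cmd := exec.Command("kubectl", "--context", "kubenet-110752",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("hairpin probe failed: %v\n%s", err, out)
		return
	}
	fmt.Println("hairpin probe succeeded")
}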
E0114 11:28:41.241503   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/calico-110752/client.crt: no such file or directory
E0114 11:28:46.917463   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kubenet-110752/client.crt: no such file or directory
E0114 11:28:49.627715   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
E0114 11:28:53.985675   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/enable-default-cni-110752/client.crt: no such file or directory
E0114 11:28:57.141029   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/old-k8s-version-112123/client.crt: no such file or directory
E0114 11:28:57.146319   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/old-k8s-version-112123/client.crt: no such file or directory
E0114 11:28:57.156581   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/old-k8s-version-112123/client.crt: no such file or directory
E0114 11:28:57.176893   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/old-k8s-version-112123/client.crt: no such file or directory
E0114 11:28:57.217162   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/old-k8s-version-112123/client.crt: no such file or directory
E0114 11:28:57.297489   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/old-k8s-version-112123/client.crt: no such file or directory
E0114 11:28:57.457923   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/old-k8s-version-112123/client.crt: no such file or directory
E0114 11:28:57.778857   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/old-k8s-version-112123/client.crt: no such file or directory
E0114 11:28:58.419548   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/old-k8s-version-112123/client.crt: no such file or directory
E0114 11:28:59.700695   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/old-k8s-version-112123/client.crt: no such file or directory
E0114 11:29:01.722125   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/calico-110752/client.crt: no such file or directory
E0114 11:29:02.261831   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/old-k8s-version-112123/client.crt: no such file or directory
E0114 11:29:04.645695   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory
E0114 11:29:07.068978   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 11:29:07.382588   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/old-k8s-version-112123/client.crt: no such file or directory
E0114 11:29:17.227500   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
E0114 11:29:17.309728   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
E0114 11:29:17.623213   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/old-k8s-version-112123/client.crt: no such file or directory
E0114 11:29:33.459259   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:29:38.103960   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/old-k8s-version-112123/client.crt: no such file or directory
E0114 11:29:42.682582   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/calico-110752/client.crt: no such file or directory
E0114 11:29:53.837895   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory

Test pass (277/307)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 23.43
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.25.3/json-events 14.19
11 TestDownloadOnly/v1.25.3/preload-exists 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.18
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.17
19 TestBinaryMirror 0.58
20 TestOffline 97.04
22 TestAddons/Setup 150.41
24 TestAddons/parallel/Registry 18.38
25 TestAddons/parallel/Ingress 24.69
26 TestAddons/parallel/MetricsServer 5.72
27 TestAddons/parallel/HelmTiller 18.19
29 TestAddons/parallel/CSI 42.61
30 TestAddons/parallel/Headlamp 11.1
31 TestAddons/parallel/CloudSpanner 5.48
34 TestAddons/serial/GCPAuth/Namespaces 0.13
35 TestAddons/StoppedEnableDisable 13.55
36 TestCertOptions 112.05
37 TestCertExpiration 294.18
38 TestDockerFlags 131.5
39 TestForceSystemdFlag 59.23
40 TestForceSystemdEnv 92.43
41 TestKVMDriverInstallOrUpdate 5.72
45 TestErrorSpam/setup 55.94
46 TestErrorSpam/start 0.41
47 TestErrorSpam/status 0.77
48 TestErrorSpam/pause 1.26
49 TestErrorSpam/unpause 1.37
50 TestErrorSpam/stop 3.59
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 81.54
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 45.11
57 TestFunctional/serial/KubeContext 0.05
58 TestFunctional/serial/KubectlGetPods 0.08
61 TestFunctional/serial/CacheCmd/cache/add_remote 4.07
62 TestFunctional/serial/CacheCmd/cache/add_local 1.45
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
64 TestFunctional/serial/CacheCmd/cache/list 0.07
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
66 TestFunctional/serial/CacheCmd/cache/cache_reload 1.6
67 TestFunctional/serial/CacheCmd/cache/delete 0.14
68 TestFunctional/serial/MinikubeKubectlCmd 0.13
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
70 TestFunctional/serial/ExtraConfig 46.84
71 TestFunctional/serial/ComponentHealth 0.06
72 TestFunctional/serial/LogsCmd 1.17
73 TestFunctional/serial/LogsFileCmd 1.15
75 TestFunctional/parallel/ConfigCmd 0.49
77 TestFunctional/parallel/DryRun 0.35
78 TestFunctional/parallel/InternationalLanguage 0.17
79 TestFunctional/parallel/StatusCmd 1.07
82 TestFunctional/parallel/ServiceCmd 13.59
83 TestFunctional/parallel/ServiceCmdConnect 12.56
84 TestFunctional/parallel/AddonsCmd 0.17
85 TestFunctional/parallel/PersistentVolumeClaim 54.69
87 TestFunctional/parallel/SSHCmd 0.47
88 TestFunctional/parallel/CpCmd 1.01
89 TestFunctional/parallel/MySQL 32.02
90 TestFunctional/parallel/FileSync 0.25
91 TestFunctional/parallel/CertSync 1.52
95 TestFunctional/parallel/NodeLabels 0.07
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.28
99 TestFunctional/parallel/License 0.53
108 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
109 TestFunctional/parallel/ProfileCmd/profile_list 0.36
110 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
111 TestFunctional/parallel/MountCmd/any-port 10.56
112 TestFunctional/parallel/MountCmd/specific-port 2.08
113 TestFunctional/parallel/Version/short 0.11
114 TestFunctional/parallel/Version/components 0.81
115 TestFunctional/parallel/DockerEnv/bash 1.18
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.37
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.39
123 TestFunctional/parallel/ImageCommands/ImageBuild 4.76
124 TestFunctional/parallel/ImageCommands/Setup 2.04
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.63
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.79
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.17
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.88
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.47
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.42
132 TestFunctional/delete_addon-resizer_images 0.08
133 TestFunctional/delete_my-image_image 0.02
134 TestFunctional/delete_minikube_cached_images 0.02
135 TestGvisorAddon 273.72
137 TestIngressAddonLegacy/StartLegacyK8sCluster 111.6
139 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 18.27
140 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.46
141 TestIngressAddonLegacy/serial/ValidateIngressAddons 32.18
144 TestJSONOutput/start/Command 70.6
145 TestJSONOutput/start/Audit 0
147 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/pause/Command 0.64
151 TestJSONOutput/pause/Audit 0
153 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/unpause/Command 0.61
157 TestJSONOutput/unpause/Audit 0
159 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
162 TestJSONOutput/stop/Command 13.13
163 TestJSONOutput/stop/Audit 0
165 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
167 TestErrorJSONOutput 0.26
172 TestMainNoArgs 0.07
173 TestMinikubeProfile 110.28
176 TestMountStart/serial/StartWithMountFirst 28.01
177 TestMountStart/serial/VerifyMountFirst 0.43
178 TestMountStart/serial/StartWithMountSecond 30.74
179 TestMountStart/serial/VerifyMountSecond 0.43
180 TestMountStart/serial/DeleteFirst 0.9
181 TestMountStart/serial/VerifyMountPostDelete 0.42
182 TestMountStart/serial/Stop 2.1
183 TestMountStart/serial/RestartStopped 23.09
184 TestMountStart/serial/VerifyMountPostStop 0.43
187 TestMultiNode/serial/FreshStart2Nodes 158.13
188 TestMultiNode/serial/DeployApp2Nodes 5.37
189 TestMultiNode/serial/PingHostFrom2Pods 0.95
190 TestMultiNode/serial/AddNode 60.82
191 TestMultiNode/serial/ProfileList 0.24
192 TestMultiNode/serial/CopyFile 8.11
193 TestMultiNode/serial/StopNode 3.99
194 TestMultiNode/serial/StartAfterStop 31.39
195 TestMultiNode/serial/RestartKeepsNodes 879.48
196 TestMultiNode/serial/DeleteNode 3.84
197 TestMultiNode/serial/StopMultiNode 15.37
198 TestMultiNode/serial/RestartMultiNode 595.13
204 TestPreload 195.5
206 TestScheduledStopUnix 125.18
207 TestSkaffold 88.91
210 TestRunningBinaryUpgrade 198.46
212 TestKubernetesUpgrade 199.17
225 TestStoppedBinaryUpgrade/Setup 1.61
226 TestStoppedBinaryUpgrade/Upgrade 178.42
228 TestPause/serial/Start 83.34
230 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
231 TestNoKubernetes/serial/StartWithK8s 68.75
239 TestNetworkPlugins/group/auto/Start 110.64
240 TestPause/serial/SecondStartNoReconfiguration 79.03
241 TestNoKubernetes/serial/StartWithStopK8s 46.67
242 TestStoppedBinaryUpgrade/MinikubeLogs 1.32
243 TestNetworkPlugins/group/kindnet/Start 101.15
244 TestNoKubernetes/serial/Start 39.39
245 TestPause/serial/Pause 1.16
246 TestPause/serial/VerifyStatus 0.34
247 TestPause/serial/Unpause 0.81
248 TestPause/serial/PauseAgain 1.02
249 TestPause/serial/DeletePaused 1.31
250 TestPause/serial/VerifyDeletedResources 0.63
251 TestNetworkPlugins/group/cilium/Start 128.6
252 TestNetworkPlugins/group/auto/KubeletFlags 0.24
253 TestNetworkPlugins/group/auto/NetCatPod 13.35
254 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
255 TestNoKubernetes/serial/ProfileList 2.02
256 TestNoKubernetes/serial/Stop 2.13
257 TestNoKubernetes/serial/StartNoArgs 41.48
258 TestNetworkPlugins/group/auto/DNS 0.21
259 TestNetworkPlugins/group/auto/Localhost 0.17
260 TestNetworkPlugins/group/auto/HairPin 5.18
261 TestNetworkPlugins/group/calico/Start 373.97
262 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
263 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
264 TestNetworkPlugins/group/kindnet/NetCatPod 13.41
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
266 TestNetworkPlugins/group/custom-flannel/Start 112.41
267 TestNetworkPlugins/group/kindnet/DNS 0.19
268 TestNetworkPlugins/group/kindnet/Localhost 0.16
269 TestNetworkPlugins/group/kindnet/HairPin 0.18
270 TestNetworkPlugins/group/false/Start 122.14
271 TestNetworkPlugins/group/cilium/ControllerPod 5.04
272 TestNetworkPlugins/group/cilium/KubeletFlags 0.35
273 TestNetworkPlugins/group/cilium/NetCatPod 18.56
274 TestNetworkPlugins/group/cilium/DNS 0.26
275 TestNetworkPlugins/group/cilium/Localhost 0.23
276 TestNetworkPlugins/group/cilium/HairPin 0.21
277 TestNetworkPlugins/group/flannel/Start 83.48
278 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
279 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.35
280 TestNetworkPlugins/group/custom-flannel/DNS 0.29
281 TestNetworkPlugins/group/custom-flannel/Localhost 0.23
282 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
283 TestNetworkPlugins/group/enable-default-cni/Start 82.03
284 TestNetworkPlugins/group/false/KubeletFlags 0.28
285 TestNetworkPlugins/group/false/NetCatPod 12.39
286 TestNetworkPlugins/group/false/DNS 0.19
287 TestNetworkPlugins/group/false/Localhost 0.17
288 TestNetworkPlugins/group/false/HairPin 5.17
289 TestNetworkPlugins/group/bridge/Start 80.15
290 TestNetworkPlugins/group/flannel/ControllerPod 5.02
291 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
292 TestNetworkPlugins/group/flannel/NetCatPod 16.34
293 TestNetworkPlugins/group/flannel/DNS 0.27
294 TestNetworkPlugins/group/flannel/Localhost 0.22
295 TestNetworkPlugins/group/flannel/HairPin 0.17
296 TestNetworkPlugins/group/kubenet/Start 81.74
297 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
298 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.31
299 TestNetworkPlugins/group/enable-default-cni/DNS 0.29
300 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
301 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
303 TestStartStop/group/old-k8s-version/serial/FirstStart 152.96
304 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
305 TestNetworkPlugins/group/bridge/NetCatPod 15.43
306 TestNetworkPlugins/group/bridge/DNS 0.19
307 TestNetworkPlugins/group/bridge/Localhost 0.15
308 TestNetworkPlugins/group/bridge/HairPin 0.15
310 TestStartStop/group/no-preload/serial/FirstStart 102.02
311 TestNetworkPlugins/group/kubenet/KubeletFlags 0.26
312 TestNetworkPlugins/group/kubenet/NetCatPod 15.4
313 TestNetworkPlugins/group/kubenet/DNS 0.21
314 TestNetworkPlugins/group/kubenet/Localhost 0.21
316 TestNetworkPlugins/group/calico/ControllerPod 5.02
317 TestNetworkPlugins/group/calico/KubeletFlags 0.24
318 TestNetworkPlugins/group/calico/NetCatPod 13.4
319 TestStartStop/group/no-preload/serial/DeployApp 9.5
320 TestNetworkPlugins/group/calico/DNS 0.24
321 TestNetworkPlugins/group/calico/Localhost 0.16
322 TestNetworkPlugins/group/calico/HairPin 0.17
324 TestStartStop/group/embed-certs/serial/FirstStart 77.28
325 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
326 TestStartStop/group/no-preload/serial/Stop 13.37
328 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 102.19
329 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
330 TestStartStop/group/no-preload/serial/SecondStart 359.79
331 TestStartStop/group/old-k8s-version/serial/DeployApp 10.46
332 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.03
333 TestStartStop/group/old-k8s-version/serial/Stop 4.57
334 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 1.28
335 TestStartStop/group/old-k8s-version/serial/SecondStart 111.3
336 TestStartStop/group/embed-certs/serial/DeployApp 11.4
337 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
338 TestStartStop/group/embed-certs/serial/Stop 4.13
339 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
340 TestStartStop/group/embed-certs/serial/SecondStart 333.3
341 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.45
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.89
343 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.22
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
345 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 328.18
346 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
347 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
348 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
349 TestStartStop/group/old-k8s-version/serial/Pause 2.77
351 TestStartStop/group/newest-cni/serial/FirstStart 78.04
352 TestStartStop/group/newest-cni/serial/DeployApp 0
353 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.87
354 TestStartStop/group/newest-cni/serial/Stop 4.12
355 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
356 TestStartStop/group/newest-cni/serial/SecondStart 40.39
357 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
358 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
359 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
360 TestStartStop/group/newest-cni/serial/Pause 2.55
361 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.02
362 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
363 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
364 TestStartStop/group/no-preload/serial/Pause 2.9
365 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.02
366 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
367 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
368 TestStartStop/group/embed-certs/serial/Pause 2.54
369 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.02
370 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
371 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
372 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.52
x
+
TestDownloadOnly/v1.16.0/json-events (23.43s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-100557 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-100557 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (23.431033366s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (23.43s)

x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-100557
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-100557: exit status 85 (85.661487ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-100557 | jenkins | v1.28.0 | 14 Jan 23 10:05 UTC |          |
	|         | -p download-only-100557        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:05:57
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:05:57.833299   10863 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:05:57.833397   10863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:05:57.833406   10863 out.go:309] Setting ErrFile to fd 2...
	I0114 10:05:57.833410   10863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:05:57.833508   10863 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-4002/.minikube/bin
	W0114 10:05:57.833617   10863 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15642-4002/.minikube/config/config.json: open /home/jenkins/minikube-integration/15642-4002/.minikube/config/config.json: no such file or directory
	I0114 10:05:57.834122   10863 out.go:303] Setting JSON to true
	I0114 10:05:57.834984   10863 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2904,"bootTime":1673687854,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:05:57.835038   10863 start.go:135] virtualization: kvm guest
	I0114 10:05:57.837583   10863 out.go:97] [download-only-100557] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:05:57.837662   10863 notify.go:220] Checking for updates...
	I0114 10:05:57.839261   10863 out.go:169] MINIKUBE_LOCATION=15642
	W0114 10:05:57.837683   10863 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15642-4002/.minikube/cache/preloaded-tarball: no such file or directory
	I0114 10:05:57.842051   10863 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:05:57.843564   10863 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15642-4002/kubeconfig
	I0114 10:05:57.845101   10863 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-4002/.minikube
	I0114 10:05:57.846735   10863 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0114 10:05:57.849706   10863 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0114 10:05:57.849894   10863 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:05:57.968121   10863 out.go:97] Using the kvm2 driver based on user configuration
	I0114 10:05:57.968147   10863 start.go:294] selected driver: kvm2
	I0114 10:05:57.968161   10863 start.go:838] validating driver "kvm2" against <nil>
	I0114 10:05:57.968442   10863 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:05:57.968723   10863 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15642-4002/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0114 10:05:57.982892   10863 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.28.0
	I0114 10:05:57.982962   10863 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 10:05:57.983387   10863 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
	I0114 10:05:57.983508   10863 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0114 10:05:57.983538   10863 cni.go:95] Creating CNI manager for ""
	I0114 10:05:57.983544   10863 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 10:05:57.983555   10863 start_flags.go:319] config:
	{Name:download-only-100557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-100557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:05:57.983720   10863 iso.go:125] acquiring lock: {Name:mkc2d7f29725a7214ea1a3adcbd594f3dbbcd423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:05:57.985737   10863 out.go:97] Downloading VM boot image ...
	I0114 10:05:57.985761   10863 download.go:101] Downloading: https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso.sha256 -> /home/jenkins/minikube-integration/15642-4002/.minikube/cache/iso/amd64/minikube-v1.28.0-1668700269-15235-amd64.iso
	I0114 10:06:07.445804   10863 out.go:97] Starting control plane node download-only-100557 in cluster download-only-100557
	I0114 10:06:07.445822   10863 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 10:06:07.536229   10863 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0114 10:06:07.536259   10863 cache.go:57] Caching tarball of preloaded images
	I0114 10:06:07.536463   10863 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0114 10:06:07.538491   10863 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0114 10:06:07.538513   10863 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0114 10:06:07.636053   10863 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/15642-4002/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-100557"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
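Note on the "exit status 85" above: "minikube logs" is expected to fail for a download-only profile, since no control plane node was ever created, and the test treats the specific non-zero exit as a pass. A minimal Go sketch of asserting a particular exit code with os/exec; this is illustrative only, not the harness's actual helper:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // expectExitCode runs a command and reports whether it exited with
    // exactly the wanted status code.
    func expectExitCode(want int, name string, args ...string) error {
        err := exec.Command(name, args...).Run()
        if exitErr, ok := err.(*exec.ExitError); ok {
            if got := exitErr.ExitCode(); got != want {
                return fmt.Errorf("exit code = %d, want %d", got, want)
            }
            return nil // failed with exactly the expected status
        }
        if err == nil && want != 0 {
            return fmt.Errorf("command succeeded, want exit code %d", want)
        }
        return err // start failure or an unexpected error kind
    }

    func main() {
        // Hypothetical usage mirroring the LogsDuration check above.
        err := expectExitCode(85, "out/minikube-linux-amd64", "logs", "-p", "download-only-100557")
        fmt.Println(err)
    }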

TestDownloadOnly/v1.25.3/json-events (14.19s)

=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-100557 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-100557 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=kvm2 : (14.193985805s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (14.19s)

TestDownloadOnly/v1.25.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

TestDownloadOnly/v1.25.3/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-100557
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-100557: exit status 85 (84.685381ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-100557 | jenkins | v1.28.0 | 14 Jan 23 10:05 UTC |          |
	|         | -p download-only-100557        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-100557 | jenkins | v1.28.0 | 14 Jan 23 10:06 UTC |          |
	|         | -p download-only-100557        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:06:21
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:06:21.352913   10899 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:06:21.353015   10899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:06:21.353025   10899 out.go:309] Setting ErrFile to fd 2...
	I0114 10:06:21.353029   10899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:06:21.353130   10899 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-4002/.minikube/bin
	W0114 10:06:21.353229   10899 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15642-4002/.minikube/config/config.json: open /home/jenkins/minikube-integration/15642-4002/.minikube/config/config.json: no such file or directory
	I0114 10:06:21.353612   10899 out.go:303] Setting JSON to true
	I0114 10:06:21.354344   10899 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2928,"bootTime":1673687854,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:06:21.354399   10899 start.go:135] virtualization: kvm guest
	I0114 10:06:21.356990   10899 out.go:97] [download-only-100557] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:06:21.358771   10899 out.go:169] MINIKUBE_LOCATION=15642
	I0114 10:06:21.357146   10899 notify.go:220] Checking for updates...
	I0114 10:06:21.361869   10899 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:06:21.363360   10899 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15642-4002/kubeconfig
	I0114 10:06:21.364805   10899 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-4002/.minikube
	I0114 10:06:21.366295   10899 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0114 10:06:21.369020   10899 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0114 10:06:21.369428   10899 config.go:180] Loaded profile config "download-only-100557": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0114 10:06:21.369473   10899 start.go:746] api.Load failed for download-only-100557: filestore "download-only-100557": Docker machine "download-only-100557" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0114 10:06:21.369545   10899 driver.go:365] Setting default libvirt URI to qemu:///system
	W0114 10:06:21.369584   10899 start.go:746] api.Load failed for download-only-100557: filestore "download-only-100557": Docker machine "download-only-100557" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0114 10:06:21.401992   10899 out.go:97] Using the kvm2 driver based on existing profile
	I0114 10:06:21.402030   10899 start.go:294] selected driver: kvm2
	I0114 10:06:21.402043   10899 start.go:838] validating driver "kvm2" against &{Name:download-only-100557 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-100557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:06:21.402387   10899 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:06:21.402591   10899 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15642-4002/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0114 10:06:21.416887   10899 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.28.0
	I0114 10:06:21.417740   10899 cni.go:95] Creating CNI manager for ""
	I0114 10:06:21.417758   10899 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0114 10:06:21.417773   10899 start_flags.go:319] config:
	{Name:download-only-100557 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-100557 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:06:21.417918   10899 iso.go:125] acquiring lock: {Name:mkc2d7f29725a7214ea1a3adcbd594f3dbbcd423 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:06:21.419962   10899 out.go:97] Starting control plane node download-only-100557 in cluster download-only-100557
	I0114 10:06:21.419978   10899 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 10:06:21.515404   10899 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I0114 10:06:21.515432   10899 cache.go:57] Caching tarball of preloaded images
	I0114 10:06:21.515619   10899 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I0114 10:06:21.517768   10899 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I0114 10:06:21.517794   10899 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I0114 10:06:21.615309   10899 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4?checksum=md5:624cb874287e7e3d793b79e4205a7f98 -> /home/jenkins/minikube-integration/15642-4002/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-100557"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.09s)
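The "checksum=md5:..." query fragment on the preload URL above tells the downloader which digest to verify the tarball against once it lands in the cache. A minimal sketch of that verify-after-download step, assuming a plain HTTP GET (the URL and digest are copied from the log line; the helper itself is illustrative):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // downloadWithMD5 fetches url into dest and fails if the payload's
    // MD5 digest does not match wantHex.
    func downloadWithMD5(url, dest, wantHex string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        h := md5.New()
        // Hash the bytes as they are written to disk.
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
        }
        return nil
    }

    func main() {
        err := downloadWithMD5(
            "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4",
            "/tmp/preload.tar.lz4",
            "624cb874287e7e3d793b79e4205a7f98",
        )
        fmt.Println(err)
    }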

TestDownloadOnly/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.18s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-100557
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-100636 --alsologtostderr --binary-mirror http://127.0.0.1:44659 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-100636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-100636
--- PASS: TestBinaryMirror (0.58s)
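TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:44659 above) that stands in for the public release bucket when kubectl, kubeadm, and kubelet are fetched. A sketch of the kind of throwaway file server such a flag can target, assuming the binaries are already cached in a local directory (the directory name here is hypothetical):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve a local cache of kubectl/kubelet/kubeadm binaries so that
        // "minikube start --binary-mirror http://127.0.0.1:44659" can
        // download from it instead of the public bucket.
        fs := http.FileServer(http.Dir("./binary-cache")) // hypothetical cache dir
        log.Fatal(http.ListenAndServe("127.0.0.1:44659", fs))
    }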

TestOffline (97.04s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-110752 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-110752 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m35.987201009s)
helpers_test.go:175: Cleaning up "offline-docker-110752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-110752
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-110752: (1.053726084s)
--- PASS: TestOffline (97.04s)

TestAddons/Setup (150.41s)

=== RUN   TestAddons/Setup
addons_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p addons-100636 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p addons-100636 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m30.407181095s)
--- PASS: TestAddons/Setup (150.41s)

TestAddons/parallel/Registry (18.38s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:287: registry stabilized in 15.894374ms
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-j7f67" [fff636dd-92f8-44e5-8326-d475369a1cbd] Running
addons_test.go:289: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.016762359s
addons_test.go:292: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-2cttz" [844e17c0-ff58-49ce-878d-5f8adbc7a741] Running
addons_test.go:292: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008540081s
addons_test.go:297: (dbg) Run:  kubectl --context addons-100636 delete po -l run=registry-test --now
addons_test.go:302: (dbg) Run:  kubectl --context addons-100636 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:302: (dbg) Done: kubectl --context addons-100636 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.540073815s)
addons_test.go:316: (dbg) Run:  out/minikube-linux-amd64 -p addons-100636 ip
2023/01/14 10:09:24 [DEBUG] GET http://192.168.39.184:5000
addons_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p addons-100636 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.38s)
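The registry check above runs its "wget --spider" probe from a disposable busybox pod because registry.kube-system.svc.cluster.local only resolves inside the cluster; the DEBUG line then hits the same registry from outside via the node IP on port 5000. A sketch of that external reachability probe (the address is taken from the DEBUG line; the probe itself is illustrative):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        // The addon also exposes the registry on the node IP, port 5000,
        // as in the "GET http://192.168.39.184:5000" line above.
        resp, err := client.Head("http://192.168.39.184:5000")
        if err != nil {
            fmt.Println("registry not reachable:", err)
            return
        }
        resp.Body.Close()
        fmt.Println("registry responded:", resp.Status)
    }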

TestAddons/parallel/Ingress (24.69s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:169: (dbg) Run:  kubectl --context addons-100636 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:189: (dbg) Run:  kubectl --context addons-100636 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:202: (dbg) Run:  kubectl --context addons-100636 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [22de61b7-4fa4-4b50-8ecd-f73b8ec9bfa3] Pending
helpers_test.go:342: "nginx" [22de61b7-4fa4-4b50-8ecd-f73b8ec9bfa3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [22de61b7-4fa4-4b50-8ecd-f73b8ec9bfa3] Running
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.012734268s
addons_test.go:219: (dbg) Run:  out/minikube-linux-amd64 -p addons-100636 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:243: (dbg) Run:  kubectl --context addons-100636 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-100636 ip
addons_test.go:254: (dbg) Run:  nslookup hello-john.test 192.168.39.184
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p addons-100636 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p addons-100636 addons disable ingress-dns --alsologtostderr -v=1: (1.945057721s)
addons_test.go:268: (dbg) Run:  out/minikube-linux-amd64 -p addons-100636 addons disable ingress --alsologtostderr -v=1
addons_test.go:268: (dbg) Done: out/minikube-linux-amd64 -p addons-100636 addons disable ingress --alsologtostderr -v=1: (7.715230906s)
--- PASS: TestAddons/parallel/Ingress (24.69s)

TestAddons/parallel/MetricsServer (5.72s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:364: metrics-server stabilized in 1.911351ms
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-56c6cfbdd9-z6fll" [6c24c222-9fe2-4f17-b7b6-3b029e2b5241] Running
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010099922s
addons_test.go:372: (dbg) Run:  kubectl --context addons-100636 top pods -n kube-system
addons_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p addons-100636 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.72s)

TestAddons/parallel/HelmTiller (18.19s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:413: tiller-deploy stabilized in 3.176925ms
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-vlrvn" [c5093407-ad47-4f86-b534-2c9b4bd2c3b3] Running
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.010264168s
addons_test.go:430: (dbg) Run:  kubectl --context addons-100636 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:430: (dbg) Done: kubectl --context addons-100636 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (12.486040568s)
addons_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p addons-100636 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (18.19s)

TestAddons/parallel/CSI (42.61s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:518: csi-hostpath-driver pods stabilized in 27.314969ms
addons_test.go:521: (dbg) Run:  kubectl --context addons-100636 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:526: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-100636 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-100636 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:531: (dbg) Run:  kubectl --context addons-100636 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:536: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [2e26bfca-badd-4075-8b18-07e8ff1ff4fe] Pending
helpers_test.go:342: "task-pv-pod" [2e26bfca-badd-4075-8b18-07e8ff1ff4fe] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod" [2e26bfca-badd-4075-8b18-07e8ff1ff4fe] Running
addons_test.go:536: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.02541798s
addons_test.go:541: (dbg) Run:  kubectl --context addons-100636 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:546: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-100636 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-100636 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:551: (dbg) Run:  kubectl --context addons-100636 delete pod task-pv-pod
addons_test.go:551: (dbg) Done: kubectl --context addons-100636 delete pod task-pv-pod: (1.554213022s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-100636 delete pvc hpvc
addons_test.go:563: (dbg) Run:  kubectl --context addons-100636 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-100636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-100636 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-100636 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [ba84dec3-f3c4-40f9-81f3-774f7a44d7cd] Pending
helpers_test.go:342: "task-pv-pod-restore" [ba84dec3-f3c4-40f9-81f3-774f7a44d7cd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [ba84dec3-f3c4-40f9-81f3-774f7a44d7cd] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.014801636s
addons_test.go:583: (dbg) Run:  kubectl --context addons-100636 delete pod task-pv-pod-restore
addons_test.go:583: (dbg) Done: kubectl --context addons-100636 delete pod task-pv-pod-restore: (1.103210399s)
addons_test.go:587: (dbg) Run:  kubectl --context addons-100636 delete pvc hpvc-restore
addons_test.go:591: (dbg) Run:  kubectl --context addons-100636 delete volumesnapshot new-snapshot-demo
addons_test.go:595: (dbg) Run:  out/minikube-linux-amd64 -p addons-100636 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:595: (dbg) Done: out/minikube-linux-amd64 -p addons-100636 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.902857515s)
addons_test.go:599: (dbg) Run:  out/minikube-linux-amd64 -p addons-100636 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.61s)
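The CSI sequence above polls "kubectl get pvc ... -o jsonpath={.status.phase}" until each claim reports the phase the test is waiting for. A compact sketch of that polling pattern, with the context and claim name copied from the log and an arbitrary poll interval:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPVCPhase polls a claim's .status.phase until it matches want.
    func waitForPVCPhase(ctx, ns, pvc, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", ctx,
                "get", "pvc", pvc, "-n", ns,
                "-o", "jsonpath={.status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == want {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pvc %s/%s never reached phase %s", ns, pvc, want)
    }

    func main() {
        fmt.Println(waitForPVCPhase("addons-100636", "default", "hpvc", "Bound", 6*time.Minute))
    }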

TestAddons/parallel/Headlamp (11.1s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:774: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-100636 --alsologtostderr -v=1
addons_test.go:774: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-100636 --alsologtostderr -v=1: (1.076265625s)
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-764769c887-7cc8b" [8254edab-7806-4358-93fc-f8215729a785] Pending
helpers_test.go:342: "headlamp-764769c887-7cc8b" [8254edab-7806-4358-93fc-f8215729a785] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:342: "headlamp-764769c887-7cc8b" [8254edab-7806-4358-93fc-f8215729a785] Running
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.024834174s
--- PASS: TestAddons/parallel/Headlamp (11.10s)

TestAddons/parallel/CloudSpanner (5.48s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:342: "cloud-spanner-emulator-7d7766f55c-7b59c" [1f744ffc-1731-4603-a44d-7ebce1956a51] Running
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.014210204s
addons_test.go:798: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-100636
--- PASS: TestAddons/parallel/CloudSpanner (5.48s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:607: (dbg) Run:  kubectl --context addons-100636 create ns new-namespace
addons_test.go:621: (dbg) Run:  kubectl --context addons-100636 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (13.55s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:139: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-100636
addons_test.go:139: (dbg) Done: out/minikube-linux-amd64 stop -p addons-100636: (13.330615083s)
addons_test.go:143: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-100636
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-100636
--- PASS: TestAddons/StoppedEnableDisable (13.55s)

TestCertOptions (112.05s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-110752 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-110752 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m50.399385937s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-110752 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-110752 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-110752 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-110752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-110752
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-110752: (1.093560265s)
--- PASS: TestCertOptions (112.05s)

TestCertExpiration (294.18s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-110752 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-110752 --memory=2048 --cert-expiration=3m --driver=kvm2 : (53.723457388s)
E0114 11:08:50.119391   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 11:09:07.068963   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-110752 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-110752 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (59.045336901s)
helpers_test.go:175: Cleaning up "cert-expiration-110752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-110752
E0114 11:12:46.722378   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-110752: (1.405743785s)
--- PASS: TestCertExpiration (294.18s)
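The two starts above differ only in --cert-expiration: 3m forces the cluster certificates to lapse while the test waits, and the second start's 8760h works out to exactly 365 days. A quick check of both values with time.ParseDuration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        short, _ := time.ParseDuration("3m")
        long, _ := time.ParseDuration("8760h")
        fmt.Println(short)             // 3m0s
        fmt.Println(long.Hours() / 24) // 365 (days)
    }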

TestDockerFlags (131.5s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-110752 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-110752 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (2m9.801053055s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-110752 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-110752 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-110752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-110752
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-110752: (1.09623948s)
--- PASS: TestDockerFlags (131.50s)
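TestDockerFlags asserts that the --docker-env pairs survive into the Docker systemd unit, which "systemctl show docker --property=Environment" reports as a single Environment= line. A sketch of pulling the key/value pairs back out of such a line (the input string is a plausible shape, not the test's captured output):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Shape of `systemctl show docker --property=Environment` output.
        line := "Environment=FOO=BAR BAZ=BAT"
        body := strings.TrimPrefix(line, "Environment=")
        for _, pair := range strings.Fields(body) {
            if k, v, ok := strings.Cut(pair, "="); ok {
                fmt.Printf("%s => %s\n", k, v)
            }
        }
    }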

TestForceSystemdFlag (59.23s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-110935 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-110935 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (57.669058362s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-110935 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-110935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-110935
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-110935: (1.242373043s)
--- PASS: TestForceSystemdFlag (59.23s)

TestForceSystemdEnv (92.43s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-111004 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-111004 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m30.906663462s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-111004 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-111004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-111004
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-111004: (1.18141822s)
--- PASS: TestForceSystemdEnv (92.43s)

TestKVMDriverInstallOrUpdate (5.72s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.72s)

TestErrorSpam/setup (55.94s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-101826 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-101826 --driver=kvm2 
E0114 10:19:07.069116   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:19:07.074736   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:19:07.084943   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:19:07.105198   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:19:07.145464   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:19:07.225786   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:19:07.386156   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:19:07.706710   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:19:08.347694   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:19:09.627960   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:19:12.188765   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:19:17.309310   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-101826 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-101826 --driver=kvm2 : (55.942531956s)
--- PASS: TestErrorSpam/setup (55.94s)
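TestErrorSpam starts a cluster and then scans the command output for unexpected warning or error lines; the sub-tests that follow re-run individual commands against the same profile and apply the same scan. A minimal sketch of that kind of scan (the keyword list is illustrative, not the test's actual allow/deny lists):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // spamLines returns output lines that look like warnings or errors.
    func spamLines(output string) []string {
        var bad []string
        sc := bufio.NewScanner(strings.NewReader(output))
        for sc.Scan() {
            line := sc.Text()
            for _, kw := range []string{"error", "fail", "warning"} {
                if strings.Contains(strings.ToLower(line), kw) {
                    bad = append(bad, line)
                    break
                }
            }
        }
        return bad
    }

    func main() {
        out := "* minikube v1.28.0 on Ubuntu 20.04\n! WARNING: something odd\n* Done!"
        fmt.Println(spamLines(out))
    }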

TestErrorSpam/start (0.41s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101826 --log_dir /tmp/nospam-101826 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101826 --log_dir /tmp/nospam-101826 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101826 --log_dir /tmp/nospam-101826 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

TestErrorSpam/status (0.77s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101826 --log_dir /tmp/nospam-101826 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101826 --log_dir /tmp/nospam-101826 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101826 --log_dir /tmp/nospam-101826 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.26s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101826 --log_dir /tmp/nospam-101826 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101826 --log_dir /tmp/nospam-101826 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101826 --log_dir /tmp/nospam-101826 pause
--- PASS: TestErrorSpam/pause (1.26s)

TestErrorSpam/unpause (1.37s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101826 --log_dir /tmp/nospam-101826 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101826 --log_dir /tmp/nospam-101826 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101826 --log_dir /tmp/nospam-101826 unpause
--- PASS: TestErrorSpam/unpause (1.37s)

TestErrorSpam/stop (3.59s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101826 --log_dir /tmp/nospam-101826 stop
E0114 10:19:27.550368   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-101826 --log_dir /tmp/nospam-101826 stop: (3.406912265s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101826 --log_dir /tmp/nospam-101826 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-101826 --log_dir /tmp/nospam-101826 stop
--- PASS: TestErrorSpam/stop (3.59s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /home/jenkins/minikube-integration/15642-4002/.minikube/files/etc/test/nested/copy/10851/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (81.54s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-linux-amd64 start -p functional-101929 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
E0114 10:19:48.030623   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:20:28.991780   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
functional_test.go:2161: (dbg) Done: out/minikube-linux-amd64 start -p functional-101929 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m21.535946437s)
--- PASS: TestFunctional/serial/StartWithProxy (81.54s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (45.11s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-linux-amd64 start -p functional-101929 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-linux-amd64 start -p functional-101929 --alsologtostderr -v=8: (45.113555515s)
functional_test.go:656: soft start took 45.114185433s for "functional-101929" cluster.
--- PASS: TestFunctional/serial/SoftStart (45.11s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-101929 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-101929 cache add k8s.gcr.io/pause:3.1: (1.465904836s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-101929 cache add k8s.gcr.io/pause:3.3: (1.394074199s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-101929 cache add k8s.gcr.io/pause:latest: (1.210808052s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.07s)
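
For reference, the remote-image caching flow exercised above can be reproduced by hand (a minimal sketch; <profile> is a placeholder for a profile name such as the one under test):

	# cache an image from a remote registry, then confirm the node's runtime sees it
	$ minikube -p <profile> cache add k8s.gcr.io/pause:3.1
	$ minikube cache list
	$ minikube -p <profile> ssh "sudo crictl images"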

TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-101929 /tmp/TestFunctionalserialCacheCmdcacheadd_local3648899769/001
functional_test.go:1082: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 cache add minikube-local-cache-test:functional-101929
functional_test.go:1082: (dbg) Done: out/minikube-linux-amd64 -p functional-101929 cache add minikube-local-cache-test:functional-101929: (1.222113424s)
functional_test.go:1087: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 cache delete minikube-local-cache-test:functional-101929
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-101929
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.45s)
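
The local variant is the same flow with an image built on the host first (a sketch; the image tag is illustrative, not the test's generated one):

	# build a throwaway image on the host, push it into the cache, then clean up
	$ docker build -t local-cache-demo:dev .
	$ minikube -p <profile> cache add local-cache-demo:dev
	$ minikube -p <profile> cache delete local-cache-demo:dev
	$ docker rmi local-cache-demo:dev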

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101929 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (229.712663ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 cache reload
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.60s)
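
The non-zero crictl exit above is the setup, not the failure: the test deletes a cached image inside the node and verifies that cache reload restores it (a sketch; <profile> is a placeholder):

	# remove the image inside the node; crictl inspecti now exits 1
	$ minikube -p <profile> ssh "sudo docker rmi k8s.gcr.io/pause:latest"
	$ minikube -p <profile> ssh "sudo crictl inspecti k8s.gcr.io/pause:latest"
	# reload pushes every host-side cached image back into the node
	$ minikube -p <profile> cache reload
	$ minikube -p <profile> ssh "sudo crictl inspecti k8s.gcr.io/pause:latest"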

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 kubectl -- --context functional-101929 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-101929 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (46.84s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-linux-amd64 start -p functional-101929 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0114 10:21:50.915109   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
functional_test.go:750: (dbg) Done: out/minikube-linux-amd64 start -p functional-101929 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.836779249s)
functional_test.go:754: restart took 46.836913781s for "functional-101929" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (46.84s)
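
Each --extra-config value is threaded through to the named control-plane component in the form <component>.<flag>=<value>; the restart above passes an admission-plugin list to the apiserver (a sketch of the same invocation; <profile> is a placeholder):

	$ minikube start -p <profile> \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	    --wait=all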

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-101929 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.17s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 logs
functional_test.go:1229: (dbg) Done: out/minikube-linux-amd64 -p functional-101929 logs: (1.171883495s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

TestFunctional/serial/LogsFileCmd (1.15s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 logs --file /tmp/TestFunctionalserialLogsFileCmd1814031302/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-linux-amd64 -p functional-101929 logs --file /tmp/TestFunctionalserialLogsFileCmd1814031302/001/logs.txt: (1.149118356s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.15s)

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101929 config get cpus: exit status 14 (76.502871ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 config set cpus 2
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101929 config get cpus: exit status 14 (71.832157ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
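
The two exit-status-14 results above are expected: config get returns 14 whenever the requested key is unset, which is exactly what the test toggles (a sketch; <profile> is a placeholder):

	$ minikube -p <profile> config get cpus      # unset key -> exit status 14
	$ minikube -p <profile> config set cpus 2
	$ minikube -p <profile> config get cpus      # prints 2, exit status 0
	$ minikube -p <profile> config unset cpus
	$ minikube -p <profile> config get cpus      # unset again -> exit status 14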

TestFunctional/parallel/DryRun (0.35s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-linux-amd64 start -p functional-101929 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-101929 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (183.71767ms)
-- stdout --
	* [functional-101929] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-4002/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-4002/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0114 10:22:48.276818   15884 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:22:48.276945   15884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:22:48.276957   15884 out.go:309] Setting ErrFile to fd 2...
	I0114 10:22:48.276965   15884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:22:48.277063   15884 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-4002/.minikube/bin
	I0114 10:22:48.277621   15884 out.go:303] Setting JSON to false
	I0114 10:22:48.278738   15884 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3914,"bootTime":1673687854,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:22:48.278798   15884 start.go:135] virtualization: kvm guest
	I0114 10:22:48.281233   15884 out.go:177] * [functional-101929] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:22:48.282727   15884 notify.go:220] Checking for updates...
	I0114 10:22:48.284307   15884 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:22:48.285867   15884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:22:48.287287   15884 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-4002/kubeconfig
	I0114 10:22:48.288809   15884 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-4002/.minikube
	I0114 10:22:48.290516   15884 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:22:48.292510   15884 config.go:180] Loaded profile config "functional-101929": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 10:22:48.294880   15884 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0114 10:22:48.294917   15884 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:22:48.316321   15884 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:43205
	I0114 10:22:48.316742   15884 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:22:48.317329   15884 main.go:134] libmachine: Using API Version  1
	I0114 10:22:48.317350   15884 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:22:48.317698   15884 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:22:48.317858   15884 main.go:134] libmachine: (functional-101929) Calling .DriverName
	I0114 10:22:48.318025   15884 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:22:48.318334   15884 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0114 10:22:48.318353   15884 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:22:48.333648   15884 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:35635
	I0114 10:22:48.334158   15884 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:22:48.334707   15884 main.go:134] libmachine: Using API Version  1
	I0114 10:22:48.334724   15884 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:22:48.334977   15884 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:22:48.335145   15884 main.go:134] libmachine: (functional-101929) Calling .DriverName
	I0114 10:22:48.371646   15884 out.go:177] * Using the kvm2 driver based on existing profile
	I0114 10:22:48.372926   15884 start.go:294] selected driver: kvm2
	I0114 10:22:48.372943   15884 start.go:838] validating driver "kvm2" against &{Name:functional-101929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.25.3 ClusterName:functional-101929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:fals
e nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:22:48.373081   15884 start.go:849] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:22:48.375235   15884 out.go:177] 
	W0114 10:22:48.376657   15884 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0114 10:22:48.377892   15884 out.go:177] 
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-linux-amd64 start -p functional-101929 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.35s)
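
Exit status 23 corresponds to the RSRC_INSUFFICIENT_REQ_MEMORY error shown above: --dry-run performs the full validation pass (driver selection, profile reconciliation, resource checks) without creating or touching the VM (a sketch; <profile> is a placeholder):

	# rejected up front: 250MB is below the usable minimum of 1800MB
	$ minikube start -p <profile> --dry-run --memory 250MB --driver=kvm2
	# same validation against the existing profile, no memory override
	$ minikube start -p <profile> --dry-run --driver=kvm2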

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 start -p functional-101929 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-101929 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (167.889551ms)
-- stdout --
	* [functional-101929] minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-4002/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-4002/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0114 10:22:48.103092   15824 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:22:48.103265   15824 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:22:48.103286   15824 out.go:309] Setting ErrFile to fd 2...
	I0114 10:22:48.103297   15824 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:22:48.103475   15824 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-4002/.minikube/bin
	I0114 10:22:48.103991   15824 out.go:303] Setting JSON to false
	I0114 10:22:48.104893   15824 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3914,"bootTime":1673687854,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:22:48.104971   15824 start.go:135] virtualization: kvm guest
	I0114 10:22:48.107913   15824 out.go:177] * [functional-101929] minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	I0114 10:22:48.109764   15824 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:22:48.109668   15824 notify.go:220] Checking for updates...
	I0114 10:22:48.111492   15824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:22:48.113028   15824 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-4002/kubeconfig
	I0114 10:22:48.114517   15824 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-4002/.minikube
	I0114 10:22:48.115921   15824 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:22:48.117617   15824 config.go:180] Loaded profile config "functional-101929": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 10:22:48.118053   15824 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0114 10:22:48.118104   15824 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:22:48.136365   15824 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:45417
	I0114 10:22:48.136744   15824 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:22:48.137265   15824 main.go:134] libmachine: Using API Version  1
	I0114 10:22:48.137286   15824 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:22:48.137626   15824 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:22:48.137801   15824 main.go:134] libmachine: (functional-101929) Calling .DriverName
	I0114 10:22:48.138069   15824 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:22:48.138461   15824 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0114 10:22:48.138494   15824 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:22:48.153476   15824 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0114 10:22:48.153943   15824 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:22:48.154491   15824 main.go:134] libmachine: Using API Version  1
	I0114 10:22:48.154519   15824 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:22:48.154874   15824 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:22:48.155129   15824 main.go:134] libmachine: (functional-101929) Calling .DriverName
	I0114 10:22:48.187638   15824 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0114 10:22:48.188987   15824 start.go:294] selected driver: kvm2
	I0114 10:22:48.189014   15824 start.go:838] validating driver "kvm2" against &{Name:functional-101929 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.25.3 ClusterName:functional-101929 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.97 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:fals
e nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:22:48.189181   15824 start.go:849] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:22:48.191748   15824 out.go:177] 
	W0114 10:22:48.193133   15824 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0114 10:22:48.194577   15824 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.07s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:853: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:865: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)
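
The three invocations cover the status command's output modes: plain text, a Go template over the status fields, and JSON (a sketch; <profile> is a placeholder and the template fields mirror those used above):

	$ minikube -p <profile> status
	$ minikube -p <profile> status -f 'host:{{.Host}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	$ minikube -p <profile> status -o json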

TestFunctional/parallel/ServiceCmd (13.59s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-101929 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1439: (dbg) Run:  kubectl --context functional-101929 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-p2jf4" [0f571111-f855-47b8-8e09-3b795fa3cb4f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-p2jf4" [0f571111-f855-47b8-8e09-3b795fa3cb4f] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 12.017431766s
functional_test.go:1449: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 service list
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 service --namespace=default --https --url hello-node
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1476: found endpoint: https://192.168.39.97:32425
functional_test.go:1491: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 service hello-node --url --format={{.IP}}
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 service hello-node --url
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1511: found endpoint for hello-node: http://192.168.39.97:32425
--- PASS: TestFunctional/parallel/ServiceCmd (13.59s)
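
The service workflow is the standard NodePort round trip: deploy, expose, then let minikube resolve the node IP and assigned port (a sketch using the same names as the test; <profile> is a placeholder):

	$ kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
	$ kubectl expose deployment hello-node --type=NodePort --port=8080
	$ minikube -p <profile> service list
	$ minikube -p <profile> service --https --url hello-node
	$ minikube -p <profile> service hello-node --url    # e.g. http://192.168.39.97:32425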

TestFunctional/parallel/ServiceCmdConnect (12.56s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-101929 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1565: (dbg) Run:  kubectl --context functional-101929 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-qmp48" [39e7ba65-ae19-4715-a103-4c783174ba01] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-6458c8fb6f-qmp48" [39e7ba65-ae19-4715-a103-4c783174ba01] Running
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.014004217s
functional_test.go:1579: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 service hello-node-connect --url
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.39.97:30043
functional_test.go:1605: http://192.168.39.97:30043: success! body:

Hostname: hello-node-connect-6458c8fb6f-qmp48

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.97:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.97:30043
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.56s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (54.69s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "storage-provisioner" [ea36b82b-13e4-4f85-87f3-9ae5f50aede0] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01815526s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-101929 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-101929 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-101929 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-101929 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [db28c053-2b62-4cfe-9a29-f19ea84f3788] Pending
helpers_test.go:342: "sp-pod" [db28c053-2b62-4cfe-9a29-f19ea84f3788] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [db28c053-2b62-4cfe-9a29-f19ea84f3788] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.011178922s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-101929 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-101929 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-101929 delete -f testdata/storage-provisioner/pod.yaml: (1.709805728s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-101929 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [e021a905-c087-4199-9568-581c1ae37d6a] Pending
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [e021a905-c087-4199-9568-581c1ae37d6a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [e021a905-c087-4199-9568-581c1ae37d6a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.012261559s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-101929 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (54.69s)
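
The delete-and-reapply step in the middle is the point of the test: data written to the claim must outlive the pod that wrote it. The check reduces to (a sketch; the manifests are the testdata files referenced above):

	$ kubectl apply -f testdata/storage-provisioner/pvc.yaml
	$ kubectl apply -f testdata/storage-provisioner/pod.yaml
	$ kubectl exec sp-pod -- touch /tmp/mount/foo
	$ kubectl delete -f testdata/storage-provisioner/pod.yaml
	$ kubectl apply -f testdata/storage-provisioner/pod.yaml
	$ kubectl exec sp-pod -- ls /tmp/mount    # foo survives the pod restart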

TestFunctional/parallel/SSHCmd (0.47s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1672: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

TestFunctional/parallel/CpCmd (1.01s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 cp testdata/cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh -n functional-101929 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 cp functional-101929:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3648194786/001/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh -n functional-101929 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.01s)
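
The cp test round-trips a file through the VM: copy in, verify over ssh, then copy back out using the <node>:<path> addressing form (a sketch mirroring the test; <profile> is a placeholder):

	$ minikube -p <profile> cp testdata/cp-test.txt /home/docker/cp-test.txt
	$ minikube -p <profile> ssh -n <profile> "sudo cat /home/docker/cp-test.txt"
	$ minikube -p <profile> cp <profile>:/home/docker/cp-test.txt /tmp/cp-test.txt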

TestFunctional/parallel/MySQL (32.02s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-101929 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-mphb5" [d2b9a27b-e145-480f-ab22-9370cbc49fe6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-mphb5" [d2b9a27b-e145-480f-ab22-9370cbc49fe6] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.015021348s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-101929 exec mysql-596b7fcdbf-mphb5 -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-101929 exec mysql-596b7fcdbf-mphb5 -- mysql -ppassword -e "show databases;": exit status 1 (388.665302ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-101929 exec mysql-596b7fcdbf-mphb5 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-101929 exec mysql-596b7fcdbf-mphb5 -- mysql -ppassword -e "show databases;": exit status 1 (202.197847ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-101929 exec mysql-596b7fcdbf-mphb5 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-101929 exec mysql-596b7fcdbf-mphb5 -- mysql -ppassword -e "show databases;": exit status 1 (316.552677ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-101929 exec mysql-596b7fcdbf-mphb5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.02s)
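
The access-denied and socket errors before the final success are ordinary mysqld startup noise, which is why the test retries the query rather than failing on the first non-zero exit. A hand-rolled equivalent (a sketch; the app=mysql selector matches the testdata deployment):

	$ POD=$(kubectl get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
	$ until kubectl exec "$POD" -- mysql -ppassword -e "show databases;"; do sleep 2; done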

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/10851/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh "sudo cat /etc/test/nested/copy/10851/hosts"
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)
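
The synced hosts file comes from the local sync path noted under CopySyncFile: files placed beneath the .minikube/files/ tree of the active MINIKUBE_HOME are mirrored into the VM at the corresponding absolute path when the cluster starts (a sketch under that assumption; paths mirror the test's):

	$ mkdir -p ~/.minikube/files/etc/test/nested/copy/10851
	$ echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/10851/hosts
	$ minikube start -p <profile>
	$ minikube -p <profile> ssh "cat /etc/test/nested/copy/10851/hosts"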

TestFunctional/parallel/CertSync (1.52s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/10851.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh "sudo cat /etc/ssl/certs/10851.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/10851.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh "sudo cat /usr/share/ca-certificates/10851.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/108512.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh "sudo cat /etc/ssl/certs/108512.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/108512.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh "sudo cat /usr/share/ca-certificates/108512.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.52s)
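
Each synced certificate is verified in three places: the file minikube drops into /etc/ssl/certs, its copy under /usr/share/ca-certificates, and the OpenSSL subject-hash alias (51391683.0, 3ec20f2e.0) that TLS libraries actually look up. A standalone sketch of that three-way check (minikube binary path and file names from the log; the helper shape is assumed):

// cert_sync_check.go - sketch of verifying one cert in all three locations.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/10851.pem",
		"/usr/share/ca-certificates/10851.pem",
		"/etc/ssl/certs/51391683.0", // subject-hash alias of 10851.pem
	}
	for _, p := range paths {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-101929",
			"ssh", "sudo cat "+p).Output()
		if err != nil {
			fmt.Printf("%s: missing (%v)\n", p, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", p, len(out))
	}
}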

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-101929 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh "sudo systemctl is-active crio"
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101929 ssh "sudo systemctl is-active crio": exit status 1 (277.589433ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)
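
A pass here means the alternate runtime is installed but stopped: systemctl is-active prints "inactive" and exits with status 3, which ssh propagates as the non-zero exit the test expects. A sketch of that check (hypothetical helper, not the functional_test.go code):

// runtime_inactive.go - sketch; assumes the minikube binary path below.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-101929",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	// "inactive" plus a non-zero exit (systemd returns 3) is the expected
	// result: crio exists on the image, but only docker is running.
	if err != nil && strings.TrimSpace(string(out)) == "inactive" {
		fmt.Println("crio is installed but disabled, as expected")
		return
	}
	fmt.Printf("unexpected state: %q, err=%v\n", out, err)
}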

                                                
                                    
TestFunctional/parallel/License (0.53s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.53s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "283.84329ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "72.046064ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "236.507967ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "73.161078ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.56s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-101929 /tmp/TestFunctionalparallelMountCmdany-port1369951312/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1673691756250419096" to /tmp/TestFunctionalparallelMountCmdany-port1369951312/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1673691756250419096" to /tmp/TestFunctionalparallelMountCmdany-port1369951312/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1673691756250419096" to /tmp/TestFunctionalparallelMountCmdany-port1369951312/001/test-1673691756250419096
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101929 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (239.260169ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 14 10:22 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 14 10:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 14 10:22 test-1673691756250419096
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh cat /mount-9p/test-1673691756250419096
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-101929 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [79f32661-00fa-4f08-8bdf-e3fccba88898] Pending
helpers_test.go:342: "busybox-mount" [79f32661-00fa-4f08-8bdf-e3fccba88898] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:342: "busybox-mount" [79f32661-00fa-4f08-8bdf-e3fccba88898] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:342: "busybox-mount" [79f32661-00fa-4f08-8bdf-e3fccba88898] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.010146682s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-101929 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-101929 /tmp/TestFunctionalparallelMountCmdany-port1369951312/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.56s)
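
The failed findmnt probe at the start of this block is pure timing: minikube mount runs as a daemon and the 9p export shows up a moment after launch, so the test polls until it appears. A sketch of that poll (paths and profile name from the log; the helper shape is assumed):

// wait_9p.go - sketch of waiting for the 9p mount to become visible.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 30; i++ {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-101929",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("/mount-9p is served over 9p")
			return
		}
		time.Sleep(time.Second) // the mount daemon may still be starting
	}
	fmt.Println("9p mount never appeared")
}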

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.08s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-101929 /tmp/TestFunctionalparallelMountCmdspecific-port3360609635/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101929 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (267.38157ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-101929 /tmp/TestFunctionalparallelMountCmdspecific-port3360609635/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101929 ssh "sudo umount -f /mount-9p": exit status 1 (262.503567ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-101929 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-101929 /tmp/TestFunctionalparallelMountCmdspecific-port3360609635/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.08s)

                                                
                                    
TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
TestFunctional/parallel/Version/components (0.81s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.81s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.18s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-101929 docker-env) && out/minikube-linux-amd64 status -p functional-101929"
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-101929 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-101929 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-101929
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-101929
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-101929 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.25.3           | 0346dbd74bcb9 | 128MB  |
| registry.k8s.io/kube-scheduler              | v1.25.3           | 6d23ec0e8b87e | 50.6MB |
| gcr.io/google-containers/addon-resizer      | functional-101929 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-controller-manager     | v1.25.3           | 6039992312758 | 117MB  |
| k8s.gcr.io/pause                            | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-101929 | fe4f465762900 | 30B    |
| docker.io/library/nginx                     | latest            | a99a39d070bfd | 142MB  |
| registry.k8s.io/kube-proxy                  | v1.25.3           | beaaf00edd38a | 61.7MB |
| registry.k8s.io/pause                       | 3.8               | 4873874c08efc | 711kB  |
| registry.k8s.io/etcd                        | 3.5.4-0           | a8a176a5d5d69 | 300MB  |
| docker.io/library/mysql                     | 5.7               | d410f4167eea9 | 495MB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-101929 image ls --format json:
[{"id":"fe4f465762900d7bf9d3bd81fe2c69c2f4cd449a890838199c2513f5ac404eef","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-101929"],"size":"30"},{"id":"d410f4167eea912908b2f9bcc24eff870cb3c131dfb755088b79a4188bfeb40f","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"495000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"50600000"},{"id":"a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"300000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bd
b1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"128000000"},{"id":"beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"61700000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest
"],"size":"142000000"},{"id":"60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"117000000"},{"id":"4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.8"],"size":"711000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-101929"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-101929 image ls --format yaml:
- id: a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "300000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "128000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: d410f4167eea912908b2f9bcc24eff870cb3c131dfb755088b79a4188bfeb40f
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "495000000"
- id: 60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "117000000"
- id: beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "61700000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: fe4f465762900d7bf9d3bd81fe2c69c2f4cd449a890838199c2513f5ac404eef
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-101929
size: "30"
- id: 6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "50600000"
- id: 4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.8
size: "711000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-101929
size: "32900000"
- id: a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-101929 ssh pgrep buildkitd: exit status 1 (309.629035ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image build -t localhost/my-image:functional-101929 testdata/build
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-101929 image build -t localhost/my-image:functional-101929 testdata/build: (4.211679209s)
functional_test.go:316: (dbg) Stdout: out/minikube-linux-amd64 -p functional-101929 image build -t localhost/my-image:functional-101929 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 47b19c3cbb6e
Removing intermediate container 47b19c3cbb6e
---> 0c2622f2765d
Step 3/3 : ADD content.txt /
---> da9da5899c3d
Successfully built da9da5899c3d
Successfully tagged localhost/my-image:functional-101929
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.76s)
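
The three build steps pin down the Dockerfile in testdata/build fairly exactly; reconstructed from the output above (the real file may differ in comments or whitespace):

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /

The no-op RUN adds a layer, and the ADD step confirms that files from the build context reach the Docker daemon inside the VM.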

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.04s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.011903811s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-101929
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image load --daemon gcr.io/google-containers/addon-resizer:functional-101929
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-101929 image load --daemon gcr.io/google-containers/addon-resizer:functional-101929: (4.37928646s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image load --daemon gcr.io/google-containers/addon-resizer:functional-101929
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-101929 image load --daemon gcr.io/google-containers/addon-resizer:functional-101929: (2.557007728s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.897458626s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-101929
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image load --daemon gcr.io/google-containers/addon-resizer:functional-101929
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-101929 image load --daemon gcr.io/google-containers/addon-resizer:functional-101929: (3.952392777s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image save gcr.io/google-containers/addon-resizer:functional-101929 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-101929 image save gcr.io/google-containers/addon-resizer:functional-101929 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar: (1.88373321s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image rm gcr.io/google-containers/addon-resizer:functional-101929
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar
functional_test.go:405: (dbg) Done: out/minikube-linux-amd64 -p functional-101929 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar: (2.184596361s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-101929
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-101929 image save --daemon gcr.io/google-containers/addon-resizer:functional-101929
functional_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p functional-101929 image save --daemon gcr.io/google-containers/addon-resizer:functional-101929: (3.371199704s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-101929
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.42s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-101929
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-101929
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-101929
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestGvisorAddon (273.72s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-110944 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-110944 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m15.322393726s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-110944 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-110944 cache add gcr.io/k8s-minikube/gvisor-addon:2: (21.825702516s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-110944 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-110944 addons enable gvisor: (4.135004396s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:342: "gvisor" [56dd0d9a-29bf-4996-bef9-d7e43f75df46] Running
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "kube-system" "kubernetes.io/minikube-addons=gvisor" returned: Get "https://192.168.72.4:8443/api/v1/namespaces/kube-system/pods?labelSelector=kubernetes.io%2Fminikube-addons%3Dgvisor": dial tcp 192.168.72.4:8443: connect: connection refused
[previous warning repeated six more times while the apiserver was briefly unreachable]
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 21.020784227s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-110944 replace --force -f testdata/nginx-untrusted.yaml
gvisor_addon_test.go:73: (dbg) Done: kubectl --context gvisor-110944 replace --force -f testdata/nginx-untrusted.yaml: (1.128723152s)
gvisor_addon_test.go:78: (dbg) Run:  kubectl --context gvisor-110944 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:83: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,untrusted=true" in namespace "default" ...
helpers_test.go:342: "nginx-untrusted" [c2af9226-eeb0-460d-afc1-2e396f3a1d12] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx-untrusted" [c2af9226-eeb0-460d-afc1-2e396f3a1d12] Running
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.72.4:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.72.4:8443: connect: connection refused
[previous warning repeated 22 more times while the apiserver was briefly unreachable]
gvisor_addon_test.go:83: (dbg) TestGvisorAddon: run=nginx,untrusted=true healthy within 45.011434133s
gvisor_addon_test.go:86: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:342: "nginx-gvisor" [bd516229-9079-43e7-8f3e-6fa361ba2004] Running
E0114 11:12:34.194888   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
gvisor_addon_test.go:86: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.010501306s
gvisor_addon_test.go:91: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-110944
gvisor_addon_test.go:91: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-110944: (2.447815865s)
gvisor_addon_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-110944 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0114 11:12:41.602047   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory
E0114 11:12:41.607365   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory
E0114 11:12:41.617691   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory
E0114 11:12:41.637973   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory
E0114 11:12:41.678278   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory
E0114 11:12:41.758686   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory
E0114 11:12:41.919496   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory
E0114 11:12:42.240014   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory
E0114 11:12:42.880714   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory
E0114 11:12:44.161217   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory
=== CONT  TestGvisorAddon
gvisor_addon_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-110944 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m21.348246161s)
gvisor_addon_test.go:100: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:342: "gvisor" [56dd0d9a-29bf-4996-bef9-d7e43f75df46] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
E0114 11:14:03.524644   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory
E0114 11:14:07.068667   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
gvisor_addon_test.go:100: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.032026888s
gvisor_addon_test.go:103: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,untrusted=true" in namespace "default" ...
helpers_test.go:342: "nginx-untrusted" [c2af9226-eeb0-460d-afc1-2e396f3a1d12] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestGvisorAddon
gvisor_addon_test.go:103: (dbg) TestGvisorAddon: run=nginx,untrusted=true healthy within 5.007492354s
gvisor_addon_test.go:106: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:342: "nginx-gvisor" [bd516229-9079-43e7-8f3e-6fa361ba2004] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:106: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.008214489s
helpers_test.go:175: Cleaning up "gvisor-110944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-110944
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-110944: (1.202957035s)
--- PASS: TestGvisorAddon (273.72s)

+ TestIngressAddonLegacy/StartLegacyK8sCluster (111.6s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-102330 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E0114 10:24:07.068992   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:24:34.755674   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-102330 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m51.599699441s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (111.60s)

+ TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.27s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102330 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-102330 addons enable ingress --alsologtostderr -v=5: (18.274337707s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.27s)

+ TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.46s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102330 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.46s)

+ TestIngressAddonLegacy/serial/ValidateIngressAddons (32.18s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:169: (dbg) Run:  kubectl --context ingress-addon-legacy-102330 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:169: (dbg) Done: kubectl --context ingress-addon-legacy-102330 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.60221226s)
addons_test.go:189: (dbg) Run:  kubectl --context ingress-addon-legacy-102330 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:202: (dbg) Run:  kubectl --context ingress-addon-legacy-102330 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:207: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [1c30ec35-e2bf-4258-aecd-375a724d2af0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [1c30ec35-e2bf-4258-aecd-375a724d2af0] Running
addons_test.go:207: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.017975435s
addons_test.go:219: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102330 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:243: (dbg) Run:  kubectl --context ingress-addon-legacy-102330 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102330 ip
addons_test.go:254: (dbg) Run:  nslookup hello-john.test 192.168.39.154
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102330 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-102330 addons disable ingress-dns --alsologtostderr -v=1: (2.082780438s)
addons_test.go:268: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102330 addons disable ingress --alsologtostderr -v=1
addons_test.go:268: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-102330 addons disable ingress --alsologtostderr -v=1: (7.356840415s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (32.18s)

+ TestJSONOutput/start/Command (70.6s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-102613 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-102613 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m10.598063624s)
--- PASS: TestJSONOutput/start/Command (70.60s)

+ TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

+ TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

+ TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

+ TestJSONOutput/pause/Command (0.64s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-102613 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

+ TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

+ TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

+ TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

+ TestJSONOutput/unpause/Command (0.61s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-102613 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

+ TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

+ TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

+ TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

+ TestJSONOutput/stop/Command (13.13s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-102613 --output=json --user=testUser
E0114 10:27:34.196738   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:27:34.202004   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:27:34.212232   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:27:34.232468   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:27:34.272774   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:27:34.353132   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:27:34.513562   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:27:34.834131   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:27:35.475064   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:27:36.755560   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-102613 --output=json --user=testUser: (13.125280966s)
--- PASS: TestJSONOutput/stop/Command (13.13s)

+ TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

+ TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

+ TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

+ TestErrorJSONOutput (0.26s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-102739 --memory=2200 --output=json --wait=true --driver=fail
E0114 10:27:39.316674   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-102739 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.018681ms)
-- stdout --
	{"specversion":"1.0","id":"8ba3f3bd-c68f-4794-85a9-9d7f0be6f944","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-102739] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0fa212d0-c0a6-48ef-bdb0-96665ed11e42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15642"}}
	{"specversion":"1.0","id":"31e1667a-7c00-424e-9cf5-688bcbade0d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5ef7ea62-5f9f-4d94-ad58-36a0b66d16dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15642-4002/kubeconfig"}}
	{"specversion":"1.0","id":"cede67bc-8bc3-4060-9cb8-bafddfa4c310","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-4002/.minikube"}}
	{"specversion":"1.0","id":"35c7902a-f6aa-413d-bcda-dcb3dccf6cbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"47f3b982-4c6d-498d-8d52-1f733ef65d36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-102739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-102739
--- PASS: TestErrorJSONOutput (0.26s)

+ TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

+ TestMinikubeProfile (110.28s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-102739 --driver=kvm2 
E0114 10:27:44.437413   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:27:54.678154   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:28:15.158527   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-102739 --driver=kvm2 : (53.39705103s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-102739 --driver=kvm2 
E0114 10:28:56.119624   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:29:07.069113   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-102739 --driver=kvm2 : (53.989113165s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-102739
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-102739
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-102739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-102739
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-102739: (1.002500751s)
helpers_test.go:175: Cleaning up "first-102739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-102739
--- PASS: TestMinikubeProfile (110.28s)

+ TestMountStart/serial/StartWithMountFirst (28.01s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-102929 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-102929 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.006907634s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.01s)

+ TestMountStart/serial/VerifyMountFirst (0.43s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-102929 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-102929 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)

+ TestMountStart/serial/StartWithMountSecond (30.74s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-102929 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0114 10:30:18.040750   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-102929 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (29.74368646s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.74s)

+ TestMountStart/serial/VerifyMountSecond (0.43s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102929 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102929 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.43s)

+ TestMountStart/serial/DeleteFirst (0.9s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-102929 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.90s)

+ TestMountStart/serial/VerifyMountPostDelete (0.42s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102929 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102929 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

+ TestMountStart/serial/Stop (2.1s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-102929
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-102929: (2.102597943s)
--- PASS: TestMountStart/serial/Stop (2.10s)

+ TestMountStart/serial/RestartStopped (23.09s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-102929
E0114 10:30:40.545173   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:30:40.550448   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:30:40.560693   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:30:40.581032   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:30:40.621355   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:30:40.701694   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:30:40.862100   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:30:41.182631   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:30:41.823697   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:30:43.104169   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:30:45.664943   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:30:50.785835   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-102929: (22.089530378s)
--- PASS: TestMountStart/serial/RestartStopped (23.09s)

+ TestMountStart/serial/VerifyMountPostStop (0.43s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102929 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-102929 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.43s)

+ TestMultiNode/serial/FreshStart2Nodes (158.13s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-103057 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0114 10:31:01.026496   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:31:21.507357   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:32:02.467510   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:32:34.193902   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:33:01.881284   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:33:24.388647   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-103057 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m37.689868263s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (158.13s)

+ TestMultiNode/serial/DeployApp2Nodes (5.37s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103057 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103057 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-103057 -- rollout status deployment/busybox: (3.461176901s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103057 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103057 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103057 -- exec busybox-65db55d5d6-kllnh -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103057 -- exec busybox-65db55d5d6-pr2rn -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103057 -- exec busybox-65db55d5d6-kllnh -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103057 -- exec busybox-65db55d5d6-pr2rn -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103057 -- exec busybox-65db55d5d6-kllnh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103057 -- exec busybox-65db55d5d6-pr2rn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.37s)

+ TestMultiNode/serial/PingHostFrom2Pods (0.95s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103057 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103057 -- exec busybox-65db55d5d6-kllnh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103057 -- exec busybox-65db55d5d6-kllnh -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103057 -- exec busybox-65db55d5d6-pr2rn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103057 -- exec busybox-65db55d5d6-pr2rn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)

+ TestMultiNode/serial/AddNode (60.82s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-103057 -v 3 --alsologtostderr
E0114 10:34:07.069237   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-103057 -v 3 --alsologtostderr: (1m0.222279247s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (60.82s)

+ TestMultiNode/serial/ProfileList (0.24s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

+ TestMultiNode/serial/CopyFile (8.11s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 cp testdata/cp-test.txt multinode-103057:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 cp multinode-103057:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile122000101/001/cp-test_multinode-103057.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 cp multinode-103057:/home/docker/cp-test.txt multinode-103057-m02:/home/docker/cp-test_multinode-103057_multinode-103057-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057-m02 "sudo cat /home/docker/cp-test_multinode-103057_multinode-103057-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 cp multinode-103057:/home/docker/cp-test.txt multinode-103057-m03:/home/docker/cp-test_multinode-103057_multinode-103057-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057-m03 "sudo cat /home/docker/cp-test_multinode-103057_multinode-103057-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 cp testdata/cp-test.txt multinode-103057-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 cp multinode-103057-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile122000101/001/cp-test_multinode-103057-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 cp multinode-103057-m02:/home/docker/cp-test.txt multinode-103057:/home/docker/cp-test_multinode-103057-m02_multinode-103057.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057 "sudo cat /home/docker/cp-test_multinode-103057-m02_multinode-103057.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 cp multinode-103057-m02:/home/docker/cp-test.txt multinode-103057-m03:/home/docker/cp-test_multinode-103057-m02_multinode-103057-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057-m03 "sudo cat /home/docker/cp-test_multinode-103057-m02_multinode-103057-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 cp testdata/cp-test.txt multinode-103057-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 cp multinode-103057-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile122000101/001/cp-test_multinode-103057-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 cp multinode-103057-m03:/home/docker/cp-test.txt multinode-103057:/home/docker/cp-test_multinode-103057-m03_multinode-103057.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057 "sudo cat /home/docker/cp-test_multinode-103057-m03_multinode-103057.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 cp multinode-103057-m03:/home/docker/cp-test.txt multinode-103057-m02:/home/docker/cp-test_multinode-103057-m03_multinode-103057-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 ssh -n multinode-103057-m02 "sudo cat /home/docker/cp-test_multinode-103057-m03_multinode-103057-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.11s)

+ TestMultiNode/serial/StopNode (3.99s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-103057 node stop m03: (3.106438367s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-103057 status: exit status 7 (435.351217ms)
-- stdout --
	multinode-103057
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-103057-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-103057-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-103057 status --alsologtostderr: exit status 7 (443.423618ms)
-- stdout --
	multinode-103057
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-103057-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-103057-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0114 10:34:54.730866   22237 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:34:54.731166   22237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:34:54.731177   22237 out.go:309] Setting ErrFile to fd 2...
	I0114 10:34:54.731183   22237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:34:54.731362   22237 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-4002/.minikube/bin
	I0114 10:34:54.731585   22237 out.go:303] Setting JSON to false
	I0114 10:34:54.731614   22237 mustload.go:65] Loading cluster: multinode-103057
	I0114 10:34:54.731728   22237 notify.go:220] Checking for updates...
	I0114 10:34:54.732051   22237 config.go:180] Loaded profile config "multinode-103057": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 10:34:54.732070   22237 status.go:255] checking status of multinode-103057 ...
	I0114 10:34:54.732540   22237 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0114 10:34:54.732614   22237 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:34:54.748598   22237 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:39057
	I0114 10:34:54.749010   22237 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:34:54.749526   22237 main.go:134] libmachine: Using API Version  1
	I0114 10:34:54.749549   22237 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:34:54.749939   22237 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:34:54.750110   22237 main.go:134] libmachine: (multinode-103057) Calling .GetState
	I0114 10:34:54.751906   22237 status.go:330] multinode-103057 host status = "Running" (err=<nil>)
	I0114 10:34:54.751925   22237 host.go:66] Checking if "multinode-103057" exists ...
	I0114 10:34:54.752224   22237 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0114 10:34:54.752261   22237 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:34:54.767654   22237 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0114 10:34:54.768037   22237 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:34:54.768560   22237 main.go:134] libmachine: Using API Version  1
	I0114 10:34:54.768586   22237 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:34:54.768904   22237 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:34:54.769101   22237 main.go:134] libmachine: (multinode-103057) Calling .GetIP
	I0114 10:34:54.772026   22237 main.go:134] libmachine: (multinode-103057) DBG | domain multinode-103057 has defined MAC address 52:54:00:d4:04:d0 in network mk-multinode-103057
	I0114 10:34:54.772463   22237 main.go:134] libmachine: (multinode-103057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:04:d0", ip: ""} in network mk-multinode-103057: {Iface:virbr1 ExpiryTime:2023-01-14 11:31:11 +0000 UTC Type:0 Mac:52:54:00:d4:04:d0 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-103057 Clientid:01:52:54:00:d4:04:d0}
	I0114 10:34:54.772485   22237 main.go:134] libmachine: (multinode-103057) DBG | domain multinode-103057 has defined IP address 192.168.39.24 and MAC address 52:54:00:d4:04:d0 in network mk-multinode-103057
	I0114 10:34:54.772642   22237 host.go:66] Checking if "multinode-103057" exists ...
	I0114 10:34:54.772915   22237 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0114 10:34:54.772958   22237 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:34:54.788091   22237 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:41141
	I0114 10:34:54.788499   22237 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:34:54.788963   22237 main.go:134] libmachine: Using API Version  1
	I0114 10:34:54.788983   22237 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:34:54.789325   22237 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:34:54.789508   22237 main.go:134] libmachine: (multinode-103057) Calling .DriverName
	I0114 10:34:54.789696   22237 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:34:54.789720   22237 main.go:134] libmachine: (multinode-103057) Calling .GetSSHHostname
	I0114 10:34:54.792387   22237 main.go:134] libmachine: (multinode-103057) DBG | domain multinode-103057 has defined MAC address 52:54:00:d4:04:d0 in network mk-multinode-103057
	I0114 10:34:54.792807   22237 main.go:134] libmachine: (multinode-103057) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d4:04:d0", ip: ""} in network mk-multinode-103057: {Iface:virbr1 ExpiryTime:2023-01-14 11:31:11 +0000 UTC Type:0 Mac:52:54:00:d4:04:d0 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-103057 Clientid:01:52:54:00:d4:04:d0}
	I0114 10:34:54.792836   22237 main.go:134] libmachine: (multinode-103057) DBG | domain multinode-103057 has defined IP address 192.168.39.24 and MAC address 52:54:00:d4:04:d0 in network mk-multinode-103057
	I0114 10:34:54.792949   22237 main.go:134] libmachine: (multinode-103057) Calling .GetSSHPort
	I0114 10:34:54.793126   22237 main.go:134] libmachine: (multinode-103057) Calling .GetSSHKeyPath
	I0114 10:34:54.793267   22237 main.go:134] libmachine: (multinode-103057) Calling .GetSSHUsername
	I0114 10:34:54.793428   22237 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-4002/.minikube/machines/multinode-103057/id_rsa Username:docker}
	I0114 10:34:54.882990   22237 ssh_runner.go:195] Run: systemctl --version
	I0114 10:34:54.890422   22237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:34:54.902565   22237 kubeconfig.go:92] found "multinode-103057" server: "https://192.168.39.24:8443"
	I0114 10:34:54.902590   22237 api_server.go:165] Checking apiserver status ...
	I0114 10:34:54.902616   22237 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:34:54.913573   22237 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1702/cgroup
	I0114 10:34:54.921229   22237 api_server.go:181] apiserver freezer: "11:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff5daa6ba88f637d94d726a855fde47.slice/docker-8a55dd4db16a0e06c99a3cbdc8f414c4c9dc0496efba09dcea2ccc2639947f38.scope"
	I0114 10:34:54.921302   22237 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff5daa6ba88f637d94d726a855fde47.slice/docker-8a55dd4db16a0e06c99a3cbdc8f414c4c9dc0496efba09dcea2ccc2639947f38.scope/freezer.state
	I0114 10:34:54.930875   22237 api_server.go:203] freezer state: "THAWED"
	I0114 10:34:54.930901   22237 api_server.go:252] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I0114 10:34:54.936286   22237 api_server.go:278] https://192.168.39.24:8443/healthz returned 200:
	ok
	I0114 10:34:54.936307   22237 status.go:421] multinode-103057 apiserver status = Running (err=<nil>)
	I0114 10:34:54.936315   22237 status.go:257] multinode-103057 status: &{Name:multinode-103057 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0114 10:34:54.936331   22237 status.go:255] checking status of multinode-103057-m02 ...
	I0114 10:34:54.936649   22237 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0114 10:34:54.936696   22237 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:34:54.952149   22237 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:45183
	I0114 10:34:54.952585   22237 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:34:54.953085   22237 main.go:134] libmachine: Using API Version  1
	I0114 10:34:54.953117   22237 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:34:54.953401   22237 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:34:54.953586   22237 main.go:134] libmachine: (multinode-103057-m02) Calling .GetState
	I0114 10:34:54.955135   22237 status.go:330] multinode-103057-m02 host status = "Running" (err=<nil>)
	I0114 10:34:54.955149   22237 host.go:66] Checking if "multinode-103057-m02" exists ...
	I0114 10:34:54.955466   22237 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0114 10:34:54.955508   22237 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:34:54.970334   22237 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:44039
	I0114 10:34:54.970735   22237 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:34:54.971126   22237 main.go:134] libmachine: Using API Version  1
	I0114 10:34:54.971148   22237 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:34:54.971472   22237 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:34:54.971658   22237 main.go:134] libmachine: (multinode-103057-m02) Calling .GetIP
	I0114 10:34:54.974568   22237 main.go:134] libmachine: (multinode-103057-m02) DBG | domain multinode-103057-m02 has defined MAC address 52:54:00:94:ae:d9 in network mk-multinode-103057
	I0114 10:34:54.974998   22237 main.go:134] libmachine: (multinode-103057-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ae:d9", ip: ""} in network mk-multinode-103057: {Iface:virbr1 ExpiryTime:2023-01-14 11:32:38 +0000 UTC Type:0 Mac:52:54:00:94:ae:d9 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-103057-m02 Clientid:01:52:54:00:94:ae:d9}
	I0114 10:34:54.975039   22237 main.go:134] libmachine: (multinode-103057-m02) DBG | domain multinode-103057-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:ae:d9 in network mk-multinode-103057
	I0114 10:34:54.975193   22237 host.go:66] Checking if "multinode-103057-m02" exists ...
	I0114 10:34:54.975542   22237 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0114 10:34:54.975587   22237 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:34:54.990157   22237 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:36823
	I0114 10:34:54.990533   22237 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:34:54.990963   22237 main.go:134] libmachine: Using API Version  1
	I0114 10:34:54.990988   22237 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:34:54.991283   22237 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:34:54.991463   22237 main.go:134] libmachine: (multinode-103057-m02) Calling .DriverName
	I0114 10:34:54.991625   22237 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:34:54.991642   22237 main.go:134] libmachine: (multinode-103057-m02) Calling .GetSSHHostname
	I0114 10:34:54.994318   22237 main.go:134] libmachine: (multinode-103057-m02) DBG | domain multinode-103057-m02 has defined MAC address 52:54:00:94:ae:d9 in network mk-multinode-103057
	I0114 10:34:54.994707   22237 main.go:134] libmachine: (multinode-103057-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:ae:d9", ip: ""} in network mk-multinode-103057: {Iface:virbr1 ExpiryTime:2023-01-14 11:32:38 +0000 UTC Type:0 Mac:52:54:00:94:ae:d9 Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-103057-m02 Clientid:01:52:54:00:94:ae:d9}
	I0114 10:34:54.994740   22237 main.go:134] libmachine: (multinode-103057-m02) DBG | domain multinode-103057-m02 has defined IP address 192.168.39.160 and MAC address 52:54:00:94:ae:d9 in network mk-multinode-103057
	I0114 10:34:54.994865   22237 main.go:134] libmachine: (multinode-103057-m02) Calling .GetSSHPort
	I0114 10:34:54.995019   22237 main.go:134] libmachine: (multinode-103057-m02) Calling .GetSSHKeyPath
	I0114 10:34:54.995168   22237 main.go:134] libmachine: (multinode-103057-m02) Calling .GetSSHUsername
	I0114 10:34:54.995316   22237 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-4002/.minikube/machines/multinode-103057-m02/id_rsa Username:docker}
	I0114 10:34:55.078265   22237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:34:55.090648   22237 status.go:257] multinode-103057-m02 status: &{Name:multinode-103057-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0114 10:34:55.090686   22237 status.go:255] checking status of multinode-103057-m03 ...
	I0114 10:34:55.091038   22237 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0114 10:34:55.091088   22237 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:34:55.105897   22237 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:43945
	I0114 10:34:55.106317   22237 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:34:55.106791   22237 main.go:134] libmachine: Using API Version  1
	I0114 10:34:55.106817   22237 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:34:55.107088   22237 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:34:55.107273   22237 main.go:134] libmachine: (multinode-103057-m03) Calling .GetState
	I0114 10:34:55.108796   22237 status.go:330] multinode-103057-m03 host status = "Stopped" (err=<nil>)
	I0114 10:34:55.108812   22237 status.go:343] host is not running, skipping remaining checks
	I0114 10:34:55.108820   22237 status.go:257] multinode-103057-m03 status: &{Name:multinode-103057-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.99s)
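
The apiserver check traced in the stderr above (pgrep, freezer cgroup state, then /healthz) ends in a plain HTTPS probe. A minimal standalone sketch of that last step, assuming only the host and port from the log, and that certificate verification must be skipped because the apiserver's serving cert is signed by the cluster's own CA rather than a system root:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The serving cert is not in the system trust store, so a
				// standalone probe has to skip verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.24:8443/healthz")
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // this run logged "200: ok"
	}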

TestMultiNode/serial/StartAfterStop (31.39s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-103057 node start m03 --alsologtostderr: (30.747042441s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.39s)

TestMultiNode/serial/RestartKeepsNodes (879.48s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-103057
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-103057
E0114 10:35:30.117739   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:35:40.543510   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-103057: (18.463880798s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-103057 --wait=true -v=8 --alsologtostderr
E0114 10:36:08.228863   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:37:34.193949   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:39:07.069316   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:40:40.544479   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:42:34.193980   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:43:57.242143   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:44:07.069283   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:45:40.543996   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:47:03.589253   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:47:34.193984   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:49:07.069023   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-103057 --wait=true -v=8 --alsologtostderr: (14m20.874813465s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-103057
--- PASS: TestMultiNode/serial/RestartKeepsNodes (879.48s)

TestMultiNode/serial/DeleteNode (3.84s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-103057 node delete m03: (3.293436512s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 status --alsologtostderr
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.84s)
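
The go-template passed to kubectl in the step above walks every node's conditions and prints the status of the Ready condition. A self-contained sketch that evaluates the same template with Go's text/template over a hand-built stand-in for the `kubectl get nodes -o json` shape (the nested maps below are illustrative, not real API output):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// The template string exactly as passed to kubectl above.
		const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

		// Lowercase map keys match the .items/.status/.conditions/.type
		// lookups in the template.
		data := map[string]any{
			"items": []map[string]any{
				{"status": map[string]any{"conditions": []map[string]any{
					{"type": "MemoryPressure", "status": "False"},
					{"type": "Ready", "status": "True"},
				}}},
			},
		}
		t := template.Must(template.New("ready").Parse(tmpl))
		if err := t.Execute(os.Stdout, data); err != nil { // prints " True"
			panic(err)
		}
	}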

TestMultiNode/serial/StopMultiNode (15.37s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-103057 stop: (15.158155255s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-103057 status: exit status 7 (106.001722ms)

-- stdout --
	multinode-103057
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-103057-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-103057 status --alsologtostderr: exit status 7 (105.280202ms)

-- stdout --
	multinode-103057
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-103057-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0114 10:50:25.151544   23299 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:50:25.151716   23299 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:50:25.151726   23299 out.go:309] Setting ErrFile to fd 2...
	I0114 10:50:25.151738   23299 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:50:25.151846   23299 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-4002/.minikube/bin
	I0114 10:50:25.152017   23299 out.go:303] Setting JSON to false
	I0114 10:50:25.152041   23299 mustload.go:65] Loading cluster: multinode-103057
	I0114 10:50:25.152129   23299 notify.go:220] Checking for updates...
	I0114 10:50:25.152418   23299 config.go:180] Loaded profile config "multinode-103057": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I0114 10:50:25.152433   23299 status.go:255] checking status of multinode-103057 ...
	I0114 10:50:25.152761   23299 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0114 10:50:25.152818   23299 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:50:25.168650   23299 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:38321
	I0114 10:50:25.168973   23299 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:50:25.169483   23299 main.go:134] libmachine: Using API Version  1
	I0114 10:50:25.169511   23299 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:50:25.169827   23299 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:50:25.170024   23299 main.go:134] libmachine: (multinode-103057) Calling .GetState
	I0114 10:50:25.171662   23299 status.go:330] multinode-103057 host status = "Stopped" (err=<nil>)
	I0114 10:50:25.171679   23299 status.go:343] host is not running, skipping remaining checks
	I0114 10:50:25.171686   23299 status.go:257] multinode-103057 status: &{Name:multinode-103057 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0114 10:50:25.171700   23299 status.go:255] checking status of multinode-103057-m02 ...
	I0114 10:50:25.171962   23299 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0114 10:50:25.172009   23299 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:50:25.186521   23299 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:33587
	I0114 10:50:25.186868   23299 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:50:25.187313   23299 main.go:134] libmachine: Using API Version  1
	I0114 10:50:25.187334   23299 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:50:25.187685   23299 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:50:25.187879   23299 main.go:134] libmachine: (multinode-103057-m02) Calling .GetState
	I0114 10:50:25.189298   23299 status.go:330] multinode-103057-m02 host status = "Stopped" (err=<nil>)
	I0114 10:50:25.189315   23299 status.go:343] host is not running, skipping remaining checks
	I0114 10:50:25.189322   23299 status.go:257] multinode-103057-m02 status: &{Name:multinode-103057-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (15.37s)
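
The `&{Name:... Host:Stopped ...}` lines in the stderr dumps are Go's %+v rendering of a status struct. A stand-in with field names read off those dumps (an illustrative type, not minikube's actual one) reproduces the format:

	package main

	import "fmt"

	// Status mirrors only the fields visible in the dumps above.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
		TimeToStop string
		DockerEnv  string
		PodManEnv  string
	}

	func main() {
		s := &Status{
			Name: "multinode-103057-m02", Host: "Stopped", Kubelet: "Stopped",
			APIServer: "Stopped", Kubeconfig: "Stopped", Worker: true,
		}
		fmt.Printf("%+v\n", s) // &{Name:multinode-103057-m02 Host:Stopped ...}
	}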

TestMultiNode/serial/RestartMultiNode (595.13s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-103057 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0114 10:50:40.542994   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:52:10.118446   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:52:34.194781   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:54:07.068664   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
E0114 10:55:40.543410   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 10:57:34.194571   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 10:59:07.069645   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-103057 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (9m54.568897276s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103057 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (595.13s)

TestPreload (195.5s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-110103 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0114 11:02:34.194608   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-110103 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (2m2.406525024s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-110103 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-110103 -- docker pull gcr.io/k8s-minikube/busybox: (2.076936807s)
preload_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-110103 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --kubernetes-version=v1.24.6
E0114 11:03:43.591671   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 11:04:07.068814   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
preload_test.go:67: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-110103 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --kubernetes-version=v1.24.6: (1m9.645123654s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-110103 -- docker images
helpers_test.go:175: Cleaning up "test-preload-110103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-110103
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-110103: (1.137153534s)
--- PASS: TestPreload (195.50s)

TestScheduledStopUnix (125.18s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-110418 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-110418 --memory=2048 --driver=kvm2 : (53.351556203s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-110418 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-110418 -n scheduled-stop-110418
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-110418 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-110418 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-110418 -n scheduled-stop-110418
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-110418
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-110418 --schedule 15s
E0114 11:05:40.543549   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-110418
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-110418: exit status 7 (83.975291ms)

-- stdout --
	scheduled-stop-110418
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-110418 -n scheduled-stop-110418
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-110418 -n scheduled-stop-110418: exit status 7 (82.389312ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-110418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-110418
--- PASS: TestScheduledStopUnix (125.18s)
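
The test drives a schedule/reschedule/cancel cycle; the "process already finished" lines show the real implementation daemonizes a helper process and signals it. As a simplified model only (hypothetical code, not minikube's), the same replace-or-cancel semantics can be sketched with a guarded time.AfterFunc:

	package main

	import (
		"fmt"
		"sync"
		"time"
	)

	// scheduler models the scheduled stop as a single replaceable timer.
	type scheduler struct {
		mu    sync.Mutex
		timer *time.Timer
	}

	// Schedule replaces any pending stop, like repeated `stop --schedule`.
	func (s *scheduler) Schedule(d time.Duration, stop func()) {
		s.mu.Lock()
		defer s.mu.Unlock()
		if s.timer != nil {
			s.timer.Stop()
		}
		s.timer = time.AfterFunc(d, stop)
	}

	// Cancel aborts a pending stop, like `stop --cancel-scheduled`.
	func (s *scheduler) Cancel() {
		s.mu.Lock()
		defer s.mu.Unlock()
		if s.timer != nil {
			s.timer.Stop()
			s.timer = nil
		}
	}

	func main() {
		var s scheduler
		s.Schedule(5*time.Minute, func() { fmt.Println("stopping cluster") })
		s.Schedule(100*time.Millisecond, func() { fmt.Println("stopping cluster") })
		s.Cancel()
		time.Sleep(200 * time.Millisecond) // nothing fires after Cancel
		fmt.Println("no stop fired")
	}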

TestSkaffold (88.91s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2962011812 version
skaffold_test.go:63: skaffold version: v2.0.4
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-110623 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-110623 --memory=2600 --driver=kvm2 : (53.680219991s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2962011812 run --minikube-profile skaffold-110623 --kube-context skaffold-110623 --status-check=true --port-forward=false --interactive=false
E0114 11:07:34.194471   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2962011812 run --minikube-profile skaffold-110623 --kube-context skaffold-110623 --status-check=true --port-forward=false --interactive=false: (21.518700673s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-5766d98b54-tpgvr" [29cb3bb8-365a-4f59-bd72-88c8851998f8] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.011363565s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-5b775f5bf-z2sdb" [38057dab-8e3f-4db0-bc52-3d08f4791b6e] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006904907s
helpers_test.go:175: Cleaning up "skaffold-110623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-110623
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-110623: (1.055990159s)
--- PASS: TestSkaffold (88.91s)

TestRunningBinaryUpgrade (198.46s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.6.2.1867324278.exe start -p running-upgrade-111034 --memory=2200 --vm-driver=kvm2 
E0114 11:10:40.544040   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.6.2.1867324278.exe start -p running-upgrade-111034 --memory=2200 --vm-driver=kvm2 : (2m13.370255699s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-111034 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0114 11:12:51.842559   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-111034 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m1.731849172s)
helpers_test.go:175: Cleaning up "running-upgrade-111034" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-111034
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-111034: (1.375382508s)
--- PASS: TestRunningBinaryUpgrade (198.46s)

TestKubernetesUpgrade (199.17s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-111136 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-111136 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m11.871839024s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-111136

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-111136: (3.56760765s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-111136 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-111136 status --format={{.Host}}: exit status 7 (146.382232ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-111136 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2 
E0114 11:13:02.082854   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory
E0114 11:13:22.563512   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-111136 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2 : (1m18.88449832s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-111136 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-111136 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-111136 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (119.372951ms)

-- stdout --
	* [kubernetes-upgrade-111136] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-4002/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-4002/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-111136
	    minikube start -p kubernetes-upgrade-111136 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1111362 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.3, by running:
	    
	    minikube start -p kubernetes-upgrade-111136 --kubernetes-version=v1.25.3
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-111136 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-111136 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2 : (43.196584878s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-111136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-111136
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-111136: (1.323667406s)
--- PASS: TestKubernetesUpgrade (199.17s)
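
The downgrade attempt fails fast with K8S_DOWNGRADE_UNSUPPORTED before any cluster change. A sketch of such a guard, assuming semantic-version comparison via golang.org/x/mod/semver (minikube's actual validation logic may differ):

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// checkVersionChange rejects requests that would move an existing
	// cluster to an older Kubernetes version.
	func checkVersionChange(existing, requested string) error {
		if semver.Compare(requested, existing) < 0 {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
				existing, requested)
		}
		return nil
	}

	func main() {
		fmt.Println(checkVersionChange("v1.25.3", "v1.16.0")) // downgrade: error
		fmt.Println(checkVersionChange("v1.16.0", "v1.25.3")) // upgrade: <nil>
	}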

TestStoppedBinaryUpgrade/Setup (1.61s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.61s)

TestStoppedBinaryUpgrade/Upgrade (178.42s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.6.2.1756672156.exe start -p stopped-upgrade-111246 --memory=2200 --vm-driver=kvm2 

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.6.2.1756672156.exe start -p stopped-upgrade-111246 --memory=2200 --vm-driver=kvm2 : (1m49.154544978s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.6.2.1756672156.exe -p stopped-upgrade-111246 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.6.2.1756672156.exe -p stopped-upgrade-111246 stop: (13.358702465s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-111246 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-111246 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (55.900746082s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (178.42s)

TestPause/serial/Start (83.34s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-111353 --memory=2048 --install-addons=false --wait=all --driver=kvm2 

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-111353 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m23.337898835s)
--- PASS: TestPause/serial/Start (83.34s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-111418 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-111418 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (92.524564ms)

-- stdout --
	* [NoKubernetes-111418] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-4002/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-4002/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (68.75s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-111418 --driver=kvm2 

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-111418 --driver=kvm2 : (1m8.469133513s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-111418 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (68.75s)

TestNetworkPlugins/group/auto/Start (110.64s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2 

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2 : (1m50.642846769s)
--- PASS: TestNetworkPlugins/group/auto/Start (110.64s)

TestPause/serial/SecondStartNoReconfiguration (79.03s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-111353 --alsologtostderr -v=1 --driver=kvm2 
E0114 11:15:25.444902   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-111353 --alsologtostderr -v=1 --driver=kvm2 : (1m19.000904834s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (79.03s)

TestNoKubernetes/serial/StartWithStopK8s (46.67s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-111418 --no-kubernetes --driver=kvm2 
E0114 11:15:40.543666   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-111418 --no-kubernetes --driver=kvm2 : (45.329044541s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-111418 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-111418 status -o json: exit status 2 (268.52089ms)

-- stdout --
	{"Name":"NoKubernetes-111418","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-111418
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-111418: (1.071183545s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (46.67s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-111246
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-111246: (1.318821385s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.32s)

TestNetworkPlugins/group/kindnet/Start (101.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2 

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2 : (1m41.147640387s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (101.15s)

TestNoKubernetes/serial/Start (39.39s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-111418 --no-kubernetes --driver=kvm2 
E0114 11:16:26.024269   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
E0114 11:16:26.029674   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
E0114 11:16:26.039967   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
E0114 11:16:26.060339   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
E0114 11:16:26.100633   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
E0114 11:16:26.181401   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
E0114 11:16:26.341874   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
E0114 11:16:26.662688   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
E0114 11:16:27.303300   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
E0114 11:16:28.583768   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
E0114 11:16:31.144925   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-111418 --no-kubernetes --driver=kvm2 : (39.392742757s)
--- PASS: TestNoKubernetes/serial/Start (39.39s)

TestPause/serial/Pause (1.16s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-111353 --alsologtostderr -v=5
E0114 11:16:36.265160   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-111353 --alsologtostderr -v=5: (1.157084833s)
--- PASS: TestPause/serial/Pause (1.16s)

TestPause/serial/VerifyStatus (0.34s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-111353 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-111353 --output=json --layout=cluster: exit status 2 (343.996157ms)

-- stdout --
	{"Name":"pause-111353","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-111353","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
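
The --layout=cluster output above is a single JSON document; note that StatusCode 418 doubles as the "Paused" state. A sketch that decodes it, with structs covering only the fields visible in this run (not minikube's full schema):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Component struct {
		Name       string
		StatusCode int
		StatusName string
	}

	type ClusterState struct {
		Name       string
		StatusCode int
		StatusName string
		Nodes      []struct {
			Name       string
			StatusCode int
			StatusName string
			Components map[string]Component
		}
	}

	func main() {
		raw := `{"Name":"pause-111353","StatusCode":418,"StatusName":"Paused",
		"Nodes":[{"Name":"pause-111353","StatusCode":200,"StatusName":"OK",
		"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
		"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
		var st ClusterState
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		// Prints: Paused Stopped
		fmt.Println(st.StatusName, st.Nodes[0].Components["kubelet"].StatusName)
	}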

TestPause/serial/Unpause (0.81s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-111353 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.81s)

TestPause/serial/PauseAgain (1.02s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-111353 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-111353 --alsologtostderr -v=5: (1.015928321s)
--- PASS: TestPause/serial/PauseAgain (1.02s)

TestPause/serial/DeletePaused (1.31s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-111353 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-111353 --alsologtostderr -v=5: (1.306077347s)
--- PASS: TestPause/serial/DeletePaused (1.31s)

TestPause/serial/VerifyDeletedResources (0.63s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.63s)

TestNetworkPlugins/group/cilium/Start (128.6s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2 

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2 : (2m8.599723435s)
--- PASS: TestNetworkPlugins/group/cilium/Start (128.60s)

TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-110752 "pgrep -a kubelet"
E0114 11:16:46.505885   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

TestNetworkPlugins/group/auto/NetCatPod (13.35s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-110752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-ctm7n" [8d29a0a5-6d16-4f3c-be81-e28a6ae5ee90] Pending
helpers_test.go:342: "netcat-5788d667bd-ctm7n" [8d29a0a5-6d16-4f3c-be81-e28a6ae5ee90] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-ctm7n" [8d29a0a5-6d16-4f3c-be81-e28a6ae5ee90] Running

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.007660071s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.35s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-111418 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-111418 "sudo systemctl is-active --quiet service kubelet": exit status 1 (240.723136ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
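
The check relies on systemctl's exit code: `systemctl is-active --quiet` exits 0 for an active unit and non-zero otherwise, and the status 3 surfaced through ssh above is systemd's code for an inactive unit. A sketch of the same check run locally (unit name assumed to be kubelet; requires a systemd host):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// systemd returns 3 for an inactive unit.
			fmt.Println("kubelet is not active; exit status:", ee.ExitCode())
			return
		}
		if err != nil {
			fmt.Println("could not run systemctl:", err)
			return
		}
		fmt.Println("kubelet is active")
	}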

TestNoKubernetes/serial/ProfileList (2.02s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.368735411s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.02s)

TestNoKubernetes/serial/Stop (2.13s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-111418
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-111418: (2.130360514s)
--- PASS: TestNoKubernetes/serial/Stop (2.13s)

TestNoKubernetes/serial/StartNoArgs (41.48s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-111418 --driver=kvm2 

=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-111418 --driver=kvm2 : (41.484358415s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (41.48s)

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-110752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (5.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.179609567s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.18s)
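
Note: this subtest passes despite the non-zero exit. Hairpin traffic (a pod reaching itself through its own service name) is not expected to work under every network plugin, and for those configurations net_test.go treats a failed nc as the correct outcome. A sketch of that inverted-expectation pattern, using hypothetical helper names rather than the test's actual code:

    package net_test

    import "testing"

    // checkHairpin inverts the assertion when hairpin traffic is not
    // expected to work for the plugin under test (hypothetical helper).
    func checkHairpin(t *testing.T, hairpinExpected bool, runNC func() error) {
    	err := runNC()
    	switch {
    	case hairpinExpected && err != nil:
    		t.Fatalf("expected hairpin connectivity, got: %v", err)
    	case !hairpinExpected && err == nil:
    		t.Fatal("expected the hairpin connection to fail, but it succeeded")
    	}
    }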

TestNetworkPlugins/group/calico/Start (373.97s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2 
E0114 11:17:06.986256   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
E0114 11:17:17.243269   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p calico-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2 : (6m13.968995663s)
--- PASS: TestNetworkPlugins/group/calico/Start (373.97s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-6s5wg" [ecc9cda9-804f-4432-8fff-f7f71bdae227] Running
E0114 11:17:34.193913   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.016189726s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-110752 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)
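
Note: KubeletFlags pulls the kubelet command line over SSH (pgrep -a prints each match with its full argv) so the test can assert the flags a given plugin requires. A hedged sketch of such a check; the expected substring is an assumption for illustration, not the test's real expectation table:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("out/minikube-linux-amd64", "ssh",
    		"-p", "kindnet-110752", "pgrep -a kubelet").Output()
    	if err != nil {
    		panic(err)
    	}
    	// Assumed expectation: a CNI-backed profile should run the
    	// kubelet with a CNI network plugin configured.
    	if !strings.Contains(string(out), "--network-plugin=cni") {
    		fmt.Println("kubelet is missing the expected network-plugin flag")
    		return
    	}
    	fmt.Println("kubelet flags look right")
    }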

TestNetworkPlugins/group/kindnet/NetCatPod (13.41s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-110752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-vc64l" [a850e80b-4be1-4d51-a492-3bbbf1bbafd9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-vc64l" [a850e80b-4be1-4d51-a492-3bbbf1bbafd9] Running
E0114 11:17:47.946979   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.013912878s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.41s)
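
Note: NetCatPod force-replaces the netcat Deployment and then polls the app=netcat pods until they are Running and Ready, as the helpers_test.go lines above show. The same wait can be driven from Go by shelling out to kubectl wait; a sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Block until the netcat pod reports Ready, or give up at the
    	// same 15m ceiling the test uses.
    	cmd := exec.Command("kubectl", "--context", "kindnet-110752",
    		"wait", "--for=condition=Ready", "pod",
    		"-l", "app=netcat", "--timeout=15m")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		panic(err)
    	}
    }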

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-111418 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-111418 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.588914ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)
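
Note: systemctl is-active exits 0 only when the unit is active and non-zero otherwise (3 here, relayed by minikube ssh as "Process exited with status 3" before it exits 1 itself), so the non-zero exit is the passing case: kubelet must not be running in a NoKubernetes profile. Surfacing that remote status in Go, as a sketch:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-amd64", "ssh",
    		"-p", "NoKubernetes-111418",
    		"sudo systemctl is-active --quiet service kubelet")
    	err := cmd.Run()
    	var ee *exec.ExitError
    	if errors.As(err, &ee) {
    		// Non-zero exit means kubelet is inactive: the desired state.
    		fmt.Println("kubelet not running, exit code:", ee.ExitCode())
    		return
    	}
    	if err == nil {
    		fmt.Println("FAIL: kubelet is still active")
    	}
    }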

TestNetworkPlugins/group/custom-flannel/Start (112.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E0114 11:17:41.601823   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/custom-flannel/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m52.406304092s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (112.41s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-110752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/false/Start (122.14s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p false-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=kvm2 
E0114 11:18:09.285412   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p false-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=kvm2 : (2m2.135835634s)
--- PASS: TestNetworkPlugins/group/false/Start (122.14s)

TestNetworkPlugins/group/cilium/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-tc7jb" [88ef7e22-440a-4686-baa0-2e4b79df82e5] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.034042735s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.04s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-110752 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.35s)

TestNetworkPlugins/group/cilium/NetCatPod (18.56s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-110752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context cilium-110752 replace --force -f testdata/netcat-deployment.yaml: (1.45003434s)
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-jrq44" [3452fcb5-5b2d-4c76-b33c-77a437c0041d] Pending
helpers_test.go:342: "netcat-5788d667bd-jrq44" [3452fcb5-5b2d-4c76-b33c-77a437c0041d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0114 11:19:07.068666   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-jrq44" [3452fcb5-5b2d-4c76-b33c-77a437c0041d] Running
E0114 11:19:09.867101   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 17.011803505s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (18.56s)

TestNetworkPlugins/group/cilium/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-110752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.26s)

TestNetworkPlugins/group/cilium/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.23s)

TestNetworkPlugins/group/cilium/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.21s)

TestNetworkPlugins/group/flannel/Start (83.48s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2 

=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p flannel-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2 : (1m23.477375428s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.48s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-110752 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context custom-flannel-110752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-r9np2" [87b6fe8e-8c90-43d3-82c2-78226500bd1b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-r9np2" [87b6fe8e-8c90-43d3-82c2-78226500bd1b] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.010332796s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.35s)

TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:169: (dbg) Run:  kubectl --context custom-flannel-110752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:188: (dbg) Run:  kubectl --context custom-flannel-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.23s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:238: (dbg) Run:  kubectl --context custom-flannel-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

TestNetworkPlugins/group/enable-default-cni/Start (82.03s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2 

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2 : (1m22.034214733s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.03s)

TestNetworkPlugins/group/false/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-110752 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.28s)

TestNetworkPlugins/group/false/NetCatPod (12.39s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-110752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-jv8zl" [788e1be1-92c7-4af6-ad1d-db4780143884] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-jv8zl" [788e1be1-92c7-4af6-ad1d-db4780143884] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.010455503s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.39s)

TestNetworkPlugins/group/false/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-110752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

TestNetworkPlugins/group/false/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

TestNetworkPlugins/group/false/HairPin (5.17s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.169568616s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.17s)

TestNetworkPlugins/group/bridge/Start (80.15s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2 
E0114 11:20:23.592311   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2 : (1m20.1505121s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.15s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-system" ...
helpers_test.go:342: "kube-flannel-ds-amd64-6qml9" [0c043733-e732-437a-9d5b-49a3140f1733] Running
E0114 11:20:40.543701   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
net_test.go:109: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.020328719s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-110752 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/flannel/NetCatPod (16.34s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context flannel-110752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-nf4tp" [1afbba7b-0c29-4b52-839e-e83ac7cec41b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-nf4tp" [1afbba7b-0c29-4b52-839e-e83ac7cec41b] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 16.010929545s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (16.34s)

TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:169: (dbg) Run:  kubectl --context flannel-110752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:188: (dbg) Run:  kubectl --context flannel-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:238: (dbg) Run:  kubectl --context flannel-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/kubenet/Start (81.74s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=kvm2 

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-110752 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=kvm2 : (1m21.737890332s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (81.74s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-110752 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-110752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-b45qm" [a51ac331-7fe5-4770-9083-fbc6cc028acf] Pending
helpers_test.go:342: "netcat-5788d667bd-b45qm" [a51ac331-7fe5-4770-9083-fbc6cc028acf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-b45qm" [a51ac331-7fe5-4770-9083-fbc6cc028acf] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.007690337s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.31s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-110752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestStartStop/group/old-k8s-version/serial/FirstStart (152.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-112123 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E0114 11:21:26.024105   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-112123 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m32.962191154s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (152.96s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-110752 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (15.43s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-110752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-gjhwh" [66f25c21-fdef-46bd-8d09-409d571492d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-gjhwh" [66f25c21-fdef-46bd-8d09-409d571492d7] Running
E0114 11:21:46.966201   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory
E0114 11:21:46.971506   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory
E0114 11:21:46.981780   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory
E0114 11:21:47.002097   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory
E0114 11:21:47.042843   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory
E0114 11:21:47.123272   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory
E0114 11:21:47.283684   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory
E0114 11:21:47.604847   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory
E0114 11:21:48.245195   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.007968496s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.43s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-110752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestStartStop/group/no-preload/serial/FirstStart (102.02s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-112150 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.25.3
E0114 11:21:52.086367   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory
E0114 11:21:53.707465   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
E0114 11:21:57.207297   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory
E0114 11:22:07.447894   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-112150 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.25.3: (1m42.01887677s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (102.02s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-110752 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kubenet/NetCatPod (15.40s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-110752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-tjnsd" [cf120efe-f0ac-4be1-b17b-315ea8611040] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0114 11:22:27.928614   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory
E0114 11:22:30.798690   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
E0114 11:22:30.804006   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
E0114 11:22:30.814370   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
E0114 11:22:30.834646   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
E0114 11:22:30.874982   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
E0114 11:22:30.955397   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
E0114 11:22:31.116041   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
E0114 11:22:31.436936   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
E0114 11:22:32.077210   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
E0114 11:22:33.357884   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-tjnsd" [cf120efe-f0ac-4be1-b17b-315ea8611040] Running
E0114 11:22:34.194403   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 11:22:35.919033   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 15.010106908s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (15.40s)

TestNetworkPlugins/group/kubenet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-110752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.21s)

TestNetworkPlugins/group/kubenet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.21s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-tgdc8" [4ac2c517-d4e0-4078-bee2-41aee205d483] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])

=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.018704218s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-110752 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (13.40s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-110752 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-r69mv" [2a5a7bda-1460-40a4-8302-fafcb7f4a966] Pending
helpers_test.go:342: "netcat-5788d667bd-r69mv" [2a5a7bda-1460-40a4-8302-fafcb7f4a966] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-r69mv" [2a5a7bda-1460-40a4-8302-fafcb7f4a966] Running

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.009111883s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.40s)

TestStartStop/group/no-preload/serial/DeployApp (9.50s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-112150 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [ee81fe72-8723-4890-98fb-4e350f0c6699] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:342: "busybox" [ee81fe72-8723-4890-98fb-4e350f0c6699] Running

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.032076347s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-112150 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.50s)
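
Note: DeployApp creates the busybox pod from testdata/busybox.yaml, waits up to 8m for it to become healthy, then reads "ulimit -n" inside the container to check the open-file limit the node was configured with. The probe, restated as a small Go sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("kubectl", "--context", "no-preload-112150",
    		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
    	if err != nil {
    		panic(err)
    	}
    	// The test shells out the same way and inspects this value.
    	fmt.Println("open-file limit in pod:", strings.TrimSpace(string(out)))
    }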

TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-110752 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-110752 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestStartStop/group/embed-certs/serial/FirstStart (77.28s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-112341 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-112341 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.25.3: (1m17.275591686s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (77.28s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-112150 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-112150 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/no-preload/serial/Stop (13.37s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-112150 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-112150 --alsologtostderr -v=3: (13.371195253s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.37s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (102.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-112344 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.25.3
E0114 11:23:49.628215   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
E0114 11:23:49.633469   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
E0114 11:23:49.643710   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
E0114 11:23:49.663970   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
E0114 11:23:49.704208   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
E0114 11:23:49.784535   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
E0114 11:23:49.944975   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
E0114 11:23:50.265131   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
E0114 11:23:50.905508   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
E0114 11:23:52.186172   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
E0114 11:23:52.721928   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
E0114 11:23:54.746703   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-112344 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.25.3: (1m42.191128727s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (102.19s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-112150 -n no-preload-112150
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-112150 -n no-preload-112150: exit status 7 (88.32949ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-112150 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
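
Note: --format={{.Host}} renders the status through a Go text/template, and exit status 7 is how the status command reports a stopped cluster, which the test explicitly tolerates ("may be ok"). A minimal sketch of that template mechanism; the struct is illustrative, not minikube's real status type:

    package main

    import (
    	"os"
    	"text/template"
    )

    // status is an illustrative stand-in for minikube's status struct.
    type status struct {
    	Host    string
    	Kubelet string
    }

    func main() {
    	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
    	// A stopped cluster renders just "Stopped", matching the stdout above.
    	_ = tmpl.Execute(os.Stdout, status{Host: "Stopped", Kubelet: "Stopped"})
    }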

TestStartStop/group/no-preload/serial/SecondStart (359.79s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-112150 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-112150 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.25.3: (5m59.450552463s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-112150 -n no-preload-112150
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (359.79s)
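
The nearly six-minute SecondStart here, against 1m42s for the preloaded default-k8s-diff-port FirstStart above, is consistent with --preload=false: without the preload tarball, the restarted VM has to pull every Kubernetes image again. A sketch of the post-restart check the test performs:

    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-112150    # expected: "Running", exit status 0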

TestStartStop/group/old-k8s-version/serial/DeployApp (10.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-112123 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [18278c5c-d550-45b1-9642-110096568731] Pending
helpers_test.go:342: "busybox" [18278c5c-d550-45b1-9642-110096568731] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0114 11:23:59.866898   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
helpers_test.go:342: "busybox" [18278c5c-d550-45b1-9642-110096568731] Running
E0114 11:24:07.069047   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.028708521s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-112123 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.46s)
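
DeployApp creates a pod from testdata/busybox.yaml, waits for the integration-test=busybox label to become healthy, then execs "ulimit -n" to prove the container accepts commands. The real manifest lives in the minikube repo; the sketch below is an approximation, with the image borrowed from the VerifyKubernetesImages output later in this run and the sleep command an assumption (save as busybox.yaml, a hypothetical filename):

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
      labels:
        integration-test: busybox
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]

    kubectl --context old-k8s-version-112123 create -f busybox.yaml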

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-112123 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-112123 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.03s)
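
EnableAddonWhileActive points the metrics-server addon at the unreachable fake.domain registry, then describes the deployment to confirm the override landed. A more direct probe of the rendered image (the jsonpath form is mine, not the test's):

    kubectl --context old-k8s-version-112123 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected: an image reference prefixed with fake.domain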

TestStartStop/group/old-k8s-version/serial/Stop (4.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-112123 --alsologtostderr -v=3
E0114 11:24:10.107431   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-112123 --alsologtostderr -v=3: (4.565250245s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.57s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-112123 -n old-k8s-version-112123
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-112123 -n old-k8s-version-112123: exit status 7 (101.952073ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-112123 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-112123 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4: (1.181225482s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.28s)

TestStartStop/group/old-k8s-version/serial/SecondStart (111.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-112123 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E0114 11:24:30.587786   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
E0114 11:24:30.809758   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory
E0114 11:24:33.459272   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:24:33.464704   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:24:33.474986   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:24:33.495269   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:24:33.535531   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:24:33.615833   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:24:33.776517   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:24:34.097180   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:24:34.737882   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:24:36.018355   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:24:38.579368   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:24:43.699667   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:24:53.837411   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory
E0114 11:24:53.842738   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory
E0114 11:24:53.852981   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory
E0114 11:24:53.873286   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory
E0114 11:24:53.913644   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory
E0114 11:24:53.939879   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:24:53.994631   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory
E0114 11:24:54.154978   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory
E0114 11:24:54.475551   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory
E0114 11:24:55.116236   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory
E0114 11:24:56.397042   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-112123 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (1m50.999951171s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-112123 -n old-k8s-version-112123
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (111.30s)

TestStartStop/group/embed-certs/serial/DeployApp (11.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-112341 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [f1e44db5-3e6b-4706-a523-1af51ee60d14] Pending
E0114 11:24:58.958129   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory
helpers_test.go:342: "busybox" [f1e44db5-3e6b-4706-a523-1af51ee60d14] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [f1e44db5-3e6b-4706-a523-1af51ee60d14] Running
E0114 11:25:04.078855   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.024981497s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-112341 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-112341 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-112341 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/embed-certs/serial/Stop (4.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-112341 --alsologtostderr -v=3
E0114 11:25:11.548970   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
E0114 11:25:14.319597   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory
E0114 11:25:14.420231   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:25:14.642332   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-112341 --alsologtostderr -v=3: (4.132298364s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (4.13s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-112341 -n embed-certs-112341
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-112341 -n embed-certs-112341: exit status 7 (101.231051ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-112341 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/SecondStart (333.3s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-112341 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-112341 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.25.3: (5m32.998764162s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-112341 -n embed-certs-112341
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (333.30s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-112344 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [254e0c6a-18b1-49fe-b7fa-1093cecc82d6] Pending
helpers_test.go:342: "busybox" [254e0c6a-18b1-49fe-b7fa-1093cecc82d6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0114 11:25:30.120389   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/addons-100636/client.crt: no such file or directory
helpers_test.go:342: "busybox" [254e0c6a-18b1-49fe-b7fa-1093cecc82d6] Running
E0114 11:25:34.800649   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.02220846s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-112344 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.45s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-112344 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-112344 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-112344 --alsologtostderr -v=3
E0114 11:25:39.197693   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/flannel-110752/client.crt: no such file or directory
E0114 11:25:39.203060   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/flannel-110752/client.crt: no such file or directory
E0114 11:25:39.213357   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/flannel-110752/client.crt: no such file or directory
E0114 11:25:39.233827   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/flannel-110752/client.crt: no such file or directory
E0114 11:25:39.274116   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/flannel-110752/client.crt: no such file or directory
E0114 11:25:39.354453   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/flannel-110752/client.crt: no such file or directory
E0114 11:25:39.514692   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/flannel-110752/client.crt: no such file or directory
E0114 11:25:39.834818   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/flannel-110752/client.crt: no such file or directory
E0114 11:25:40.475301   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/flannel-110752/client.crt: no such file or directory
E0114 11:25:40.543551   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/ingress-addon-legacy-102330/client.crt: no such file or directory
E0114 11:25:41.756487   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/flannel-110752/client.crt: no such file or directory
E0114 11:25:44.317037   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/flannel-110752/client.crt: no such file or directory
E0114 11:25:49.437511   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/flannel-110752/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-112344 --alsologtostderr -v=3: (13.219674253s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.22s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-112344 -n default-k8s-diff-port-112344
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-112344 -n default-k8s-diff-port-112344: exit status 7 (108.313633ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-112344 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (328.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-112344 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.25.3
E0114 11:25:55.380653   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:25:59.677679   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/flannel-110752/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-112344 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.25.3: (5m27.863875828s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-112344 -n default-k8s-diff-port-112344
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (328.18s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-84b68f675b-cmmjc" [0754ee3a-fc3a-4d00-b50f-b7543d6dddea] Running
E0114 11:26:10.142959   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/enable-default-cni-110752/client.crt: no such file or directory
E0114 11:26:10.148251   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/enable-default-cni-110752/client.crt: no such file or directory
E0114 11:26:10.158522   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/enable-default-cni-110752/client.crt: no such file or directory
E0114 11:26:10.178806   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/enable-default-cni-110752/client.crt: no such file or directory
E0114 11:26:10.219146   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/enable-default-cni-110752/client.crt: no such file or directory
E0114 11:26:10.299470   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/enable-default-cni-110752/client.crt: no such file or directory
E0114 11:26:10.460191   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/enable-default-cni-110752/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013554937s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-84b68f675b-cmmjc" [0754ee3a-fc3a-4d00-b50f-b7543d6dddea] Running
E0114 11:26:10.780547   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/enable-default-cni-110752/client.crt: no such file or directory
E0114 11:26:11.420711   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/enable-default-cni-110752/client.crt: no such file or directory
E0114 11:26:12.701578   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/enable-default-cni-110752/client.crt: no such file or directory
E0114 11:26:15.262080   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/enable-default-cni-110752/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007848898s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-112123 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)
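
UserAppExistsAfterStop and AddonExistsAfterStop reduce to the same wait: pods labeled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace must be Running after the restart, showing that both the user workload and the addon survived the stop/start cycle. The equivalent manual probe:

    kubectl --context old-k8s-version-112123 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard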

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-112123 "sudo crictl images -o json"
E0114 11:26:15.760828   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
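
VerifyKubernetesImages lists every image known to the container runtime over SSH and flags anything outside the expected minikube set; the busybox and gvisor-addon hits above are leftovers from earlier subtests, not errors. To render the same data as plain tags (jq on the calling host is an assumption):

    out/minikube-linux-amd64 ssh -p old-k8s-version-112123 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]'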

TestStartStop/group/old-k8s-version/serial/Pause (2.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-112123 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-112123 -n old-k8s-version-112123
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-112123 -n old-k8s-version-112123: exit status 2 (271.951627ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-112123 -n old-k8s-version-112123
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-112123 -n old-k8s-version-112123: exit status 2 (277.07524ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-112123 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-112123 -n old-k8s-version-112123
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-112123 -n old-k8s-version-112123
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.77s)
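
Pause exercises the freeze path: while paused, the status probes intentionally fail with exit status 2 ({{.APIServer}} reports Paused, {{.Kubelet}} reports Stopped), and unpause must bring both back. The full round trip with commands from this run (the post-unpause output is an assumption):

    out/minikube-linux-amd64 pause -p old-k8s-version-112123
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-112123    # "Paused", exit status 2
    out/minikube-linux-amd64 unpause -p old-k8s-version-112123
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-112123    # "Running", exit status 0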

TestStartStop/group/newest-cni/serial/FirstStart (78.04s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-112629 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --kubernetes-version=v1.25.3
E0114 11:26:30.623422   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/enable-default-cni-110752/client.crt: no such file or directory
E0114 11:26:33.381598   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
E0114 11:26:33.386893   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
E0114 11:26:33.397222   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
E0114 11:26:33.417559   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
E0114 11:26:33.457844   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
E0114 11:26:33.469115   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/cilium-110752/client.crt: no such file or directory
E0114 11:26:33.538337   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
E0114 11:26:33.699342   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
E0114 11:26:34.020004   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
E0114 11:26:34.661148   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
E0114 11:26:35.942175   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
E0114 11:26:38.503125   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
E0114 11:26:43.624086   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
E0114 11:26:46.965881   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory
E0114 11:26:51.104492   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/enable-default-cni-110752/client.crt: no such file or directory
E0114 11:26:53.865284   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
E0114 11:27:01.119494   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/flannel-110752/client.crt: no such file or directory
E0114 11:27:14.346343   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
E0114 11:27:14.650566   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/auto-110752/client.crt: no such file or directory
E0114 11:27:17.301423   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
E0114 11:27:24.993538   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kubenet-110752/client.crt: no such file or directory
E0114 11:27:24.998769   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kubenet-110752/client.crt: no such file or directory
E0114 11:27:25.009022   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kubenet-110752/client.crt: no such file or directory
E0114 11:27:25.030087   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kubenet-110752/client.crt: no such file or directory
E0114 11:27:25.070387   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kubenet-110752/client.crt: no such file or directory
E0114 11:27:25.150658   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kubenet-110752/client.crt: no such file or directory
E0114 11:27:25.311112   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kubenet-110752/client.crt: no such file or directory
E0114 11:27:25.631225   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kubenet-110752/client.crt: no such file or directory
E0114 11:27:26.272195   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kubenet-110752/client.crt: no such file or directory
E0114 11:27:27.553366   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kubenet-110752/client.crt: no such file or directory
E0114 11:27:30.113881   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kubenet-110752/client.crt: no such file or directory
E0114 11:27:30.798001   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
E0114 11:27:32.064988   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/enable-default-cni-110752/client.crt: no such file or directory
E0114 11:27:34.194156   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/functional-101929/client.crt: no such file or directory
E0114 11:27:35.234663   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kubenet-110752/client.crt: no such file or directory
E0114 11:27:37.681604   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/false-110752/client.crt: no such file or directory
E0114 11:27:41.601606   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/skaffold-110623/client.crt: no such file or directory
E0114 11:27:45.475572   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kubenet-110752/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-112629 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --kubernetes-version=v1.25.3: (1m18.040742739s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (78.04s)
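
newest-cni starts with --network-plugin=cni but never installs a CNI manifest, which is why the DeployApp, UserAppExistsAfterStop, and AddonExistsAfterStop steps below are skipped with "cni mode requires additional setup" warnings: without a network plugin, pods cannot schedule. If one actually wanted pods on this profile, a CNI would have to be applied first, for example (flannel chosen arbitrarily; URL assumed current):

    kubectl --context newest-cni-112629 apply \
      -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml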

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-112629 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/newest-cni/serial/Stop (4.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-112629 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-112629 --alsologtostderr -v=3: (4.120972491s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (4.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-112629 -n newest-cni-112629
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-112629 -n newest-cni-112629: exit status 7 (94.592508ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-112629 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (40.39s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-112629 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --kubernetes-version=v1.25.3
E0114 11:27:55.306994   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
E0114 11:27:58.482474   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kindnet-110752/client.crt: no such file or directory
E0114 11:28:05.956630   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kubenet-110752/client.crt: no such file or directory
E0114 11:28:20.759344   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/calico-110752/client.crt: no such file or directory
E0114 11:28:20.764577   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/calico-110752/client.crt: no such file or directory
E0114 11:28:20.774865   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/calico-110752/client.crt: no such file or directory
E0114 11:28:20.795223   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/calico-110752/client.crt: no such file or directory
E0114 11:28:20.836273   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/calico-110752/client.crt: no such file or directory
E0114 11:28:20.916630   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/calico-110752/client.crt: no such file or directory
E0114 11:28:21.077120   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/calico-110752/client.crt: no such file or directory
E0114 11:28:21.397612   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/calico-110752/client.crt: no such file or directory
E0114 11:28:22.038123   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/calico-110752/client.crt: no such file or directory
E0114 11:28:23.040258   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/flannel-110752/client.crt: no such file or directory
E0114 11:28:23.318500   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/calico-110752/client.crt: no such file or directory
E0114 11:28:25.879401   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/calico-110752/client.crt: no such file or directory
E0114 11:28:31.000539   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/calico-110752/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-112629 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --kubernetes-version=v1.25.3: (40.065715627s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-112629 -n newest-cni-112629
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.39s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-112629 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (2.55s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-112629 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-112629 -n newest-cni-112629
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-112629 -n newest-cni-112629: exit status 2 (264.239649ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-112629 -n newest-cni-112629
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-112629 -n newest-cni-112629: exit status 2 (270.020194ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-112629 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-112629 -n newest-cni-112629
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-112629 -n newest-cni-112629
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.55s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-7gmcf" [c0d2c922-1626-4e85-856f-46f1c1d3ce52] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0114 11:30:01.141674   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/custom-flannel-110752/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-7gmcf" [c0d2c922-1626-4e85-856f-46f1c1d3ce52] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.016228856s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-7gmcf" [c0d2c922-1626-4e85-856f-46f1c1d3ce52] Running
E0114 11:30:08.838183   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/kubenet-110752/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008317944s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-112150 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-112150 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (2.9s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-112150 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-112150 -n no-preload-112150
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-112150 -n no-preload-112150: exit status 2 (293.196339ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-112150 -n no-preload-112150
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-112150 -n no-preload-112150: exit status 2 (275.795845ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-112150 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-112150 -n no-preload-112150
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-112150 -n no-preload-112150
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.90s)
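
Note: the Pause sequence above encodes a detail worth knowing: while a profile is paused, "minikube status" deliberately exits with status 2 and reports the API server as Paused and the kubelet as Stopped, and the test accepts that as expected ("may be ok"). A small sketch of the same pause, status, unpause round trip, reusing the binary path and profile name from this log (error handling trimmed for brevity):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// run executes the minikube binary and returns trimmed output plus the exit code.
func run(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	run("pause", "-p", "no-preload-112150")

	// While paused, status exits 2; the test treats that as "may be ok".
	api, code := run("status", "--format={{.APIServer}}", "-p", "no-preload-112150")
	fmt.Printf("APIServer=%s exit=%d\n", api, code) // expect: Paused, exit 2

	kubelet, code := run("status", "--format={{.Kubelet}}", "-p", "no-preload-112150")
	fmt.Printf("Kubelet=%s exit=%d\n", kubelet, code) // expect: Stopped, exit 2

	run("unpause", "-p", "no-preload-112150")
}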

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-cxmnr" [9a369d59-ab11-45c6-9596-b8a25e15d877] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-cxmnr" [9a369d59-ab11-45c6-9596-b8a25e15d877] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.01567035s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-cxmnr" [9a369d59-ab11-45c6-9596-b8a25e15d877] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007883113s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-112341 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-112341 "sudo crictl images -o json"
E0114 11:31:04.603381   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/calico-110752/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.54s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-112341 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-112341 -n embed-certs-112341
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-112341 -n embed-certs-112341: exit status 2 (260.543275ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-112341 -n embed-certs-112341
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-112341 -n embed-certs-112341: exit status 2 (253.773489ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-112341 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-112341 -n embed-certs-112341
E0114 11:31:06.880684   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/flannel-110752/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-112341 -n embed-certs-112341
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.54s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-qtqrh" [e5c83e86-b7f3-49e2-ad8a-49c6ef7327b9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-qtqrh" [e5c83e86-b7f3-49e2-ad8a-49c6ef7327b9] Running
E0114 11:31:26.024353   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/gvisor-110944/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.014109442s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-qtqrh" [e5c83e86-b7f3-49e2-ad8a-49c6ef7327b9] Running
E0114 11:31:33.380910   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/bridge-110752/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006834849s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-112344 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-112344 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-112344 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-112344 -n default-k8s-diff-port-112344
E0114 11:31:37.826109   10851 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-4002/.minikube/profiles/enable-default-cni-110752/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-112344 -n default-k8s-diff-port-112344: exit status 2 (252.251276ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-112344 -n default-k8s-diff-port-112344
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-112344 -n default-k8s-diff-port-112344: exit status 2 (253.150933ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-112344 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-112344 -n default-k8s-diff-port-112344
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-112344 -n default-k8s-diff-port-112344
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.52s)

                                                
                                    

Test skip (27/307)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.3/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.25.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:455: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect

=== CONT  TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig

=== CONT  TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel

=== CONT  TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:291: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-112344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-112344
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    