| Passed | Failed | Avg Time (s) | Test |
|-------:|-------:|-------------:|------|
| 0 | 3 | 2786 | [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] |
| 0 | 3 | 2609 | hpa-upgrade |
| 0 | 3 | 35177 | Test |
| 0 | 3 | 2910 | UpgradeTest |
| 0 | 1 | 7 | [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] |
| 0 | 1 | 7 | [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] |
| 0 | 1 | 13 | [k8s.io] Network should set TCP CLOSE_WAIT timeout |
| 0 | 1 | 43 | [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] [Volume] |
| 0 | 1 | 61 | [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects |
| 0 | 1 | 537 | [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] |
| 0 | 1 | 372 | [k8s.io] Services should work after restarting kube-proxy [Disruptive] |
| 0 | 1 | 0 | AfterSuite |
| 3 | 0 | 1200 | cluster-upgrade |
| 3 | 0 | 1670 | configmap-upgrade |
| 3 | 0 | 1662 | daemonset-upgrade |
| 3 | 0 | 0 | Deferred TearDown |
| 3 | 0 | 1664 | deployment-upgrade |
| 3 | 0 | 0 | DiffResources |
| 3 | 0 | 9 | DumpClusterLogs |
| 3 | 0 | 70 | Extract |
| 3 | 0 | 0 | get kubeconfig |
| 3 | 0 | 1728 | ingress-upgrade |
| 3 | 0 | 0 | IsUp |
| 3 | 0 | 1662 | job-upgrade |
| 3 | 0 | 1 | kubectl version |
| 3 | 0 | 0 | list nodes |
| 3 | 0 | 7 | ListResources After |
| 3 | 0 | 6 | ListResources Before |
| 3 | 0 | 8 | ListResources Down |
| 3 | 0 | 6 | ListResources Up |
| 3 | 0 | 1696 | persistent-volume-upgrade |
| 3 | 0 | 1670 | secret-upgrade |
| 3 | 0 | 1662 | service-upgrade |
| 3 | 0 | 1664 | statefulset-upgrade |
| 3 | 0 | 182 | TearDown |
| 3 | 0 | 9 | TearDown Previous |
| 3 | 0 | 409 | Up |
| 1 | 0 | 17 | [k8s.io] Certificates API should support building a client with a CSR |
| 1 | 0 | 9 | [k8s.io] ConfigMap should be consumable via the environment [Conformance] |
| 1 | 0 | 39 | [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios |
| 1 | 0 | 29 | [k8s.io] Downward API volume should update annotations on modification [Conformance] [Volume] |
| 1 | 0 | 9 | [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] [Volume] |
| 1 | 0 | 17 | [k8s.io] EmptyDir wrapper volumes should not conflict [Volume] |
| 1 | 0 | 11 | [k8s.io] Job should delete a job |
| 1 | 0 | 1451 | [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node |
| 1 | 0 | 758 | [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive] |
| 1 | 0 | 83 | [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] |
| 1 | 0 | 7 | [k8s.io] Proxy version v1 should proxy logs on node [Conformance] |
| 1 | 0 | 108 | [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover |
| 1 | 0 | 89 | [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] |
| 1 | 0 | 84 | [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching |
| 1 | 0 | 88 | [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] |
| 1 | 0 | 9 | [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] [Volume] |
| 1 | 0 | 108 | [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails |