| Passed | Failed | Avg Time (s) | Test |
|---|---|---|---|
| 2 | 4 | 13931 | Test |
| 3 | 3 | 1002 | [k8s.io] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling [k8s.io] Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes |
| 5 | 1 | 1296 | [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown] |
| 5 | 1 | 404 | [k8s.io] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp] |
| 6 | 0 | 1624 | [k8s.io] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp] |
| 6 | 0 | 1065 | [k8s.io] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown] |
| 6 | 0 | 1033 | [k8s.io] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown] |
| 6 | 0 | 1040 | [k8s.io] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown] |
| 6 | 0 | 933 | [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown] |
| 6 | 0 | 469 | [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp] |
| 6 | 0 | 450 | [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp] |
| 6 | 0 | 361 | [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp] |
| 6 | 0 | 344 | [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp] |
| 6 | 0 | 328 | [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp] |
| 6 | 0 | 508 | [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp] |
| 6 | 0 | 1595 | [k8s.io] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown] |
| 6 | 0 | 198 | [k8s.io] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp] |
| 6 | 0 | 1176 | [k8s.io] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp] |
| 6 | 0 | 0 | Deferred TearDown |
| 6 | 0 | 0 | DiffResources |
| 6 | 0 | 16 | Extract |
| 6 | 0 | 0 | get kubeconfig |
| 6 | 0 | 0 | IsUp |
| 6 | 0 | 0 | kubectl version |
| 6 | 0 | 0 | list nodes |
| 6 | 0 | 9 | ListResources After |
| 6 | 0 | 7 | ListResources Before |
| 6 | 0 | 11 | ListResources Down |
| 6 | 0 | 8 | ListResources Up |
| 6 | 0 | 342 | TearDown |
| 6 | 0 | 20 | TearDown Previous |
| 6 | 0 | 349 | Up |
| 4 | 0 | 44 | DumpClusterLogs |