Kubernetes 24-Hour Test Report

ci-kubernetes-e2e-gci-gke-autoscaling

Passed  Failed  Avg Time (s)  Test
4       10      1033          Up
0       4       16417         Test
0       2       1038          [k8s.io] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling [k8s.io] Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
3       1       835           [k8s.io] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
0       1       0             AfterSuite
14      0       107           Deferred TearDown
14      0       17            Extract
14      0       8             ListResources Before
14      0       6             TearDown Previous
10      0       25            DumpClusterLogs (--up failed)
4       0       968           [k8s.io] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
4       0       997           [k8s.io] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
4       0       1581          [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
4       0       706           [k8s.io] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
4       0       496           [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
4       0       286           [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
4       0       1580          [k8s.io] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
4       0       0             DiffResources
4       0       34            DumpClusterLogs
4       0       0             get kubeconfig
4       0       0             IsUp
4       0       0             kubectl version
4       0       0             list nodes
4       0       9             ListResources After
4       0       9             ListResources Down
4       0       7             ListResources Up
4       0       178           TearDown
3       0       961           [k8s.io] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
3       0       1036          [k8s.io] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
3       0       843           [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
3       0       1098          [k8s.io] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
3       0       369           [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
3       0       284           [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
3       0       312           [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
3       0       299           [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
3       0       636           [k8s.io] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
3       0       201           [k8s.io] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
3       0       1231          [k8s.io] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
3       0       319           [k8s.io] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]