Kubernetes 24-Hour Test Report

ci-kubernetes-e2e-gce-gci-qa-serial-m54

Passed  Failed  Avg Time (s)  Test
     0       1          1207  [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node
     0       1          1476  [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node
     0       1          1284  [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node
     0       1         11145  Test
     1       0           432  [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
     1       0           713  [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
     1       0           624  [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
     1       0           722  [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
     1       0           910  [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
     1       0          1370  [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
     1       0            28  [k8s.io] Daemon set [Serial] should run and stop complex daemon
     1       0            25  [k8s.io] Daemon set [Serial] should run and stop simple daemon
     1       0            56  [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
     1       0           248  [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
     1       0            65  [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
     1       0            76  [k8s.io] Etcd failure [Disruptive] should recover from network partition with master
     1       0            69  [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL
     1       0           150  [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
     1       0            26  [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted.
     1       0             6  [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted.
     1       0           403  [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster
     1       0            49  [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout
     1       0           309  [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes
     1       0           328  [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes
     1       0            50  [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
     1       0           173  [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow]
     1       0            21  [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
     1       0            15  [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected
     1       0            16  [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid
     1       0             8  [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work
     1       0            16  [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
     1       0             7  [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
     1       0            15  [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
     1       0             7  [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
     1       0             9  [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
     1       0            18  [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
     1       0            90  [k8s.io] Services should work after restarting apiserver [Disruptive]
     1       0            87  [k8s.io] Services should work after restarting kube-proxy [Disruptive]
     1       0             0  Deferred TearDown
     1       0             0  DiffResources
     1       0            18  DumpClusterLogs
     1       0            46  Extract
     1       0             0  get kubeconfig
     1       0             0  IsUp
     1       0             0  kubectl version
     1       0             0  list nodes
     1       0             6  ListResources After
     1       0             5  ListResources Before
     1       0             6  ListResources Down
     1       0             6  ListResources Up
     1       0           334  TearDown
     1       0            63  TearDown Previous
     1       0           288  Up
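
The rows without a "[k8s.io]" prefix (Extract, Up, IsUp, TearDown, and so on) are lifecycle phases of the test runner rather than e2e cases, and the bare "Test" row is the phase that wraps the whole suite, which is why it shows as failed alongside the three failing Kubelet resource-usage tracking cases.

Below is a minimal sketch of how rows in this "Passed Failed Avg Time (s) Test" layout can be post-processed to pull out just the failures. It assumes the report has been saved verbatim to a local file; the path report.txt is a placeholder, not part of this report.

    #!/usr/bin/env python3
    """Sketch: list failing rows from a "Passed Failed Avg Time (s) Test" report."""

    def rows(lines):
        """Yield (passed, failed, avg_seconds, test_name) for each data row."""
        for line in lines:
            fields = line.split(None, 3)  # test names contain spaces, so split at most 3 times
            if len(fields) < 4 or not fields[0].isdigit():
                continue  # skip the title, the job name, the header row, and blank lines
            passed, failed, avg_s = (int(f) for f in fields[:3])
            yield passed, failed, avg_s, fields[3].strip()

    if __name__ == "__main__":
        with open("report.txt") as f:  # placeholder path for a saved copy of this report
            for passed, failed, avg_s, name in rows(f):
                if failed > 0:
                    print(f"FAILED (avg {avg_s}s): {name}")

Run against this report, the sketch prints the "Test" phase and the three Kubelet resource tracking cases; filtering on names containing "[k8s.io]" would restrict the output to e2e cases proper.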