Kubernetes 24-Hour Test Report

ci-kubernetes-e2e-gci-gce-serial-release-1-4

Passed  Failed  Avg Time (s)  Test
     0       1             0  DiffResources
     1       0           432  [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
     1       0           552  [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
     1       0           612  [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
     1       0           537  [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
     1       0           868  [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
     1       0          1388  [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
     1       0            27  [k8s.io] Daemon set [Serial] should run and stop complex daemon
     1       0            27  [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity
     1       0            25  [k8s.io] Daemon set [Serial] should run and stop simple daemon
     1       0            46  [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
     1       0           262  [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
     1       0            46  [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
     1       0            74  [k8s.io] Etcd failure [Disruptive] should recover from network partition with master
     1       0            74  [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL
     1       0          1207  [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node
     1       0          1476  [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node
     1       0           150  [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
     1       0            23  [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted.
     1       0             6  [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted.
     1       0           400  [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster
     1       0            68  [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout
     1       0           362  [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes
     1       0           615  [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes
     1       0            81  [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available
     1       0            46  [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
     1       0           209  [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow]
     1       0            35  [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
     1       0            15  [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected
     1       0            15  [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid
     1       0            23  [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work
     1       0            23  [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work
     1       0            30  [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching
     1       0            22  [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching
     1       0            24  [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching
     1       0            23  [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities
     1       0            53  [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2
     1       0            30  [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
     1       0            22  [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
     1       0            30  [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
     1       0            23  [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
     1       0            38  [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
     1       0            42  [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
     1       0            50  [k8s.io] Services should work after restarting apiserver [Disruptive]
     1       0            57  [k8s.io] Services should work after restarting kube-proxy [Disruptive]
     1       0             0  Deferred TearDown
     1       0            36  Extract
     1       0             0  get kubeconfig
     1       0             0  IsUp
     1       0             0  kubectl version
     1       0             0  list nodes
     1       0             6  ListResources After
     1       0             7  ListResources Before
     1       0             6  ListResources Down
     1       0             6  ListResources Up
     1       0           303  TearDown
     1       0            25  TearDown Previous
     1       0         10390  Test
     1       0           277  Up