Kubernetes 24-Hour Test Report

ci-kubernetes-e2e-gke-serial

Passed Failed Avg Time (s) Test
4 7 1229 Up
0 4 70 [k8s.io] SchedulerPriorities [Serial] Pod should preferably be scheduled to a node that satisfies the NodeAffinity
0 4 17541 Test
0 3 961 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
0 3 67 [k8s.io] SchedulerPriorities [Serial] Pod should avoid being scheduled to a node that has the avoidPod annotation
0 3 68 [k8s.io] SchedulerPriorities [Serial] Pod should prefer to be scheduled to nodes the pod can tolerate
0 3 343 [k8s.io] Services should work after restarting apiserver [Disruptive]
2 2 241 [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] [Volume]
0 2 965 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
0 2 954 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
0 2 970 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
0 2 181 [k8s.io] DNS configMap federations should be able to change federation configuration [Slow][Serial]
0 2 173 [k8s.io] DNS configMap nameserver should be able to change stubDomain configuration [Slow][Serial]
0 2 86 [k8s.io] SchedulerPriorities [Serial] Pod should be scheduled to a node that doesn't match the PodAntiAffinity terms
0 2 87 [k8s.io] SchedulerPriorities [Serial] Pod should be scheduled to a node that satisfies the PodAffinity
0 2 351 [k8s.io] Services should work after restarting kube-proxy [Disruptive]
3 1 1506 [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes
1 1 1674 [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes
0 1 961 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
0 1 949 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
0 1 0 AfterSuite
11 0 102 Deferred TearDown
11 0 17 Extract
11 0 7 ListResources Before
11 0 7 TearDown Previous
7 0 11 DumpClusterLogs (--up failed)
4 0 29 [k8s.io] Daemon set [Serial] Should adopt existing pods when creating a RollingUpdate DaemonSet regardless of templateGeneration
4 0 36 [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate
4 0 309 [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by changing the default annotation [Slow] [Serial] [Disruptive] [Volume]
4 0 52 [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s, for a duration of 2m0s, scaling up to 1 pod per node
4 0 1403 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero
4 0 2267 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster
4 0 259 [k8s.io] SchedulerPredicates [Serial] validates that the MaxPods limit caps the number of pods that are allowed to run [Slow]
4 0 86 [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
4 0 0 DiffResources
4 0 8 DumpClusterLogs
4 0 0 get kubeconfig
4 0 0 IsUp
4 0 0 kubectl version
4 0 0 list nodes
4 0 6 ListResources After
4 0 10 ListResources Down
4 0 7 ListResources Up
4 0 184 TearDown
3 0 30 [k8s.io] Daemon set [Serial] should retry creating failed daemon pods
3 0 38 [k8s.io] Daemon set [Serial] Should rollback without unnecessary restarts
3 0 29 [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity
3 0 309 [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by removing the default annotation [Slow] [Serial] [Disruptive] [Volume]
3 0 492 [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Volume]
3 0 13 [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should remove all the taints with the same key off a node
3 0 12 [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should update the taint on a node
3 0 1208 [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node
3 0 1457 [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node
3 0 159 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive]
3 0 751 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive]
3 0 50 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after the network partition is healed: all pods on the unreachable node should be marked NotReady when the node turns NotReady, AND all pods should be marked Ready again when the node returns to Ready before the pod eviction timeout
3 0 158 [k8s.io] NoExecuteTaintManager [Serial] removing taint cancels eviction
3 0 111 [k8s.io] Rescheduler [Serial] should ensure that a critical pod is scheduled when no resources are available
3 0 67 [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected
3 0 87 [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity settings as a string in the annotation value works
3 0 83 [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching
3 0 87 [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity are respected if matching
3 0 88 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching
3 0 85 [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
3 0 89 [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
2 0 27 [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is OnDelete
2 0 24 [k8s.io] Daemon set [Serial] should run and stop complex daemon
2 0 70 [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
2 0 70 [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
2 0 41 [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted.
2 0 20 [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted.
2 0 786 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned
2 0 149 [k8s.io] NoExecuteTaintManager [Serial] doesn't evict pod with tolerations from tainted nodes
2 0 115 [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
2 0 88 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities
2 0 85 [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
2 0 88 [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
1 0 29 [k8s.io] Daemon set [Serial] should run and stop simple daemon
1 0 316 [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when the cluster size changes
1 0 149 [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes
1 0 70 [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes
1 0 90 [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
1 0 67 [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because the LabelSelectorRequirement is invalid
1 0 99 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2
1 0 88 [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
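A quick way to rank the rows above is by failure rate, computed from the Passed and Failed columns. A minimal sketch (the counts are copied verbatim from the table; test names are abbreviated for readability):

```python
# Rank selected tests from the 24-hour report by failure rate.
# Tuples are (abbreviated test name, passed, failed), taken from the table above.
rows = [
    ("Up", 4, 7),
    ("SchedulerPriorities NodeAffinity", 0, 4),
    ("EmptyDir wrapper volumes git_repo", 2, 2),
    ("Nodes Resize should be able to add nodes", 3, 1),
]

# Sort by failure rate, worst first, and print a compact summary line per test.
for name, passed, failed in sorted(
    rows, key=lambda r: r[2] / (r[1] + r[2]), reverse=True
):
    rate = failed / (passed + failed)
    print(f"{rate:6.0%}  {failed}/{passed + failed}  {name}")
```

Consistently 100%-failing rows (like the SchedulerPriorities NodeAffinity test) point at a real regression, while mid-range rates (like the 50% git_repo row) are more likely flakes.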