Kubernetes 24-Hour Test Report

Job: ci-kubernetes-e2e-gce-1-5-1-6-upgrade-cluster (e2e suite on GCE, cluster upgrade from 1.5 to 1.6)

Each row shows how many runs of a test passed and failed over the last 24 hours, and the average run time in seconds.

Passed  Failed  Avg Time (s)  Test
0 4 20392 Test
1 3 1226 [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
1 3 1391 UpgradeTest
0 3 0 DiffResources
0 1 8 [k8s.io] CronJob should not emit unexpected warnings
0 1 86 [k8s.io] CronJob should not schedule new jobs when ForbidConcurrent [Slow]
0 1 62 [k8s.io] CronJob should replace jobs when ReplaceConcurrent
0 1 338 [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity
0 1 6 [k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob
0 1 6 [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
0 1 6 [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC
0 1 6 [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
0 1 6 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes
0 1 6 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes
0 1 11 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes
0 1 6 [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
0 1 6 [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
0 1 6 [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
0 1 6 [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
0 1 6 [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
0 1 8 [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
0 1 6 [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
0 1 6 [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
0 1 6 [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
0 1 6 [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node
0 1 6 [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node
0 1 6 [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
0 1 6 [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes
0 1 6 [k8s.io] Kubectl client [k8s.io] Simple pod should support exec
0 1 6 [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy
0 1 6 [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach
0 1 6 [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward
0 1 6 [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
0 1 166 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned
0 1 152 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero
0 1 136 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster
0 1 182 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive]
0 1 197 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout
0 1 40 [k8s.io] Network should set TCP CLOSE_WAIT timeout
0 1 37 [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow]
0 1 54 [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow]
0 1 23 [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance]
0 1 23 [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance]
0 1 24 [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance]
0 1 129 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2
0 1 96 [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
0 1 98 [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
0 1 51 [k8s.io] Services should be able to create a functioning NodePort service
0 1 400 [k8s.io] Services should create endpoints for unready pods
0 1 175 [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow]
0 1 342 [k8s.io] Services should release NodePorts on delete
0 1 1846 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed
0 1 631 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy
0 1 312 [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance]
4 0 0 Deferred TearDown
4 0 34 DumpClusterLogs
4 0 79 Extract
4 0 0 get kubeconfig
4 0 0 IsUp
4 0 1 kubectl version
4 0 0 list nodes
4 0 7 ListResources After
4 0 8 ListResources Before
4 0 7 ListResources Down
4 0 6 ListResources Up
4 0 269 TearDown
4 0 29 TearDown Previous
4 0 277 Up
1 0 392 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
1 0 532 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
1 0 422 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
1 0 722 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
1 0 889 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
1 0 168 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods
1 0 158 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod
1 0 140 [k8s.io] Addon update should propagate add-on file changes [Slow]
1 0 10 [k8s.io] Cadvisor should be healthy on every node.
1 0 77 [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL
1 0 7 [k8s.io] ConfigMap should be consumable from pods in volume [Conformance]
1 0 7 [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance]
1 0 7 [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance]
1 0 7 [k8s.io] ConfigMap should be consumable via environment variable [Conformance]
1 0 24 [k8s.io] ConfigMap updates should be reflected in volume [Conformance]
1 0 44 [k8s.io] Daemon set [Serial] should run and stop complex daemon
1 0 74 [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
1 0 51 [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
1 0 17 [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods
1 0 9 [k8s.io] Deployment deployment should create new pods
1 0 19 [k8s.io] Deployment deployment should delete old replica sets
1 0 21 [k8s.io] Deployment deployment should label adopted RSs and pods
1 0 32 [k8s.io] Deployment deployment should support rollback
1 0 26 [k8s.io] Deployment paused deployment should be ignored by the controller
1 0 23 [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones
1 0 23 [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order
1 0 40 [k8s.io] Deployment scaled rollout deployment should not block on annotation check
1 0 35 [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction
1 0 36 [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
1 0 18 [k8s.io] DisruptionController evictions: no PDB => should allow an eviction
1 0 89 [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction
1 0 342 [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
1 0 23 [k8s.io] DNS should provide DNS for ExternalName services
1 0 9 [k8s.io] DNS should provide DNS for the cluster [Conformance]
1 0 7 [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance]
1 0 7 [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance]
1 0 7 [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance]
1 0 7 [k8s.io] Downward API should provide pod IP as an env var [Conformance]
1 0 7 [k8s.io] Downward API volume should provide container's cpu limit [Conformance]
1 0 7 [k8s.io] Downward API volume should provide container's memory limit [Conformance]
1 0 7 [k8s.io] Downward API volume should provide container's memory request [Conformance]
1 0 7 [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance]
1 0 7 [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance]
1 0 7 [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance]
1 0 9 [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance]
1 0 7 [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance]
1 0 7 [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance]
1 0 7 [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance]
1 0 7 [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance]
1 0 7 [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance]
1 0 9 [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance]
1 0 169 [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow]
1 0 16 [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
1 0 11 [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs
1 0 7 [k8s.io] HostPath should give a volume the correct mode [Conformance]
1 0 7 [k8s.io] HostPath should support r/w
1 0 7 [k8s.io] HostPath should support subPath
1 0 9 [k8s.io] InitContainer should invoke init containers on a RestartNever pod
1 0 8 [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod
1 0 72 [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod
1 0 9 [k8s.io] Job should delete a job
1 0 18 [k8s.io] Job should keep restarting failed pods
1 0 18 [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted
1 0 22 [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted
1 0 9 [k8s.io] Job should run a job to completion when tasks succeed
1 0 1212 [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node
1 0 1486 [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node
1 0 17 [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive
1 0 25 [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
1 0 10 [k8s.io] MetricsGrabber should grab all metrics from API server.
1 0 10 [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster.
1 0 155 [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
1 0 33 [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted.
1 0 63 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance]
1 0 67 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance]
1 0 65 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance]
1 0 87 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp
1 0 94 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http
1 0 91 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp
1 0 82 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp
1 0 9 [k8s.io] Networking should provide Internet connection for containers [Conformance]
1 0 95 [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors
1 0 329 [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes
1 0 511 [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes
1 0 155 [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow]
1 0 123 [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow]
1 0 17 [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance]
1 0 17 [k8s.io] Pods should be submitted and removed [Conformance]
1 0 23 [k8s.io] Pods should be updated [Conformance]
1 0 1644 [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow]
1 0 22 [k8s.io] Pods should get a host IP [Conformance]
1 0 232 [k8s.io] Pods should support retrieving logs from the container over websockets
1 0 47 [k8s.io] PreStop should call prestop when killing a pod [Conformance]
1 0 39 [k8s.io] PrivilegedPod should test privileged pod
1 0 132 [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [Conformance]
1 0 132 [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
1 0 32 [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance]
1 0 164 [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow]
1 0 85 [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance]
1 0 10 [k8s.io] Proxy version v1 should proxy logs on node [Conformance]
1 0 10 [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
1 0 10 [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance]
1 0 38 [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance]
1 0 10 [k8s.io] Proxy version v1 should proxy to cadvisor
1 0 10 [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource
1 0 34 [k8s.io] ReplicaSet should serve a basic image on each replica with a private image
1 0 24 [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
1 0 42 [k8s.io] ReplicationController should serve a basic image on each replica with a private image
1 0 31 [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance]
1 0 145 [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available
1 0 16 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap.
1 0 16 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim.
1 0 18 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod.
1 0 16 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller.
1 0 16 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret.
1 0 16 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service.
1 0 7 [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated.
1 0 26 [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope.
1 0 27 [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes.
1 0 265 [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow]
1 0 80 [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected
1 0 88 [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work
1 0 88 [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work
1 0 88 [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching
1 0 88 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching
1 0 88 [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
1 0 89 [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
1 0 17 [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace
1 0 7 [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance]
1 0 7 [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance]
1 0 7 [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance]
1 0 7 [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance]
1 0 62 [k8s.io] Service endpoints latency should not be very high [Conformance]
1 0 17 [k8s.io] ServiceAccounts should mount an API token into pods [Conformance]
1 0 408 [k8s.io] Services should be able to change the type and ports of a service [Slow]
1 0 31 [k8s.io] Services should serve a basic endpoint from pods [Conformance]
1 0 33 [k8s.io] Services should serve multiport endpoints from pods [Conformance]
1 0 10 [k8s.io] Services should use same NodePort with same port but different protocols
1 0 25 [k8s.io] Staging client repo client should create pods, delete pods, watch pods
1 0 16 [k8s.io] V1Job should delete a job
1 0 63 [k8s.io] V1Job should fail a job
1 0 18 [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted
1 0 16 [k8s.io] V1Job should run a job to completion when tasks succeed
1 0 59 [k8s.io] V1Job should scale a job up
1 0 7 [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance]