Kubernetes 24-Hour Test Report

Job: ci-kubernetes-e2e-gce-1-5-1-6-cvm-kubectl-skew

Passed Failed Avg Time (s) Test
0 12 859 [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow]
0 12 326 [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
0 12 145 [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow]
0 12 157 [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
0 12 140 [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected
0 12 113 [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work
0 12 135 [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work
0 12 108 [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching
0 12 144 [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching
0 12 87 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching
0 12 201 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2
0 12 115 [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
0 12 174 [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
0 12 179 [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
0 12 147 [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
0 12 143 [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
0 12 4878 Test
1 11 325 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp
1 11 384 [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow]
0 11 692 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
0 11 126 [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid
0 11 122 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities
2 10 407 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
2 10 561 [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow]
2 10 1276 [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node
2 10 361 [k8s.io] Services should be able to change the type and ports of a service [Slow]
0 10 88 [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
3 9 239 [k8s.io] ConfigMap should be consumable from pods in volume [Conformance]
3 9 377 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp
3 9 531 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http
3 9 253 [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors
3 9 288 [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow]
3 9 467 [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow]
3 9 768 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity
2 9 849 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
4 8 457 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster
4 8 361 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp
4 8 425 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http
4 8 317 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow]
4 8 470 [k8s.io] Services should be able to up and down services
3 8 926 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
3 8 343 [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow]
3 8 1086 [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node
1 8 448 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
5 7 383 [k8s.io] CronJob should not schedule jobs when suspended [Slow]
5 7 210 [k8s.io] CronJob should schedule multiple jobs concurrently
5 7 108 [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
5 7 574 [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow]
5 7 756 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned
5 7 460 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http
5 7 383 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow]
5 7 571 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale
4 7 405 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance]
4 7 387 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance]
4 7 302 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http
4 7 400 [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow]
4 7 204 [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP
6 6 178 [k8s.io] CronJob should replace jobs when ReplaceConcurrent
6 6 272 [k8s.io] Deployment scaled rollout deployment should not block on annotation check
6 6 234 [k8s.io] Etcd failure [Disruptive] should recover from network partition with master
6 6 406 [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
6 6 929 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive]
6 6 401 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp
6 6 309 [k8s.io] Networking should check kube-proxy urls
6 6 593 [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes
6 6 209 [k8s.io] Services should be able to create a functioning NodePort service
6 6 155 [k8s.io] Services should serve a basic endpoint from pods [Conformance]
6 6 587 [k8s.io] Services should work after restarting kube-proxy [Disruptive]
6 6 646 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates
6 6 244 [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted
5 6 811 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
5 6 379 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance]
5 6 740 [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes
5 6 220 [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow]
4 6 317 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods
4 6 206 [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow]
7 5 191 [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity
7 5 295 [k8s.io] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
7 5 116 [k8s.io] Deployment deployment should support rollback when there's replica set with no revision
7 5 244 [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction
7 5 168 [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance]
7 5 423 [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Alpha should create and delete alpha persistent volumes [Slow]
7 5 90 [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance]
7 5 131 [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance]
7 5 205 [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance]
7 5 246 [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
7 5 364 [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
7 5 208 [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance]
7 5 354 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive]
7 5 262 [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow]
7 5 196 [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance]
7 5 234 [k8s.io] ReplicationController should serve a basic image on each replica with a private image
7 5 580 [k8s.io] Services should work after restarting apiserver [Disruptive]
7 5 316 [k8s.io] V1Job should scale a job down
6 5 284 [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted
6 5 104 [k8s.io] Secrets should be consumable from pods in env vars [Conformance]
6 5 102 [k8s.io] Services should serve multiport endpoints from pods [Conformance]
6 5 195 [k8s.io] V1Job should delete a job
5 5 499 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy
8 4 347 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod
8 4 221 [k8s.io] ConfigMap updates should be reflected in volume [Conformance]
8 4 206 [k8s.io] Daemon set [Serial] should run and stop simple daemon
8 4 151 [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones
8 4 257 [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction
8 4 142 [k8s.io] DNS should provide DNS for ExternalName services
8 4 127 [k8s.io] DNS should provide DNS for services [Conformance]
8 4 98 [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance]
8 4 199 [k8s.io] Downward API volume should provide container's cpu request [Conformance]
8 4 136 [k8s.io] Downward API volume should provide container's memory limit [Conformance]
8 4 162 [k8s.io] Downward API volume should update annotations on modification [Conformance]
8 4 166 [k8s.io] HostPath should give a volume the correct mode [Conformance]
8 4 104 [k8s.io] HostPath should support r/w
8 4 138 [k8s.io] Job should fail a job
8 4 125 [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
8 4 76 [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
8 4 103 [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node
8 4 181 [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster.
8 4 305 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance]
8 4 69 [k8s.io] Networking should provide Internet connection for containers [Conformance]
8 4 126 [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance]
8 4 199 [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [Conformance]
8 4 145 [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance]
8 4 74 [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated.
8 4 169 [k8s.io] ServiceAccounts should mount an API token into pods [Conformance]
8 4 183 [k8s.io] Stateful Set recreate should recreate evicted statefulset
8 4 128 [k8s.io] V1Job should fail a job
7 4 453 [k8s.io] CronJob should not schedule new jobs when ForbidConcurrent [Slow]
7 4 336 [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
7 4 118 [k8s.io] Deployment paused deployment should be ignored by the controller
7 4 439 [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
7 4 161 [k8s.io] Downward API volume should provide container's cpu limit [Conformance]
7 4 216 [k8s.io] EmptyDir wrapper volumes should not conflict
7 4 208 [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod
7 4 193 [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
7 4 126 [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes
7 4 234 [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
7 4 153 [k8s.io] Pods should be updated [Conformance]
7 4 150 [k8s.io] Services should release NodePorts on delete
7 4 114 [k8s.io] Staging client repo client should create pods, delete pods, watch pods
6 4 144 [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance]
0 4 0 AfterSuite
9 3 137 [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance]
9 3 140 [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance]
9 3 198 [k8s.io] Deployment iterative rollouts should eventually progress
9 3 84 [k8s.io] DNS should provide DNS for the cluster [Conformance]
9 3 134 [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance]
9 3 159 [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance]
9 3 168 [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance]
9 3 85 [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance]
9 3 106 [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
9 3 150 [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted
9 3 162 [k8s.io] Job should scale a job down
9 3 77 [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
9 3 150 [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach
9 3 150 [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
9 3 57 [k8s.io] MetricsGrabber should grab all metrics from a Scheduler.
9 3 884 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero
9 3 81 [k8s.io] Network should set TCP CLOSE_WAIT timeout
9 3 133 [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance]
9 3 116 [k8s.io] Pods should be submitted and removed [Conformance]
9 3 154 [k8s.io] Pods should contain environment variables for services [Conformance]
9 3 120 [k8s.io] Pods should support remote command execution over websockets
9 3 151 [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance]
9 3 178 [k8s.io] PreStop should call prestop when killing a pod [Conformance]
9 3 126 [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance]
9 3 142 [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
9 3 122 [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
9 3 90 [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance]
9 3 98 [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes.
9 3 108 [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance]
9 3 113 [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance]
9 3 154 [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance]
9 3 78 [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance]
9 3 157 [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance]
8 3 159 [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance]
8 3 102 [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance]
8 3 149 [k8s.io] Daemon set [Serial] should run and stop complex daemon
8 3 76 [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order
8 3 259 [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction
8 3 117 [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance]
8 3 90 [k8s.io] HostPath should support subPath
8 3 232 [k8s.io] Job should scale a job up
8 3 61 [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
8 3 63 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes
8 3 129 [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
8 3 115 [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node
8 3 173 [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance]
8 3 0 DiffResources
7 3 124 [k8s.io] ReplicaSet should serve a basic image on each replica with a private image
10 2 54 [k8s.io] Cadvisor should be healthy on every node.
10 2 148 [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL
10 2 126 [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance]
10 2 231 [k8s.io] CronJob should not emit unexpected warnings
10 2 94 [k8s.io] Deployment deployment should label adopted RSs and pods
10 2 144 [k8s.io] Deployment deployment should support rollover
10 2 143 [k8s.io] Deployment lack of progress should be reported in the deployment status
10 2 74 [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones
10 2 315 [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
10 2 116 [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation
10 2 58 [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance]
10 2 68 [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance]
10 2 101 [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance]
10 2 64 [k8s.io] Downward API volume should provide podname only [Conformance]
10 2 58 [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance]
10 2 108 [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance]
10 2 90 [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance]
10 2 58 [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance]
10 2 77 [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance]
10 2 85 [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance]
10 2 46 [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs
10 2 98 [k8s.io] InitContainer should invoke init containers on a RestartAlways pod
10 2 63 [k8s.io] InitContainer should invoke init containers on a RestartNever pod
10 2 75 [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod
10 2 85 [k8s.io] Job should delete a job
10 2 84 [k8s.io] Job should run a job to completion when tasks succeed
10 2 131 [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
10 2 78 [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
10 2 81 [k8s.io] Kubectl client [k8s.io] Simple pod should support exec
10 2 134 [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward
10 2 85 [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
10 2 63 [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive
10 2 103 [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
10 2 50 [k8s.io] MetricsGrabber should grab all metrics from API server.
10 2 89 [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted.
10 2 181 [k8s.io] Pods should support retrieving logs from the container over websockets
10 2 166 [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
10 2 193 [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow]
10 2 37 [k8s.io] Proxy version v1 should proxy logs on node [Conformance]
10 2 50 [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
10 2 266 [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available
10 2 53 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap.
10 2 62 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim.
10 2 64 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod.
10 2 80 [k8s.io] Secrets should be consumable from pods in volume [Conformance]
10 2 111 [k8s.io] Services should create endpoints for unready pods
10 2 42 [k8s.io] Sysctls should reject invalid sysctls
10 2 298 [k8s.io] V1Job should run a job to completion when tasks succeed
10 2 71 [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance]
9 2 108 [k8s.io] Downward API volume should set mode on item file [Conformance]
9 2 52 [k8s.io] Downward API volume should update labels on modification [Conformance]
9 2 64 [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC
9 2 199 [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
9 2 100 [k8s.io] Pods should get a host IP [Conformance]
9 2 176 [k8s.io] PrivilegedPod should test privileged pod
9 2 47 [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource
9 2 197 [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow]
9 2 61 [k8s.io] Services should prevent NodePort collisions
9 2 70 [k8s.io] SSH should SSH to all nodes and run commands
9 2 368 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed
9 2 83 [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance]
8 2 236 [k8s.io] DisruptionController should update PodDisruptionBudget status
8 2 89 [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance]
11 1 42 [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance]
11 1 55 [k8s.io] ConfigMap should be consumable via environment variable [Conformance]
11 1 38 [k8s.io] Deployment deployment should create new pods
11 1 54 [k8s.io] Deployment deployment should delete old replica sets
11 1 63 [k8s.io] Deployment overlapping deployment should not fight with each other
11 1 31 [k8s.io] DisruptionController should create a PodDisruptionBudget
11 1 42 [k8s.io] DNS config map should be able to change configuration
11 1 105 [k8s.io] Downward API should provide pod IP as an env var [Conformance]
11 1 76 [k8s.io] Downward API volume should set DefaultMode on files [Conformance]
11 1 78 [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance]
11 1 84 [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance]
11 1 117 [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods
11 1 173 [k8s.io] Job should keep restarting failed pods
11 1 25 [k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob should create a ScheduledJob
11 1 104 [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC
11 1 40 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes
11 1 118 [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
11 1 31 [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy
11 1 36 [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager.
11 1 75 [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted.
11 1 95 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout
11 1 32 [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services
11 1 26 [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
11 1 13 [k8s.io] Proxy version v1 should proxy to cadvisor
11 1 24 [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota
11 1 47 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller.
11 1 48 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret.
11 1 68 [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace
11 1 126 [k8s.io] Service endpoints latency should not be very high [Conformance]
11 1 45 [k8s.io] ServiceAccounts should ensure a single API token exists
11 1 29 [k8s.io] Services should check NodePort out-of-range
11 1 160 [k8s.io] V1Job should keep restarting failed pods
11 1 155 [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted
10 1 214 [k8s.io] Addon update should propagate add-on file changes [Slow]
10 1 116 [k8s.io] DisruptionController evictions: no PDB => should allow an eviction
10 1 61 [k8s.io] Downward API volume should provide container's memory request [Conformance]
10 1 25 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes
10 1 142 [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
10 1 96 [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
10 1 61 [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
10 1 26 [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
10 1 26 [k8s.io] MetricsGrabber should grab all metrics from a Kubelet.
10 1 112 [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance]
6 1 164 [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node
0 1 459 [k8s.io] Sysctls should support sysctls
12 0 68 [k8s.io] CronJob should remove from active list jobs that have been deleted
12 0 185 [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
12 0 22 [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods
12 0 49 [k8s.io] Deployment deployment should support rollback
12 0 31 [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance]
12 0 14 [k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob
12 0 24 [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
12 0 40 [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
12 0 22 [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
12 0 14 [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
12 0 22 [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance]
12 0 26 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service.
12 0 33 [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope.
12 0 112 [k8s.io] V1Job should scale a job up
12 0 0 Deferred TearDown
12 0 24 DumpClusterLogs
12 0 74 Extract
12 0 0 get kubeconfig
12 0 0 IsUp
12 0 0 kubectl version
12 0 0 list nodes
12 0 7 ListResources After
12 0 8 ListResources Before
12 0 7 ListResources Down
12 0 8 ListResources Up
12 0 319 TearDown
12 0 46 TearDown Previous
12 0 287 Up
11 0 19 [k8s.io] Deployment paused deployment should be able to scale
11 0 32 [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance]
11 0 130 [k8s.io] Etcd failure [Disruptive] should recover from SIGKILL
11 0 12 [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota
11 0 13 [k8s.io] Services should provide secure master service [Conformance]
11 0 13 [k8s.io] Services should use same NodePort with same port but different protocols