| Passed | Failed | Avg Time (s) | Test |
|---|---|---|---|
| 0 | 1 | 1212 | [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node |
| 0 | 1 | 1496 | [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node |
| 0 | 1 | 312 | [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster. |
| 0 | 1 | 371 | [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching |
| 0 | 1 | 371 | [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching |
| 0 | 1 | 32364 | Test |
| 2 | 0 | 28 | [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume] |
| 2 | 0 | 7 | [k8s.io] Projected should be consumable from pods in volume [Conformance] [Volume] |
| 2 | 0 | 7 | [k8s.io] Projected should be consumable from pods in volume with defaultMode set [Conformance] [Volume] |
| 2 | 0 | 7 | [k8s.io] Projected should be consumable from pods in volume with mappings [Conformance] [Volume] |
| 1 | 0 | 623 | [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 |
| 1 | 0 | 747 | [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 |
| 1 | 0 | 402 | [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 |
| 1 | 0 | 748 | [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 |
| 1 | 0 | 902 | [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability |
| 1 | 0 | 1092 | [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability |
| 1 | 0 | 130 | [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods |
| 1 | 0 | 130 | [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod |
| 1 | 0 | 167 | [k8s.io] Addon update should propagate add-on file changes [Slow] |
| 1 | 0 | 43 | [k8s.io] AppArmor should enforce an AppArmor profile |
| 1 | 0 | 10 | [k8s.io] Cadvisor should be healthy on every node. |
| 1 | 0 | 26 | [k8s.io] Certificates API should support building a client with a CSR |
| 1 | 0 | 243 | [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL |
| 1 | 0 | 24 | [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume] |
| 1 | 0 | 8 | [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] [Volume] |
| 1 | 0 | 9 | [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] [Volume] |
| 1 | 0 | 8 | [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] ConfigMap should be consumable via environment variable [Conformance] |
| 1 | 0 | 7 | [k8s.io] ConfigMap should be consumable via the environment [Conformance] |
| 1 | 0 | 31 | [k8s.io] ConfigMap updates should be reflected in volume [Conformance] [Volume] |
| 1 | 0 | 113 | [k8s.io] CronJob should adopt Jobs it owns that don't have ControllerRef yet |
| 1 | 0 | 120 | [k8s.io] CronJob should delete successful finished jobs with limit of one successful job |
| 1 | 0 | 120 | [k8s.io] CronJob should not emit unexpected warnings |
| 1 | 0 | 310 | [k8s.io] CronJob should not schedule jobs when suspended [Slow] |
| 1 | 0 | 354 | [k8s.io] CronJob should not schedule new jobs when ForbidConcurrent [Slow] |
| 1 | 0 | 54 | [k8s.io] CronJob should remove from active list jobs that have been deleted |
| 1 | 0 | 116 | [k8s.io] CronJob should replace jobs when ReplaceConcurrent |
| 1 | 0 | 131 | [k8s.io] CronJob should schedule multiple jobs concurrently |
| 1 | 0 | 57 | [k8s.io] Daemon set [Serial] Should adopt or recreate existing pods when creating a RollingUpdate DaemonSet with matching or mismatching templateGeneration |
| 1 | 0 | 24 | [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is OnDelete |
| 1 | 0 | 23 | [k8s.io] Daemon set [Serial] should retry creating failed daemon pods |
| 1 | 0 | 20 | [k8s.io] Daemon set [Serial] should run and stop complex daemon |
| 1 | 0 | 25 | [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity |
| 1 | 0 | 32 | [k8s.io] Daemon set [Serial] should run and stop simple daemon |
| 1 | 0 | 61 | [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate |
| 1 | 0 | 17 | [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods |
| 1 | 0 | 9 | [k8s.io] Deployment deployment should create new pods |
| 1 | 0 | 15 | [k8s.io] Deployment deployment should delete old replica sets |
| 1 | 0 | 21 | [k8s.io] Deployment deployment should label adopted RSs and pods |
| 1 | 0 | 32 | [k8s.io] Deployment deployment should support rollback |
| 1 | 0 | 34 | [k8s.io] Deployment deployment should support rollback when there's replica set with no revision |
| 1 | 0 | 31 | [k8s.io] Deployment deployment should support rollover |
| 1 | 0 | 52 | [k8s.io] Deployment iterative rollouts should eventually progress |
| 1 | 0 | 26 | [k8s.io] Deployment lack of progress should be reported in the deployment status |
| 1 | 0 | 9 | [k8s.io] Deployment overlapping deployment should not fight with each other |
| 1 | 0 | 18 | [k8s.io] Deployment paused deployment should be able to scale |
| 1 | 0 | 21 | [k8s.io] Deployment paused deployment should be ignored by the controller |
| 1 | 0 | 10 | [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones |
| 1 | 0 | 21 | [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones |
| 1 | 0 | 49 | [k8s.io] Deployment scaled rollout deployment should not block on annotation check |
| 1 | 0 | 16 | [k8s.io] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef |
| 1 | 0 | 24 | [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction |
| 1 | 0 | 31 | [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction |
| 1 | 0 | 7 | [k8s.io] DisruptionController evictions: no PDB => should allow an eviction |
| 1 | 0 | 87 | [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction |
| 1 | 0 | 87 | [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction |
| 1 | 0 | 10 | [k8s.io] DisruptionController should create a PodDisruptionBudget |
| 1 | 0 | 24 | [k8s.io] DisruptionController should update PodDisruptionBudget status |
| 1 | 0 | 40 | [k8s.io] DNS configMap federations should be able to change federation configuration [Slow][Serial] |
| 1 | 0 | 34 | [k8s.io] DNS configMap nameserver should be able to change stubDomain configuration [Slow][Serial] |
| 1 | 0 | 469 | [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed |
| 1 | 0 | 38 | [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios |
| 1 | 0 | 46 | [k8s.io] DNS should provide DNS for ExternalName services |
| 1 | 0 | 22 | [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation |
| 1 | 0 | 83 | [k8s.io] DNS should provide DNS for services [Conformance] |
| 1 | 0 | 24 | [k8s.io] DNS should provide DNS for the cluster [Conformance] |
| 1 | 0 | 16 | [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] |
| 1 | 0 | 7 | [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] |
| 1 | 0 | 7 | [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] |
| 1 | 0 | 7 | [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] |
| 1 | 0 | 7 | [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] |
| 1 | 0 | 7 | [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance] |
| 1 | 0 | 7 | [k8s.io] Downward API should provide pod and host IP as an env var [Conformance] |
| 1 | 0 | 7 | [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] |
| 1 | 0 | 7 | [k8s.io] Downward API volume should provide container's cpu limit [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Downward API volume should provide container's cpu request [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Downward API volume should provide container's memory limit [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Downward API volume should provide container's memory request [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Downward API volume should provide podname only [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Downward API volume should set DefaultMode on files [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Downward API volume should set mode on item file [Conformance] [Volume] |
| 1 | 0 | 120 | [k8s.io] Downward API volume should update annotations on modification [Conformance] [Volume] |
| 1 | 0 | 32 | [k8s.io] Downward API volume should update labels on modification [Conformance] [Volume] |
| 1 | 0 | 58 | [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should create and delete default persistent volumes [Slow] [Volume] |
| 1 | 0 | 23 | [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow] [Volume] |
| 1 | 0 | 311 | [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner should not provision a volume in an unmanaged GCE zone. [Slow] [Volume] |
| 1 | 0 | 190 | [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner should provision storage with different parameters [Slow] [Volume] |
| 1 | 0 | 38 | [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner should test that deleting a claim before the volume is provisioned deletes the volume. [Volume] |
| 1 | 0 | 7 | [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] [Volume] |
| 1 | 0 | 510 | [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Volume] |
| 1 | 0 | 160 | [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] [Volume] |
| 1 | 0 | 16 | [k8s.io] EmptyDir wrapper volumes should not conflict [Volume] |
| 1 | 0 | 201 | [k8s.io] ESIPP [Slow] should handle updates to source ip annotation |
| 1 | 0 | 226 | [k8s.io] ESIPP [Slow] should only target nodes with endpoints |
| 1 | 0 | 161 | [k8s.io] ESIPP [Slow] should work for type=LoadBalancer |
| 1 | 0 | 7 | [k8s.io] ESIPP [Slow] should work for type=NodePort |
| 1 | 0 | 131 | [k8s.io] ESIPP [Slow] should work from pods |
| 1 | 0 | 31 | [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] |
| 1 | 0 | 231 | [k8s.io] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service |
| 1 | 0 | 16 | [k8s.io] Firewall rule should have correct firewall rules for e2e cluster |
| 1 | 0 | 22 | [k8s.io] Garbage collector should delete pods created by rc when not orphaning |
| 1 | 0 | 8 | [k8s.io] Garbage collector should delete RS created by deployment when not orphaning |
| 1 | 0 | 56 | [k8s.io] Garbage collector should orphan pods created by rc if delete options say so |
| 1 | 0 | 46 | [k8s.io] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil |
| 1 | 0 | 17 | [k8s.io] Garbage collector should orphan RS created by deployment when deleteOptions.OrphanDependents is true |
| 1 | 0 | 69 | [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable [Volume] |
| 1 | 0 | 90 | [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume] |
| 1 | 0 | 80 | [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 [Volume] |
| 1 | 0 | 22 | [k8s.io] Generated release_1_5 clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod |
| 1 | 0 | 10 | [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs |
| 1 | 0 | 7 | [k8s.io] HostPath should give a volume the correct mode [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] HostPath should support r/w [Volume] |
| 1 | 0 | 7 | [k8s.io] HostPath should support subPath [Volume] |
| 1 | 0 | 32 | [k8s.io] InitContainer should invoke init containers on a RestartAlways pod |
| 1 | 0 | 17 | [k8s.io] InitContainer should invoke init containers on a RestartNever pod |
| 1 | 0 | 10 | [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod |
| 1 | 0 | 116 | [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod |
| 1 | 0 | 49 | [k8s.io] Job should adopt matching orphans and release non-matching pods |
| 1 | 0 | 10 | [k8s.io] Job should delete a job |
| 1 | 0 | 18 | [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted |
| 1 | 0 | 18 | [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted |
| 1 | 0 | 16 | [k8s.io] Job should run a job to completion when tasks succeed |
| 1 | 0 | 11 | [k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob |
| 1 | 0 | 6 | [k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob should create a ScheduledJob |
| 1 | 0 | 62 | [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] |
| 1 | 0 | 10 | [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] |
| 1 | 0 | 23 | [k8s.io] Kubectl client [k8s.io] Kubectl apply apply set/view last-applied |
| 1 | 0 | 26 | [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC |
| 1 | 0 | 11 | [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC |
| 1 | 0 | 11 | [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] |
| 1 | 0 | 11 | [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes |
| 1 | 0 | 18 | [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes |
| 1 | 0 | 10 | [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes |
| 1 | 0 | 23 | [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] |
| 1 | 0 | 32 | [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] |
| 1 | 0 | 9 | [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] |
| 1 | 0 | 17 | [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] |
| 1 | 0 | 23 | [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] |
| 1 | 0 | 31 | [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] |
| 1 | 0 | 28 | [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] |
| 1 | 0 | 9 | [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] |
| 1 | 0 | 16 | [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] |
| 1 | 0 | 33 | [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] |
| 1 | 0 | 23 | [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] |
| 1 | 0 | 24 | [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] |
| 1 | 0 | 23 | [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] |
| 1 | 0 | 9 | [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should remove all the taints with the same key off a node |
| 1 | 0 | 8 | [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should update the taint on a node |
| 1 | 0 | 10 | [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] |
| 1 | 0 | 10 | [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] |
| 1 | 0 | 10 | [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] |
| 1 | 0 | 32 | [k8s.io] Kubectl client [k8s.io] Simple pod should handle in-cluster config |
| 1 | 0 | 50 | [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes |
| 1 | 0 | 25 | [k8s.io] Kubectl client [k8s.io] Simple pod should support exec |
| 1 | 0 | 22 | [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy |
| 1 | 0 | 42 | [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach |
| 1 | 0 | 17 | [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward |
| 1 | 0 | 18 | [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] |
| 1 | 0 | 41 | [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] |
| 1 | 0 | 56 | [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] |
| 1 | 0 | 45 | [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. |
| 1 | 0 | 55 | [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] |
| 1 | 0 | 17 | [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive |
| 1 | 0 | 26 | [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. |
| 1 | 0 | 166 | [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec |
| 1 | 0 | 7 | [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. |
| 1 | 0 | 10 | [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. |
| 1 | 0 | 10 | [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. |
| 1 | 0 | 6 | [k8s.io] MetricsGrabber should grab all metrics from API server. |
| 1 | 0 | 155 | [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) |
| 1 | 0 | 32 | [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. |
| 1 | 0 | 17 | [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted. |
| 1 | 0 | 16 | [k8s.io] Network should set TCP CLOSE_WAIT timeout |
| 1 | 0 | 50 | [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] |
| 1 | 0 | 50 | [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] |
| 1 | 0 | 40 | [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] |
| 1 | 0 | 51 | [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] |
| 1 | 0 | 65 | [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http |
| 1 | 0 | 73 | [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp |
| 1 | 0 | 72 | [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http |
| 1 | 0 | 89 | [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp |
| 1 | 0 | 55 | [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http |
| 1 | 0 | 67 | [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp |
| 1 | 0 | 139 | [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http |
| 1 | 0 | 166 | [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp |
| 1 | 0 | 154 | [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] |
| 1 | 0 | 254 | [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] |
| 1 | 0 | 48 | [k8s.io] Networking should check kube-proxy urls |
| 1 | 0 | 9 | [k8s.io] Networking should provide Internet connection for containers [Conformance] |
| 1 | 0 | 11 | [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services |
| 1 | 0 | 151 | [k8s.io] NoExecuteTaintManager [Serial] doesn't evict pod with tolerations from tainted nodes |
| 1 | 0 | 151 | [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes |
| 1 | 0 | 78 | [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes |
| 1 | 0 | 161 | [k8s.io] NoExecuteTaintManager [Serial] removing taint cancels eviction |
| 1 | 0 | 40 | [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted. [Volume] |
| 1 | 0 | 34 | [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access |
| 1 | 0 | 41 | [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access |
| 1 | 0 | 406 | [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access |
| 1 | 0 | 54 | [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access |
| 1 | 0 | 30 | [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access |
| 1 | 0 | 31 | [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access |
| 1 | 0 | 47 | [k8s.io] PersistentVolumes [Volume] [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access |
| 1 | 0 | 94 | [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach |
| 1 | 0 | 85 | [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk |
| 1 | 0 | 87 | [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach |
| 1 | 0 | 10 | [k8s.io] Pod Disks should be able to delete a non-existent PD without error |
| 1 | 0 | 183 | [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] [Volume] |
| 1 | 0 | 122 | [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] [Volume] |
| 1 | 0 | 63 | [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] [Volume] |
| 1 | 0 | 220 | [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] [Volume] |
| 1 | 0 | 102 | [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] [Volume] |
| 1 | 0 | 100 | [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] [Volume] |
| 1 | 0 | 22 | [k8s.io] PodPreset should create a pod preset |
| 1 | 0 | 22 | [k8s.io] PodPreset should not modify the pod on conflict |
| 1 | 0 | 25 | [k8s.io] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] |
| 1 | 0 | 17 | [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] |
| 1 | 0 | 17 | [k8s.io] Pods should be submitted and removed [Conformance] |
| 1 | 0 | 23 | [k8s.io] Pods should be updated [Conformance] |
| 1 | 0 | 1660 | [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow] |
| 1 | 0 | 31 | [k8s.io] Pods should contain environment variables for services [Conformance] |
| 1 | 0 | 22 | [k8s.io] Pods should get a host IP [Conformance] |
| 1 | 0 | 421 | [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow] |
| 1 | 0 | 43 | [k8s.io] Pods should support remote command execution over websockets |
| 1 | 0 | 42 | [k8s.io] Pods should support retrieving logs from the container over websockets |
| 1 | 0 | 34 | [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects |
| 1 | 0 | 35 | [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects |
| 1 | 0 | 37 | [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects |
| 1 | 0 | 33 | [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 should support forwarding over websockets |
| 1 | 0 | 36 | [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects |
| 1 | 0 | 34 | [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects |
| 1 | 0 | 37 | [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects |
| 1 | 0 | 31 | [k8s.io] Port forwarding [k8s.io] With a server listening on localhost should support forwarding over websockets |
| 1 | 0 | 49 | [k8s.io] PreStop should call prestop when killing a pod [Conformance] |
| 1 | 0 | 53 | [k8s.io] PrivilegedPod should enable privileged commands |
| 1 | 0 | 133 | [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [Conformance] |
| 1 | 0 | 133 | [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance] |
| 1 | 0 | 32 | [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] |
| 1 | 0 | 62 | [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] |
| 1 | 0 | 153 | [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] |
| 1 | 0 | 47 | [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] |
| 1 | 0 | 85 | [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] |
| 1 | 0 | 18 | [k8s.io] Projected should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume] |
| 1 | 0 | 7 | [k8s.io] Projected should be consumable from pods in volume as non-root [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Projected should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Projected should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Projected should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Projected should be consumable from pods in volume with mappings as non-root [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Projected should be consumable in multiple volumes in a pod [Conformance] [Volume] |
| 1 | 0 | 8 | [k8s.io] Projected should be consumable in multiple volumes in the same pod [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Projected should project all components that make up the projection API [Conformance] [Volume] [Projection] |
| 1 | 0 | 7 | [k8s.io] Projected should provide container's cpu limit [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Projected should provide container's cpu request [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Projected should provide container's memory limit [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Projected should provide container's memory request [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Projected should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Projected should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume] |
| 1 | 0 | 9 | [k8s.io] Projected should provide podname only [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Projected should set DefaultMode on files [Conformance] [Volume] |
| 1 | 0 | 7 | [k8s.io] Projected should set mode on item file [Conformance] [Volume] |
| 1 | 0 | 32 | [k8s.io] Projected should update annotations on modification [Conformance] [Volume] |
| 1 | 0 | 32 | [k8s.io] Projected should update labels on modification [Conformance] [Volume] |
| 1 | 0 | 94 | [k8s.io] Projected updates should be reflected in volume [Conformance] [Volume] |
| 1 | 0 | 10 | [k8s.io] Proxy version v1 should proxy logs on node [Conformance] |
| 1 | 0 | 10 | [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] |
| 1 | 0 | 10 | [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] |
| 1 | 0 | 10 | [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] |
| 1 | 0 | 26 | [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] |
| 1 | 0 | 6 | [k8s.io] Proxy version v1 should proxy to cadvisor |
| 1 | 0 | 10 | [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource |
| 1 | 0 | 23 | [k8s.io] ReplicaSet should adopt matching pods on creation |
| 1 | 0 | 16 | [k8s.io] ReplicaSet should release no longer matching pods |
| 1 | 0 | 20 | [k8s.io] ReplicaSet should serve a basic image on each replica with a private image |
| 1 | 0 | 20 | [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] |
| 1 | 0 | 8 | [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota |
| 1 | 0 | 23 | [k8s.io] ReplicationController should adopt matching pods on creation |
| 1 | 0 | 16 | [k8s.io] ReplicationController should release no longer matching pods |
1 |
0 |
20 |
[k8s.io] ReplicationController should serve a basic image on each replica with a private image |
1 |
0 |
20 |
[k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] |
1 |
0 |
8 |
[k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota |
1 |
0 |
115 |
[k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available |
1 |
0 |
16 |
[k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. |
1 |
0 |
16 |
[k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [Volume] |
1 |
0 |
16 |
[k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [Volume] |
1 |
0 |
18 |
[k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. |
1 |
0 |
16 |
[k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. |
1 |
0 |
22 |
[k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. |
1 |
0 |
16 |
[k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. |
1 |
0 |
7 |
[k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. |
1 |
0 |
26 |
[k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. |
1 |
0 |
27 |
[k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. |
1 |
0 |
328 |
[k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] |
1 |
0 |
316 |
[k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] |
1 |
0 |
346 |
[k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected |
1 |
0 |
70 |
[k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid |
1 |
0 |
366 |
[k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work |
1 |
0 |
175 |
[k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching |
1 |
0 |
250 |
[k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching |
1 |
0 |
251 |
[k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities |
1 |
0 |
233 |
[k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 |
1 |
0 |
267 |
[k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching |
1 |
0 |
113 |
[k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] |
1 |
0 |
195 |
[k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] |
1 |
0 |
90 |
[k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching |
1 |
0 |
149 |
[k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching |
1 |
0 |
119 |
[k8s.io] SchedulerPriorities [Serial] Pod should avoid to schedule to node that have avoidPod annotation |
1 |
0 |
191 |
[k8s.io] SchedulerPriorities [Serial] Pod should be prefer scheduled to node that satisify the NodeAffinity |
1 |
0 |
314 |
[k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms |
1 |
0 |
370 |
[k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that satisify the PodAffinity |
1 |
0 |
109 |
[k8s.io] SchedulerPriorities [Serial] Pod should perfer to scheduled to nodes pod can tolerate |
1 |
0 |
345 |
[k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node |
1 |
0 |
171 |
[k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node |
1 |
0 |
24 |
[k8s.io] Secrets optional updates should be reflected in volume [Conformance] [Volume] |
1 |
0 |
17 |
[k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume] |
1 |
0 |
7 |
[k8s.io] Secrets should be consumable from pods in env vars [Conformance] |
1 |
0 |
8 |
[k8s.io] Secrets should be consumable from pods in volume [Conformance] [Volume] |
1 |
0 |
7 |
[k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume] |
1 |
0 |
7 |
[k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] [Volume] |
1 |
0 |
7 |
[k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] [Volume] |
1 |
0 |
7 |
[k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume] |
1 |
0 |
7 |
[k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] [Volume] |
1 |
0 |
7 |
[k8s.io] Secrets should be consumable via the environment [Conformance] |
1 |
0 |
56 |
[k8s.io] Service endpoints latency should not be very high [Conformance] |
1 |
0 |
26 |
[k8s.io] ServiceAccounts should allow opting out of API token automount [Conformance] |
1 |
0 |
18 |
[k8s.io] ServiceAccounts should ensure a single API token exists |
1 |
0 |
17 |
[k8s.io] ServiceAccounts should mount an API token into pods [Conformance] |
1 |
0 |
461 |
[k8s.io] Services should be able to change the type and ports of a service [Slow] |
1 |
0 |
16 |
[k8s.io] Services should be able to create a functioning NodePort service |
1 |
0 |
62 |
[k8s.io] Services should be able to up and down services |
1 |
0 |
10 |
[k8s.io] Services should check NodePort out-of-range |
1 |
0 |
18 |
[k8s.io] Services should create endpoints for unready pods |
1 |
0 |
97 |
[k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] |
1 |
0 |
60 |
[k8s.io] Services should preserve source pod IP for traffic thru service cluster IP |
1 |
0 |
11 |
[k8s.io] Services should prevent NodePort collisions |
1 |
0 |
10 |
[k8s.io] Services should provide secure master service [Conformance] |
1 |
0 |
8 |
[k8s.io] Services should release NodePorts on delete |
1 |
0 |
18 |
[k8s.io] Services should serve a basic endpoint from pods [Conformance] |
1 |
0 |
33 |
[k8s.io] Services should serve multiport endpoints from pods [Conformance] |
1 |
0 |
10 |
[k8s.io] Services should use same NodePort with same port but different protocols |
1 |
0 |
16 |
[k8s.io] SSH should SSH to all nodes and run commands |
1 |
0 |
25 |
[k8s.io] Staging client repo client should create pods, delete pods, watch pods |
1 |
0 |
101 |
[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed |
1 |
0 |
153 |
[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy |
1 |
0 |
78 |
[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods |
1 |
0 |
111 |
[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should allow template updates |
1 |
0 |
131 |
[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails |
1 |
0 |
250 |
[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity |
1 |
0 |
37 |
[k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset |
1 |
0 |
10 |
[k8s.io] Sysctls should reject invalid sysctls |
1 |
0 |
7 |
[k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] |
1 |
0 |
7 |
[k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] |
1 |
0 |
7 |
[k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] |
1 |
0 |
88 |
[k8s.io] Volumes [Volume] [k8s.io] ConfigMap should be mountable |
1 |
0 |
101 |
[k8s.io] Volumes [Volume] [k8s.io] NFS should be mountable |
1 |
0 |
78 |
DumpClusterLogs |
1 |
0 |
17 |
Extract |
1 |
0 |
0 |
get kubeconfig |
1 |
0 |
0 |
IsUp |
1 |
0 |
0 |
kubectl version |