Kubernetes 24-Hour Test Report

Job: ci-kubernetes-e2e-gci-gke-multizone

Passed  Failed  Avg Time (s)  Test
0 8 621 [k8s.io] Cluster level logging implemented by Stackdriver should ingest events
0 8 327 [k8s.io] EmptyDir wrapper volumes should not conflict [Volume]
0 8 777 [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
0 8 428 [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
0 8 420 [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
0 8 146 [k8s.io] Network should set TCP CLOSE_WAIT timeout
0 8 398 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance]
0 8 403 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance]
0 8 348 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted. [Volume]
0 8 341 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access
0 8 338 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access
0 8 343 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access
0 8 270 [k8s.io] PreStop should call prestop when killing a pod [Conformance]
0 8 1586 [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance]
0 8 271 [k8s.io] ReplicaSet should serve a basic image on each replica with a private image
0 8 352 [k8s.io] Services should be able to up and down services
0 8 1390 [k8s.io] Services should create endpoints for unready pods
0 8 209 [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP
0 8 8608 Test
8 7 964 Up
1 7 326 [k8s.io] DNS should provide DNS for the cluster [Conformance]
1 7 392 [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
1 7 254 [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive
1 7 307 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access
1 7 304 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access
1 7 254 [k8s.io] ReplicationController should serve a basic image on each replica with a private image
1 7 334 [k8s.io] Volumes [Volume] [k8s.io] NFS should be mountable
0 7 955 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods
0 7 970 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod
2 6 290 [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation
2 6 265 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access
2 6 249 [k8s.io] Services should be able to create a functioning NodePort service
3 5 328 [k8s.io] DNS should provide DNS for services [Conformance]
3 5 81 [k8s.io] Kubectl client [k8s.io] Simple pod should handle in-cluster config
3 5 190 [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
4 4 175 [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance]
0 2 0 AfterSuite
7 1 86 [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects
7 1 118 [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects
0 1 848 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance]
16 0 45 [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume]
16 0 10 [k8s.io] Projected should be consumable from pods in volume [Conformance] [Volume]
16 0 11 [k8s.io] Projected should be consumable from pods in volume with defaultMode set [Conformance] [Volume]
16 0 9 [k8s.io] Projected should be consumable from pods in volume with mappings [Conformance] [Volume]
15 0 70 Deferred TearDown
15 0 17 Extract
15 0 7 ListResources Before
15 0 5 TearDown Previous
8 0 9 [k8s.io] Cadvisor should be healthy on every node.
8 0 17 [k8s.io] Certificates API should support building a client with a CSR
8 0 97 [k8s.io] Cluster level logging implemented by Stackdriver should ingest logs from applications
8 0 68 [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume]
8 0 10 [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] [Volume]
8 0 9 [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] [Volume]
8 0 9 [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] [Volume]
8 0 9 [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] [Volume]
8 0 10 [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume]
8 0 12 [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] [Volume]
8 0 9 [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] [Volume]
8 0 15 [k8s.io] ConfigMap should be consumable via environment variable [Conformance]
8 0 9 [k8s.io] ConfigMap should be consumable via the environment [Conformance]
8 0 60 [k8s.io] ConfigMap updates should be reflected in volume [Conformance] [Volume]
8 0 13 [k8s.io] Deployment deployment can avoid hash collisions
8 0 15 [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods
8 0 18 [k8s.io] Deployment deployment should delete old replica sets
8 0 16 [k8s.io] Deployment deployment should label adopted RSs and pods
8 0 31 [k8s.io] Deployment deployment should support rollback
8 0 35 [k8s.io] Deployment deployment should support rollback when there's replica set with no revision
8 0 33 [k8s.io] Deployment deployment should support rollover
8 0 40 [k8s.io] Deployment iterative rollouts should eventually progress
8 0 30 [k8s.io] Deployment lack of progress should be reported in the deployment status
8 0 11 [k8s.io] Deployment overlapping deployment should not fight with each other
8 0 16 [k8s.io] Deployment paused deployment should be able to scale
8 0 25 [k8s.io] Deployment paused deployment should be ignored by the controller
8 0 12 [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones
8 0 20 [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones
8 0 42 [k8s.io] Deployment scaled rollout deployment should not block on annotation check
8 0 15 [k8s.io] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
8 0 27 [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction
8 0 29 [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
8 0 41 [k8s.io] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
8 0 95 [k8s.io] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction
8 0 10 [k8s.io] DisruptionController evictions: no PDB => should allow an eviction
8 0 90 [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction
8 0 111 [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction
8 0 8 [k8s.io] DisruptionController should create a PodDisruptionBudget
8 0 37 [k8s.io] DisruptionController should update PodDisruptionBudget status
8 0 45 [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
8 0 10 [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance]
8 0 9 [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance]
8 0 12 [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance]
8 0 10 [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance]
8 0 11 [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance]
8 0 10 [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance]
8 0 11 [k8s.io] Downward API should provide pod and host IP as an env var [Conformance]
8 0 11 [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance]
8 0 12 [k8s.io] Downward API volume should provide container's cpu limit [Conformance] [Volume]
8 0 9 [k8s.io] Downward API volume should provide container's cpu request [Conformance] [Volume]
8 0 10 [k8s.io] Downward API volume should provide container's memory limit [Conformance] [Volume]
8 0 10 [k8s.io] Downward API volume should provide container's memory request [Conformance] [Volume]
8 0 10 [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume]
8 0 11 [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume]
8 0 12 [k8s.io] Downward API volume should provide podname only [Conformance] [Volume]
8 0 9 [k8s.io] Downward API volume should set DefaultMode on files [Conformance] [Volume]
8 0 12 [k8s.io] Downward API volume should set mode on item file [Conformance] [Volume]
8 0 87 [k8s.io] Downward API volume should update annotations on modification [Conformance] [Volume]
8 0 60 [k8s.io] Downward API volume should update labels on modification [Conformance] [Volume]
8 0 35 [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner should test that deleting a claim before the volume is provisioned deletes the volume. [Volume]
8 0 10 [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] [Volume]
8 0 11 [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] [Volume]
8 0 11 [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] [Volume]
8 0 10 [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] [Volume]
8 0 11 [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] [Volume]
8 0 10 [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] [Volume]
8 0 11 [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] [Volume]
8 0 11 [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] [Volume]
8 0 9 [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] [Volume]
8 0 10 [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] [Volume]
8 0 9 [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] [Volume]
8 0 11 [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] [Volume]
8 0 20 [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] [Volume]
8 0 10 [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] [Volume]
8 0 32 [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
8 0 18 [k8s.io] Garbage collector should delete pods created by rc when not orphaning
8 0 8 [k8s.io] Garbage collector should delete RS created by deployment when not orphaning
8 0 53 [k8s.io] Garbage collector should orphan pods created by rc if delete options say so
8 0 43 [k8s.io] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
8 0 13 [k8s.io] Garbage collector should orphan RS created by deployment when deleteOptions.OrphanDependents is true
8 0 16 [k8s.io] Generated release_1_5 clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
8 0 7 [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs
8 0 9 [k8s.io] HostPath should give a volume the correct mode [Conformance] [Volume]
8 0 11 [k8s.io] HostPath should support existing directory subPath [Volume]
8 0 10 [k8s.io] HostPath should support existing single file subPath [Volume]
8 0 10 [k8s.io] HostPath should support r/w [Volume]
8 0 11 [k8s.io] HostPath should support subPath [Volume]
8 0 33 [k8s.io] InitContainer should invoke init containers on a RestartAlways pod
8 0 16 [k8s.io] InitContainer should invoke init containers on a RestartNever pod
8 0 12 [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod
8 0 121 [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod
8 0 42 [k8s.io] Initializers should be invisible to controllers by default
8 0 57 [k8s.io] Job should adopt matching orphans and release non-matching pods
8 0 11 [k8s.io] Job should delete a job
8 0 15 [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted
8 0 26 [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted
8 0 16 [k8s.io] Job should run a job to completion when tasks succeed
8 0 8 [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
8 0 39 [k8s.io] Kubectl client [k8s.io] Kubectl apply apply set/view last-applied
8 0 24 [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC
8 0 8 [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC
8 0 8 [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
8 0 8 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes
8 0 8 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes
8 0 8 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes
8 0 28 [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
8 0 35 [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
8 0 21 [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
8 0 22 [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
8 0 27 [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
8 0 23 [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
8 0 30 [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
8 0 12 [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
8 0 18 [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
8 0 20 [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
8 0 33 [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
8 0 14 [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
8 0 20 [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
8 0 7 [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
8 0 8 [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
8 0 7 [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
8 0 50 [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes
8 0 23 [k8s.io] Kubectl client [k8s.io] Simple pod should support exec
8 0 20 [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy
8 0 42 [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach
8 0 18 [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward
8 0 50 [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
8 0 53 [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance]
8 0 24 [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
8 0 7 [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager.
8 0 7 [k8s.io] MetricsGrabber should grab all metrics from a Kubelet.
8 0 7 [k8s.io] MetricsGrabber should grab all metrics from a Scheduler.
8 0 7 [k8s.io] MetricsGrabber should grab all metrics from API server.
8 0 21 [k8s.io] Multi-AZ Clusters should spread the pods of a replication controller across zones
8 0 24 [k8s.io] Multi-AZ Clusters should spread the pods of a service across zones
8 0 50 [k8s.io] Networking should check kube-proxy urls
8 0 14 [k8s.io] Networking should provide Internet connection for containers [Conformance]
8 0 7 [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services
8 0 113 [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
8 0 101 [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk
8 0 121 [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
8 0 25 [k8s.io] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
8 0 13 [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance]
8 0 18 [k8s.io] Pods should be submitted and removed [Conformance]
8 0 30 [k8s.io] Pods should be updated [Conformance]
8 0 39 [k8s.io] Pods should contain environment variables for services [Conformance]
8 0 27 [k8s.io] Pods should get a host IP [Conformance]
8 0 48 [k8s.io] Pods should support remote command execution over websockets
8 0 46 [k8s.io] Pods should support retrieving logs from the container over websockets
8 0 35 [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects
8 0 38 [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects
8 0 36 [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects
8 0 30 [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 should support forwarding over websockets
8 0 87 [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects
8 0 86 [k8s.io] Port forwarding [k8s.io] With a server listening on localhost should support forwarding over websockets
8 0 48 [k8s.io] PrivilegedPod should enable privileged commands
8 0 131 [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [Conformance]
8 0 130 [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
8 0 30 [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance]
8 0 60 [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
8 0 43 [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance]
8 0 84 [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance]
8 0 17 [k8s.io] Projected should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume]
8 0 10 [k8s.io] Projected should be consumable from pods in volume as non-root [Conformance] [Volume]
8 0 16 [k8s.io] Projected should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume]
8 0 10 [k8s.io] Projected should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume]
8 0 17 [k8s.io] Projected should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume]
8 0 10 [k8s.io] Projected should be consumable from pods in volume with mappings as non-root [Conformance] [Volume]
8 0 10 [k8s.io] Projected should be consumable in multiple volumes in a pod [Conformance] [Volume]
8 0 10 [k8s.io] Projected should be consumable in multiple volumes in the same pod [Conformance] [Volume]
8 0 11 [k8s.io] Projected should project all components that make up the projection API [Conformance] [Volume] [Projection]
8 0 10 [k8s.io] Projected should provide container's cpu limit [Conformance] [Volume]
8 0 10 [k8s.io] Projected should provide container's cpu request [Conformance] [Volume]
8 0 10 [k8s.io] Projected should provide container's memory limit [Conformance] [Volume]
8 0 12 [k8s.io] Projected should provide container's memory request [Conformance] [Volume]
8 0 11 [k8s.io] Projected should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume]
8 0 12 [k8s.io] Projected should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume]
8 0 11 [k8s.io] Projected should provide podname only [Conformance] [Volume]
8 0 10 [k8s.io] Projected should set DefaultMode on files [Conformance] [Volume]
8 0 10 [k8s.io] Projected should set mode on item file [Conformance] [Volume]
8 0 68 [k8s.io] Projected should update annotations on modification [Conformance] [Volume]
8 0 66 [k8s.io] Projected should update labels on modification [Conformance] [Volume]
8 0 85 [k8s.io] Projected updates should be reflected in volume [Conformance] [Volume]
8 0 8 [k8s.io] Proxy version v1 should proxy logs on node [Conformance]
8 0 7 [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
8 0 7 [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance]
8 0 7 [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
8 0 8 [k8s.io] Proxy version v1 should proxy to cadvisor
8 0 7 [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource
8 0 29 [k8s.io] ReplicaSet should adopt matching pods on creation
8 0 9 [k8s.io] ReplicaSet should release no longer matching pods
8 0 10 [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota
8 0 32 [k8s.io] ReplicationController should adopt matching pods on creation
8 0 10 [k8s.io] ReplicationController should release no longer matching pods
8 0 10 [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota
8 0 13 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap.
8 0 14 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [Volume]
8 0 14 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [Volume]
8 0 15 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod.
8 0 14 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller.
8 0 19 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret.
8 0 14 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service.
8 0 10 [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated.
8 0 24 [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope.
8 0 24 [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes.
8 0 69 [k8s.io] Secrets optional updates should be reflected in volume [Conformance] [Volume]
8 0 25 [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume]
8 0 9 [k8s.io] Secrets should be consumable from pods in env vars [Conformance]
8 0 12 [k8s.io] Secrets should be consumable from pods in volume [Conformance] [Volume]
8 0 9 [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume]
8 0 9 [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] [Volume]
8 0 11 [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] [Volume]
8 0 10 [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume]
8 0 10 [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] [Volume]
8 0 9 [k8s.io] Secrets should be consumable via the environment [Conformance]
8 0 7 [k8s.io] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata
8 0 7 [k8s.io] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
8 0 25 [k8s.io] Servers with support for Table transformation should return pod details
8 0 29 [k8s.io] Service endpoints latency should not be very high [Conformance]
8 0 18 [k8s.io] ServiceAccounts should allow opting out of API token automount [Conformance]
8 0 15 [k8s.io] ServiceAccounts should ensure a single API token exists
8 0 17 [k8s.io] ServiceAccounts should mount an API token into pods [Conformance]
8 0 7 [k8s.io] Services should be able to change the type from ClusterIP to ExternalName
8 0 7 [k8s.io] Services should be able to change the type from ExternalName to ClusterIP
8 0 7 [k8s.io] Services should be able to change the type from ExternalName to NodePort
8 0 7 [k8s.io] Services should be able to change the type from NodePort to ExternalName
8 0 8 [k8s.io] Services should check NodePort out-of-range
8 0 7 [k8s.io] Services should prevent NodePort collisions
8 0 7 [k8s.io] Services should provide secure master service [Conformance]
8 0 12 [k8s.io] Services should release NodePorts on delete
8 0 26 [k8s.io] Services should serve a basic endpoint from pods [Conformance]
8 0 20 [k8s.io] Services should serve multiport endpoints from pods [Conformance]
8 0 7 [k8s.io] Services should use same NodePort with same port but different protocols
8 0 13 [k8s.io] SSH should SSH to all nodes and run commands
8 0 24 [k8s.io] Staging client repo client should create pods, delete pods, watch pods
8 0 116 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods
8 0 164 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy
8 0 63 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
8 0 123 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
8 0 128 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
8 0 138 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications
8 0 246 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications
8 0 211 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
8 0 30 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset
8 0 7 [k8s.io] Sysctls should reject invalid sysctls
8 0 9 [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance]
8 0 12 [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance]
8 0 12 [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance]
8 0 47 [k8s.io] Volumes [Volume] [k8s.io] ConfigMap should be mountable
8 0 12 CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
8 0 0 DiffResources
8 0 29 DumpClusterLogs
8 0 0 get kubeconfig
8 0 0 IsUp
8 0 0 kubectl version
8 0 0 list nodes
8 0 9 ListResources After
8 0 9 ListResources Down
8 0 7 ListResources Up
8 0 196 TearDown
7 0 31 DumpClusterLogs (--up failed)