Kubernetes 24-Hour Test Report

ci-kubernetes-e2e-gce-gci-ci-master

Passed Failed Avg Time (s) Test
21 2 752 Test
22 1 29 [k8s.io] Pods should be submitted and removed [Conformance]
22 1 43 [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 should support forwarding over websockets
22 1 46 [k8s.io] Services should serve multiport endpoints from pods [Conformance]
46 0 102 [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume]
46 0 19 [k8s.io] Projected should be consumable from pods in volume [Conformance] [Volume]
46 0 19 [k8s.io] Projected should be consumable from pods in volume with defaultMode set [Conformance] [Volume]
46 0 20 [k8s.io] Projected should be consumable from pods in volume with mappings [Conformance] [Volume]
23 0 210 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods
23 0 173 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod
23 0 84 [k8s.io] AppArmor load AppArmor profiles should enforce an AppArmor profile
23 0 10 [k8s.io] Cadvisor should be healthy on every node.
23 0 20 [k8s.io] Certificates API should support building a client with a CSR
23 0 50 [k8s.io] Cluster level logging implemented by Stackdriver should ingest events
23 0 84 [k8s.io] Cluster level logging implemented by Stackdriver should ingest logs from applications
23 0 96 [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume]
23 0 20 [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] [Volume]
23 0 22 [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] [Volume]
23 0 24 [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] [Volume]
23 0 25 [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] [Volume]
23 0 23 [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set [Conformance] [Volume]
23 0 21 [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] [Volume]
23 0 20 [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] [Volume]
23 0 24 [k8s.io] ConfigMap should be consumable via environment variable [Conformance]
23 0 19 [k8s.io] ConfigMap should be consumable via the environment [Conformance]
23 0 95 [k8s.io] ConfigMap updates should be reflected in volume [Conformance] [Volume]
23 0 113 [k8s.io] CronJob should delete successful finished jobs with limit of one successful job
23 0 116 [k8s.io] CronJob should not emit unexpected warnings
23 0 57 [k8s.io] CronJob should remove from active list jobs that have been deleted
23 0 126 [k8s.io] CronJob should replace jobs when ReplaceConcurrent
23 0 131 [k8s.io] CronJob should schedule multiple jobs concurrently
23 0 14 [k8s.io] Deployment deployment can avoid hash collisions
23 0 23 [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods
23 0 25 [k8s.io] Deployment deployment should delete old replica sets
23 0 25 [k8s.io] Deployment deployment should label adopted RSs and pods
23 0 53 [k8s.io] Deployment deployment should support rollback
23 0 62 [k8s.io] Deployment deployment should support rollback when there's replica set with no revision
23 0 47 [k8s.io] Deployment deployment should support rollover
23 0 75 [k8s.io] Deployment iterative rollouts should eventually progress
23 0 33 [k8s.io] Deployment lack of progress should be reported in the deployment status
23 0 14 [k8s.io] Deployment overlapping deployment should not fight with each other
23 0 18 [k8s.io] Deployment paused deployment should be able to scale
23 0 31 [k8s.io] Deployment paused deployment should be ignored by the controller
23 0 21 [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones
23 0 41 [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones
23 0 118 [k8s.io] Deployment scaled rollout deployment should not block on annotation check
23 0 27 [k8s.io] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
23 0 46 [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction
23 0 54 [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
23 0 53 [k8s.io] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
23 0 112 [k8s.io] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction
23 0 20 [k8s.io] DisruptionController evictions: no PDB => should allow an eviction
23 0 101 [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction
23 0 104 [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction
23 0 12 [k8s.io] DisruptionController should create a PodDisruptionBudget
23 0 45 [k8s.io] DisruptionController should update PodDisruptionBudget status
23 0 44 [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
23 0 86 [k8s.io] DNS should provide DNS for ExternalName services
23 0 37 [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation
23 0 39 [k8s.io] DNS should provide DNS for services [Conformance]
23 0 43 [k8s.io] DNS should provide DNS for the cluster [Conformance]
23 0 26 [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance]
23 0 20 [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance]
23 0 20 [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance]
23 0 17 [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance]
23 0 17 [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance]
23 0 20 [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance]
23 0 20 [k8s.io] Downward API should provide pod and host IP as an env var [Conformance]
23 0 19 [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance]
23 0 20 [k8s.io] Downward API volume should provide container's cpu limit [Conformance] [Volume]
23 0 18 [k8s.io] Downward API volume should provide container's cpu request [Conformance] [Volume]
23 0 19 [k8s.io] Downward API volume should provide container's memory limit [Conformance] [Volume]
23 0 18 [k8s.io] Downward API volume should provide container's memory request [Conformance] [Volume]
23 0 21 [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume]
23 0 21 [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume]
23 0 21 [k8s.io] Downward API volume should provide podname only [Conformance] [Volume]
23 0 21 [k8s.io] Downward API volume should set DefaultMode on files [Conformance] [Volume]
23 0 21 [k8s.io] Downward API volume should set mode on item file [Conformance] [Volume]
23 0 78 [k8s.io] Downward API volume should update annotations on modification [Conformance] [Volume]
23 0 82 [k8s.io] Downward API volume should update labels on modification [Conformance] [Volume]
23 0 40 [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner should test that deleting a claim before the volume is provisioned deletes the volume. [Volume]
23 0 18 [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] [Volume]
23 0 20 [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] [Volume]
23 0 24 [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] [Volume]
23 0 20 [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] [Volume]
23 0 19 [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] [Volume]
23 0 19 [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] [Volume]
23 0 19 [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] [Volume]
23 0 20 [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] [Volume]
23 0 21 [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] [Volume]
23 0 21 [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] [Volume]
23 0 21 [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] [Volume]
23 0 20 [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] [Volume]
23 0 21 [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] [Volume]
23 0 20 [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] [Volume]
23 0 136 [k8s.io] EmptyDir wrapper volumes should not conflict [Volume]
23 0 31 [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
23 0 16 [k8s.io] Firewall rule should have correct firewall rules for e2e cluster
23 0 22 [k8s.io] Garbage collector should delete pods created by rc when not orphaning
23 0 10 [k8s.io] Garbage collector should delete RS created by deployment when not orphaning
23 0 65 [k8s.io] Garbage collector should orphan pods created by rc if delete options say so
23 0 44 [k8s.io] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
23 0 15 [k8s.io] Garbage collector should orphan RS created by deployment when deleteOptions.OrphanDependents is true
23 0 96 [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable [Volume]
23 0 76 [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume]
23 0 81 [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 [Volume]
23 0 34 [k8s.io] Generated release_1_5 clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
23 0 8 [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs
23 0 24 [k8s.io] HostPath should give a volume the correct mode [Conformance] [Volume]
23 0 19 [k8s.io] HostPath should support existing directory subPath [Volume]
23 0 22 [k8s.io] HostPath should support existing single file subPath [Volume]
23 0 17 [k8s.io] HostPath should support r/w [Volume]
23 0 20 [k8s.io] HostPath should support subPath [Volume]
23 0 57 [k8s.io] InitContainer should invoke init containers on a RestartAlways pod
23 0 34 [k8s.io] InitContainer should invoke init containers on a RestartNever pod
23 0 34 [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod
23 0 140 [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod
23 0 57 [k8s.io] Initializers should be invisible to controllers by default
23 0 49 [k8s.io] Job should adopt matching orphans and release non-matching pods
23 0 24 [k8s.io] Job should delete a job
23 0 42 [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted
23 0 57 [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted
23 0 34 [k8s.io] Job should run a job to completion when tasks succeed
23 0 10 [k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob
23 0 11 [k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob should create a ScheduledJob
23 0 132 [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
23 0 10 [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
23 0 35 [k8s.io] Kubectl client [k8s.io] Kubectl apply apply set/view last-applied
23 0 31 [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC
23 0 10 [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC
23 0 10 [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
23 0 9 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes
23 0 9 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes
23 0 12 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes
23 0 48 [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
23 0 46 [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
23 0 22 [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
23 0 32 [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
23 0 40 [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
23 0 36 [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
23 0 50 [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
23 0 30 [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
23 0 28 [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
23 0 22 [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
23 0 27 [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
23 0 22 [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
23 0 34 [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
23 0 10 [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
23 0 10 [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
23 0 10 [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
23 0 41 [k8s.io] Kubectl client [k8s.io] Simple pod should handle in-cluster config
23 0 108 [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes
23 0 31 [k8s.io] Kubectl client [k8s.io] Simple pod should support exec
23 0 30 [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy
23 0 77 [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach
23 0 34 [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward
23 0 35 [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
23 0 80 [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
23 0 66 [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
23 0 73 [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
23 0 60 [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance]
23 0 16 [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive
23 0 32 [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
23 0 9 [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager.
23 0 9 [k8s.io] MetricsGrabber should grab all metrics from a Kubelet.
23 0 10 [k8s.io] MetricsGrabber should grab all metrics from a Scheduler.
23 0 9 [k8s.io] MetricsGrabber should grab all metrics from API server.
23 0 16 [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster.
23 0 34 [k8s.io] Network should set TCP CLOSE_WAIT timeout
23 0 66 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance]
23 0 71 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance]
23 0 73 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance]
23 0 77 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance]
23 0 73 [k8s.io] Networking should check kube-proxy urls
23 0 22 [k8s.io] Networking should provide Internet connection for containers [Conformance]
23 0 9 [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services
23 0 77 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted. [Volume]
23 0 60 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access
23 0 85 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access
23 0 68 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access
23 0 57 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access
23 0 58 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access
23 0 68 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access
23 0 99 [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
23 0 95 [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk
23 0 102 [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
23 0 9 [k8s.io] Pod Disks should be able to delete a non-existent PD without error
23 0 45 [k8s.io] PodPreset should create a pod preset
23 0 38 [k8s.io] PodPreset should not modify the pod on conflict
23 0 28 [k8s.io] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
23 0 29 [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance]
23 0 39 [k8s.io] Pods should be updated [Conformance]
23 0 48 [k8s.io] Pods should contain environment variables for services [Conformance]
23 0 45 [k8s.io] Pods should get a host IP [Conformance]
23 0 39 [k8s.io] Pods should support remote command execution over websockets
23 0 35 [k8s.io] Pods should support retrieving logs from the container over websockets
23 0 59 [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects
23 0 51 [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects
23 0 50 [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects
23 0 61 [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects
23 0 51 [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects
23 0 54 [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects
23 0 47 [k8s.io] Port forwarding [k8s.io] With a server listening on localhost should support forwarding over websockets
23 0 45 [k8s.io] PreStop should call prestop when killing a pod [Conformance]
23 0 37 [k8s.io] PrivilegedPod should enable privileged commands
23 0 142 [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [Conformance]
23 0 141 [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
23 0 42 [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance]
23 0 40 [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
23 0 57 [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance]
23 0 92 [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance]
23 0 30 [k8s.io] Projected should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume]
23 0 20 [k8s.io] Projected should be consumable from pods in volume as non-root [Conformance] [Volume]
23 0 19 [k8s.io] Projected should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume]
23 0 21 [k8s.io] Projected should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume]
23 0 22 [k8s.io] Projected should be consumable from pods in volume with mappings and Item mode set [Conformance] [Volume]
23 0 21 [k8s.io] Projected should be consumable from pods in volume with mappings as non-root [Conformance] [Volume]
23 0 19 [k8s.io] Projected should be consumable in multiple volumes in a pod [Conformance] [Volume]
23 0 19 [k8s.io] Projected should be consumable in multiple volumes in the same pod [Conformance] [Volume]
23 0 19 [k8s.io] Projected should project all components that make up the projection API [Conformance] [Volume] [Projection]
23 0 19 [k8s.io] Projected should provide container's cpu limit [Conformance] [Volume]
23 0 18 [k8s.io] Projected should provide container's cpu request [Conformance] [Volume]
23 0 20 [k8s.io] Projected should provide container's memory limit [Conformance] [Volume]
23 0 17 [k8s.io] Projected should provide container's memory request [Conformance] [Volume]
23 0 21 [k8s.io] Projected should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume]
23 0 20 [k8s.io] Projected should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume]
23 0 22 [k8s.io] Projected should provide podname only [Conformance] [Volume]
23 0 20 [k8s.io] Projected should set DefaultMode on files [Conformance] [Volume]
23 0 18 [k8s.io] Projected should set mode on item file [Conformance] [Volume]
23 0 91 [k8s.io] Projected should update annotations on modification [Conformance] [Volume]
23 0 81 [k8s.io] Projected should update labels on modification [Conformance] [Volume]
23 0 102 [k8s.io] Projected updates should be reflected in volume [Conformance] [Volume]
23 0 10 [k8s.io] Proxy version v1 should proxy logs on node [Conformance]
23 0 9 [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
23 0 12 [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance]
23 0 11 [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
23 0 33 [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance]
23 0 9 [k8s.io] Proxy version v1 should proxy to cadvisor
23 0 10 [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource
23 0 38 [k8s.io] ReplicaSet should adopt matching pods on creation
23 0 17 [k8s.io] ReplicaSet should release no longer matching pods
23 0 27 [k8s.io] ReplicaSet should serve a basic image on each replica with a private image
23 0 28 [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
23 0 15 [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota
23 0 46 [k8s.io] ReplicationController should adopt matching pods on creation
23 0 15 [k8s.io] ReplicationController should release no longer matching pods
23 0 27 [k8s.io] ReplicationController should serve a basic image on each replica with a private image
23 0 24 [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance]
23 0 13 [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota
23 0 15 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap.
23 0 16 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [Volume]
23 0 15 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [Volume]
23 0 18 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod.
23 0 15 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller.
23 0 23 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret.
23 0 17 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service.
23 0 11 [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated.
23 0 25 [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope.
23 0 25 [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes.
23 0 97 [k8s.io] Secrets optional updates should be reflected in volume [Conformance] [Volume]
23 0 31 [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume]
23 0 19 [k8s.io] Secrets should be consumable from pods in env vars [Conformance]
23 0 18 [k8s.io] Secrets should be consumable from pods in volume [Conformance] [Volume]
23 0 20 [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume]
23 0 25 [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] [Volume]
23 0 21 [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] [Volume]
23 0 19 [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume]
23 0 21 [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] [Volume]
23 0 24 [k8s.io] Secrets should be consumable via the environment [Conformance]
23 0 10 [k8s.io] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata
23 0 11 [k8s.io] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
23 0 29 [k8s.io] Servers with support for Table transformation should return pod details
23 0 51 [k8s.io] Service endpoints latency should not be very high [Conformance]
23 0 31 [k8s.io] ServiceAccounts should allow opting out of API token automount [Conformance]
23 0 16 [k8s.io] ServiceAccounts should ensure a single API token exists
23 0 44 [k8s.io] ServiceAccounts should mount an API token into pods [Conformance]
23 0 9 [k8s.io] Services should be able to change the type from ClusterIP to ExternalName
23 0 9 [k8s.io] Services should be able to change the type from ExternalName to ClusterIP
23 0 10 [k8s.io] Services should be able to change the type from ExternalName to NodePort
23 0 10 [k8s.io] Services should be able to change the type from NodePort to ExternalName
23 0 36 [k8s.io] Services should be able to create a functioning NodePort service
23 0 147 [k8s.io] Services should be able to up and down services
23 0 9 [k8s.io] Services should check NodePort out-of-range
23 0 195 [k8s.io] Services should create endpoints for unready pods
23 0 76 [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP
23 0 10 [k8s.io] Services should prevent NodePort collisions
23 0 9 [k8s.io] Services should provide secure master service [Conformance]
23 0 20 [k8s.io] Services should release NodePorts on delete
23 0 41 [k8s.io] Services should serve a basic endpoint from pods [Conformance]
23 0 10 [k8s.io] Services should use same NodePort with same port but different protocols
23 0 15 [k8s.io] SSH should SSH to all nodes and run commands
23 0 30 [k8s.io] Staging client repo client should create pods, delete pods, watch pods
23 0 151 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods
23 0 207 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy
23 0 88 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
23 0 180 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
23 0 182 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
23 0 187 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications
23 0 310 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications
23 0 297 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
23 0 52 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset
23 0 14 [k8s.io] Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node
23 0 9 [k8s.io] Sysctls should reject invalid sysctls
23 0 21 [k8s.io] Sysctls should support sysctls
23 0 17 [k8s.io] Sysctls should support unsafe sysctls which are actually whitelisted
23 0 19 [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance]
23 0 21 [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance]
23 0 23 [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance]
23 0 31 [k8s.io] Volumes [Volume] [k8s.io] ConfigMap should be mountable
23 0 84 [k8s.io] Volumes [Volume] [k8s.io] NFS should be mountable
23 0 36 CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
23 0 0 Deferred TearDown
23 0 0 DiffResources
23 0 17 Extract
23 0 0 get kubeconfig
23 0 0 IsUp
23 0 0 kubectl version
23 0 0 list nodes
23 0 8 ListResources After
23 0 7 ListResources Before
23 0 8 ListResources Down
23 0 9 ListResources Up
23 0 335 TearDown
23 0 19 TearDown Previous
23 0 360 Up
2 0 69 DumpClusterLogs
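The table above is a flat whitespace-separated dump: each row is `passed failed avg-time-seconds test-name`, with job stages (Up, TearDown, DumpClusterLogs) mixed in alongside the e2e specs. A minimal sketch of pulling the failing rows back out of such a report — the `parse_report` and `failing` helpers are hypothetical names, not part of any Kubernetes tooling:

```python
import re

def parse_report(lines):
    """Parse rows shaped 'passed failed avg_time test-name'; skip anything else."""
    rows = []
    for line in lines:
        m = re.match(r"^(\d+)\s+(\d+)\s+(\d+)\s+(.+)$", line.strip())
        if m:
            rows.append({
                "passed": int(m.group(1)),
                "failed": int(m.group(2)),
                "avg_s": int(m.group(3)),
                "test": m.group(4),
            })
    return rows

def failing(rows):
    """Rows with at least one failure, most failures first."""
    return sorted((r for r in rows if r["failed"] > 0),
                  key=lambda r: r["failed"], reverse=True)

sample = [
    "22 1 29 [k8s.io] Pods should be submitted and removed [Conformance]",
    "23 0 24 [k8s.io] Job should delete a job",
]
rows = parse_report(sample)
print([r["test"] for r in failing(rows)])
# → ['[k8s.io] Pods should be submitted and removed [Conformance]']
```

The bracketed tags (`[Conformance]`, `[Volume]`) stay inside the captured test name, so a downstream filter can match on them with a plain substring check.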