Kubernetes 24-Hour Test Report

Job: ci-kubernetes-soak-gce-1-7-test

Each row aggregates one test case across all runs in the 24-hour soak window: the number of passing runs, the number of failing runs, and the average duration in seconds.

Passed Failed Avg Time (s) Test
2 1 3094 [k8s.io] Kubectl client [k8s.io] Simple pod should support exec
2 1 1505 [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node
2 1 31294 Test (overall e2e test step of the job run)
6 0 53 [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume]
6 0 10 [k8s.io] Projected should be consumable from pods in volume [Conformance] [Volume]
6 0 9 [k8s.io] Projected should be consumable from pods in volume with defaultMode set [Conformance] [Volume]
6 0 9 [k8s.io] Projected should be consumable from pods in volume with mappings [Conformance] [Volume]
3 0 613 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
3 0 616 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
3 0 442 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
3 0 518 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
3 0 960 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
3 0 1040 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
3 0 191 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods
3 0 163 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod
3 0 154 [k8s.io] Addon update should propagate add-on file changes [Slow]
3 0 6 [k8s.io] Cadvisor should be healthy on every node.
3 0 17 [k8s.io] Certificates API should support building a client with a CSR
3 0 154 [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL
3 0 88 [k8s.io] Cluster level logging using GCL should ingest events
3 0 81 [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume]
3 0 8 [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] [Volume]
3 0 9 [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] [Volume]
3 0 9 [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] [Volume]
3 0 8 [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] [Volume]
3 0 8 [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume]
3 0 8 [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] [Volume]
3 0 9 [k8s.io] ConfigMap should be consumable via environment variable [Conformance]
3 0 8 [k8s.io] ConfigMap should be consumable via the environment [Conformance]
3 0 28 [k8s.io] ConfigMap updates should be reflected in volume [Conformance] [Volume]
3 0 71 [k8s.io] CronJob should adopt Jobs it owns that don't have ControllerRef yet
3 0 116 [k8s.io] CronJob should delete successful finished jobs with limit of one successful job
3 0 114 [k8s.io] CronJob should not emit unexpected warnings
3 0 306 [k8s.io] CronJob should not schedule jobs when suspended [Slow]
3 0 344 [k8s.io] CronJob should not schedule new jobs when ForbidConcurrent [Slow]
3 0 50 [k8s.io] CronJob should remove from active list jobs that have been deleted
3 0 126 [k8s.io] CronJob should replace jobs when ReplaceConcurrent
3 0 130 [k8s.io] CronJob should schedule multiple jobs concurrently
3 0 33 [k8s.io] Daemon set [Serial] Should adopt existing pods when creating a RollingUpdate DaemonSet regardless of templateGeneration
3 0 28 [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is OnDelete
3 0 30 [k8s.io] Daemon set [Serial] should retry creating failed daemon pods
3 0 29 [k8s.io] Daemon set [Serial] Should rollback without unnecessary restarts
3 0 27 [k8s.io] Daemon set [Serial] should run and stop complex daemon
3 0 22 [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity
3 0 29 [k8s.io] Daemon set [Serial] should run and stop simple daemon
3 0 63 [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate
3 0 11 [k8s.io] Deployment deployment can avoid hash collisions
3 0 13 [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods
3 0 11 [k8s.io] Deployment deployment should delete old replica sets
3 0 17 [k8s.io] Deployment deployment should label adopted RSs and pods
3 0 29 [k8s.io] Deployment deployment should support rollback
3 0 30 [k8s.io] Deployment deployment should support rollback when there's replica set with no revision
3 0 31 [k8s.io] Deployment deployment should support rollover
3 0 62 [k8s.io] Deployment iterative rollouts should eventually progress
3 0 22 [k8s.io] Deployment lack of progress should be reported in the deployment status
3 0 10 [k8s.io] Deployment overlapping deployment should not fight with each other
3 0 14 [k8s.io] Deployment paused deployment should be able to scale
3 0 16 [k8s.io] Deployment paused deployment should be ignored by the controller
3 0 11 [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones
3 0 17 [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones
3 0 50 [k8s.io] Deployment scaled rollout deployment should not block on annotation check
3 0 28 [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction
3 0 32 [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
3 0 86 [k8s.io] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction
3 0 10 [k8s.io] DisruptionController evictions: no PDB => should allow an eviction
3 0 86 [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction
3 0 86 [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction
3 0 6 [k8s.io] DisruptionController should create a PodDisruptionBudget
3 0 28 [k8s.io] DisruptionController should update PodDisruptionBudget status
3 0 68 [k8s.io] DNS configMap federations should be able to change federation configuration [Slow][Serial]
3 0 45 [k8s.io] DNS configMap nameserver should be able to change stubDomain configuration [Slow][Serial]
3 0 629 [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
3 0 39 [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
3 0 44 [k8s.io] DNS should provide DNS for ExternalName services
3 0 20 [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation
3 0 21 [k8s.io] DNS should provide DNS for services [Conformance]
3 0 19 [k8s.io] DNS should provide DNS for the cluster [Conformance]
3 0 9 [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance]
3 0 9 [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance]
3 0 10 [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance]
3 0 8 [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance]
3 0 9 [k8s.io] Downward API volume should provide container's cpu limit [Conformance] [Volume]
3 0 9 [k8s.io] Downward API volume should provide container's cpu request [Conformance] [Volume]
3 0 8 [k8s.io] Downward API volume should provide container's memory limit [Conformance] [Volume]
3 0 8 [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume]
3 0 8 [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume]
3 0 9 [k8s.io] Downward API volume should provide podname only [Conformance] [Volume]
3 0 8 [k8s.io] Downward API volume should set DefaultMode on files [Conformance] [Volume]
3 0 8 [k8s.io] Downward API volume should set mode on item file [Conformance] [Volume]
3 0 31 [k8s.io] Downward API volume should update annotations on modification [Conformance] [Volume]
3 0 31 [k8s.io] Downward API volume should update labels on modification [Conformance] [Volume]
3 0 67 [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should create and delete default persistent volumes [Slow] [Volume]
3 0 307 [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner should not provision a volume in an unmanaged GCE zone. [Slow] [Volume]
3 0 172 [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner should provision storage with different parameters [Slow] [Volume]
3 0 34 [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner should test that deleting a claim before the volume is provisioned deletes the volume. [Volume]
3 0 8 [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] [Volume]
3 0 10 [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] [Volume]
3 0 8 [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] [Volume]
3 0 9 [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] [Volume]
3 0 8 [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] [Volume]
3 0 8 [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] [Volume]
3 0 8 [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] [Volume]
3 0 8 [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] [Volume]
3 0 536 [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Volume]
3 0 203 [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] [Volume]
3 0 57 [k8s.io] EmptyDir wrapper volumes should not conflict [Volume]
3 0 208 [k8s.io] ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
3 0 276 [k8s.io] ESIPP [Slow] should only target nodes with endpoints
3 0 140 [k8s.io] ESIPP [Slow] should work for type=LoadBalancer
3 0 12 [k8s.io] ESIPP [Slow] should work for type=NodePort
3 0 116 [k8s.io] ESIPP [Slow] should work from pods
3 0 18 [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
3 0 339 [k8s.io] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
3 0 12 [k8s.io] Firewall rule should have correct firewall rules for e2e cluster
3 0 17 [k8s.io] Garbage collector should delete pods created by rc when not orphaning
3 0 8 [k8s.io] Garbage collector should delete RS created by deployment when not orphaning
3 0 52 [k8s.io] Garbage collector should orphan pods created by rc if delete options say so
3 0 41 [k8s.io] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
3 0 13 [k8s.io] Garbage collector should orphan RS created by deployment when deleteOptions.OrphanDependents is true
3 0 20 [k8s.io] Generated release_1_5 clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
3 0 6 [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs
3 0 8 [k8s.io] HostPath should support existing directory subPath [Volume]
3 0 9 [k8s.io] HostPath should support existing single file subPath [Volume]
3 0 9 [k8s.io] HostPath should support r/w [Volume]
3 0 9 [k8s.io] HostPath should support subPath [Volume]
3 0 32 [k8s.io] InitContainer should invoke init containers on a RestartAlways pod
3 0 13 [k8s.io] InitContainer should invoke init containers on a RestartNever pod
3 0 12 [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod
3 0 40 [k8s.io] Initializers should be invisible to controllers by default
3 0 22 [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted
3 0 12 [k8s.io] Job should run a job to completion when tasks succeed
3 0 7 [k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob
3 0 7 [k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob should create a ScheduledJob
3 0 98 [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
3 0 6 [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
3 0 27 [k8s.io] Kubectl client [k8s.io] Kubectl apply apply set/view last-applied
3 0 24 [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC
3 0 7 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes
3 0 7 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes
3 0 24 [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
3 0 25 [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
3 0 26 [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
3 0 17 [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
3 0 26 [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
3 0 10 [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
3 0 12 [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
3 0 14 [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
3 0 91 [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
3 0 7 [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
3 0 17 [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
3 0 10 [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should remove all the taints with the same key off a node
3 0 6 [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
3 0 6 [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
3 0 6 [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
3 0 24 [k8s.io] Kubectl client [k8s.io] Simple pod should handle in-cluster config
3 0 47 [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes
3 0 17 [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward
3 0 25 [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
3 0 48 [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
3 0 33 [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
3 0 43 [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
3 0 1207 [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node
3 0 56 [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance]
3 0 13 [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive
3 0 23 [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
3 0 134 [k8s.io] Loadbalancing: L7 [k8s.io] [Slow] Nginx should conform to Ingress spec
3 0 6 [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager.
3 0 6 [k8s.io] MetricsGrabber should grab all metrics from a Kubelet.
3 0 6 [k8s.io] MetricsGrabber should grab all metrics from a Scheduler.
3 0 6 [k8s.io] MetricsGrabber should grab all metrics from API server.
3 0 6 [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster.
3 0 82 [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
3 0 39 [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted.
3 0 19 [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted.
3 0 12 [k8s.io] Network should set TCP CLOSE_WAIT timeout
3 0 48 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance]
3 0 48 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance]
3 0 48 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance]
3 0 49 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance]
3 0 71 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http
3 0 70 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp
3 0 67 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http
3 0 76 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp
3 0 78 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http
3 0 150 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http
3 0 147 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp
3 0 158 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow]
3 0 247 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow]
3 0 54 [k8s.io] Networking should check kube-proxy urls
3 0 10 [k8s.io] Networking should provide Internet connection for containers [Conformance]
3 0 6 [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services
3 0 68 [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes
3 0 158 [k8s.io] NoExecuteTaintManager [Serial] removing taint cancels eviction
3 0 41 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted. [Volume]
3 0 29 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access
3 0 36 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access
3 0 396 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access [Slow]
3 0 42 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access
3 0 28 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access
3 0 31 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access
3 0 34 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access
3 0 83 [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
3 0 108 [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk
3 0 110 [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
3 0 6 [k8s.io] Pod Disks should be able to delete a non-existent PD without error
3 0 159 [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] [Volume]
3 0 126 [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] [Volume]
3 0 56 [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] [Volume]
3 0 193 [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] [Volume]
3 0 122 [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] [Volume]
3 0 90 [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] [Volume]
3 0 26 [k8s.io] PodPreset should create a pod preset
3 0 26 [k8s.io] PodPreset should not modify the pod on conflict
3 0 24 [k8s.io] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
3 0 13 [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance]
3 0 27 [k8s.io] Pods should be updated [Conformance]
3 0 1651 [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow]
3 0 28 [k8s.io] Pods should contain environment variables for services [Conformance]
3 0 26 [k8s.io] Pods should get a host IP [Conformance]
3 0 364 [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow]
3 0 47 [k8s.io] Pods should support remote command execution over websockets
3 0 51 [k8s.io] Pods should support retrieving logs from the container over websockets
3 0 33 [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects
3 0 34 [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects
3 0 34 [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects
3 0 32 [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 should support forwarding over websockets
3 0 34 [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects
3 0 33 [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects
3 0 35 [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects
3 0 32 [k8s.io] Port forwarding [k8s.io] With a server listening on localhost should support forwarding over websockets
3 0 49 [k8s.io] PreStop should call prestop when killing a pod [Conformance]
3 0 47 [k8s.io] PrivilegedPod should enable privileged commands
3 0 129 [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [Conformance]
3 0 129 [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
3 0 28 [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance]
3 0 58 [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
3 0 156 [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow]
3 0 46 [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance]
3 0 84 [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance]
3 0 17 [k8s.io] Projected should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume]
3 0 8 [k8s.io] Projected should be consumable from pods in volume as non-root [Conformance] [Volume]
3 0 8 [k8s.io] Projected should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume]
3 0 8 [k8s.io] Projected should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume]
3 0 9 [k8s.io] Projected should be consumable from pods in volume with mappings and Item mode set[Conformance] [Volume]
3 0 8 [k8s.io] Projected should be consumable from pods in volume with mappings as non-root [Conformance] [Volume]
3 0 8 [k8s.io] Projected should be consumable in multiple volumes in a pod [Conformance] [Volume]
3 0 8 [k8s.io] Projected should be consumable in multiple volumes in the same pod [Conformance] [Volume]
3 0 8 [k8s.io] Projected should project all components that make up the projection API [Conformance] [Volume] [Projection]
3 0 8 [k8s.io] Projected should provide container's cpu request [Conformance] [Volume]
3 0 9 [k8s.io] Projected should provide container's memory request [Conformance] [Volume]
3 0 9 [k8s.io] Projected should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume]
3 0 8 [k8s.io] Projected should provide podname only [Conformance] [Volume]
3 0 8 [k8s.io] Projected should set DefaultMode on files [Conformance] [Volume]
3 0 8 [k8s.io] Projected should set mode on item file [Conformance] [Volume]
3 0 31 [k8s.io] Projected should update labels on modification [Conformance] [Volume]
3 0 28 [k8s.io] Projected updates should be reflected in volume [Conformance] [Volume]
3 0 6 [k8s.io] Proxy version v1 should proxy logs on node [Conformance]
3 0 6 [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
3 0 6 [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance]
3 0 6 [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
3 0 27 [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance]
3 0 6 [k8s.io] Proxy version v1 should proxy to cadvisor
3 0 27 [k8s.io] ReplicaSet should adopt matching pods on creation
3 0 12 [k8s.io] ReplicaSet should release no longer matching pods
3 0 16 [k8s.io] ReplicaSet should serve a basic image on each replica with a private image
3 0 17 [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
3 0 9 [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota
3 0 12 [k8s.io] ReplicationController should release no longer matching pods
3 0 16 [k8s.io] ReplicationController should serve a basic image on each replica with a private image
3 0 16 [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance]
3 0 11 [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota
3 0 91 [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available
3 0 12 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap.
3 0 12 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [Volume]
3 0 14 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod.
3 0 12 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller.
3 0 19 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret.
3 0 8 [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated.
3 0 22 [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope.
3 0 22 [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes.
3 0 91 [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
3 0 67 [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected
3 0 67 [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid
3 0 88 [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work
3 0 85 [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching
3 0 88 [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching
3 0 88 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching
3 0 88 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities
3 0 104 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2
3 0 83 [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
3 0 89 [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
3 0 86 [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
3 0 89 [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
3 0 89 [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
3 0 114 [k8s.io] SchedulerPriorities [Serial] Pod should avoid to schedule to node that have avoidPod annotation
3 0 104 [k8s.io] SchedulerPriorities [Serial] Pod should be prefer scheduled to node that satisify the NodeAffinity
3 0 104 [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms
3 0 104 [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that satisify the PodAffinity
3 0 106 [k8s.io] SchedulerPriorities [Serial] Pod should perfer to scheduled to nodes pod can tolerate
3 0 53 [k8s.io] Secrets optional updates should be reflected in volume [Conformance] [Volume]
3 0 14 [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume]
3 0 9 [k8s.io] Secrets should be consumable from pods in env vars [Conformance]
3 0 9 [k8s.io] Secrets should be consumable from pods in volume [Conformance] [Volume]
3 0 8 [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume]
3 0 9 [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] [Volume]
3 0 8 [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] [Volume]
3 0 8 [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume]
3 0 9 [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] [Volume]
3 0 8 [k8s.io] Secrets should be consumable via the environment [Conformance]
3 0 6 [k8s.io] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata
3 0 6 [k8s.io] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
3 0 24 [k8s.io] Servers with support for Table transformation should return pod details
3 0 39 [k8s.io] Service endpoints latency should not be very high [Conformance]
3 0 25 [k8s.io] ServiceAccounts should allow opting out of API token automount [Conformance]
3 0 14 [k8s.io] ServiceAccounts should ensure a single API token exists
3 0 19 [k8s.io] ServiceAccounts should mount an API token into pods [Conformance]
3 0 445 [k8s.io] Services should be able to change the type and ports of a service [Slow]
3 0 18 [k8s.io] Services should be able to create a functioning NodePort service
3 0 60 [k8s.io] Services should be able to up and down services
3 0 150 [k8s.io] Services should create endpoints for unready pods
3 0 85 [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow]
3 0 77 [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP
3 0 6 [k8s.io] Services should prevent NodePort collisions
3 0 6 [k8s.io] Services should provide secure master service [Conformance]
3 0 9 [k8s.io] Services should release NodePorts on delete
3 0 31 [k8s.io] Services should serve a basic endpoint from pods [Conformance]
3 0 19 [k8s.io] Services should serve multiport endpoints from pods [Conformance]
3 0 6 [k8s.io] Services should use same NodePort with same port but different protocols
3 0 12 [k8s.io] SSH should SSH to all nodes and run commands
3 0 24 [k8s.io] Staging client repo client should create pods, delete pods, watch pods
3 0 115 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods
3 0 71 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
3 0 100 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
3 0 191 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
3 0 121 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications
3 0 233 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications
3 0 244 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
3 0 24 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset
3 0 6 [k8s.io] Sysctls should reject invalid sysctls
3 0 8 [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance]
3 0 8 [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance]
3 0 42 [k8s.io] Volumes [Volume] [k8s.io] ConfigMap should be mountable
3 0 97 [k8s.io] Volumes [Volume] [k8s.io] NFS should be mountable
3 0 23 CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
3 0 42 Extract
3 0 0 get kubeconfig
3 0 0 IsUp
3 0 0 kubectl version
2 0 8 [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] [Volume]
2 0 12 [k8s.io] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
2 0 31 [k8s.io] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
2 0 8 [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance]
2 0 8 [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance]
2 0 8 [k8s.io] Downward API should provide pod and host IP as an env var [Conformance]
2 0 10 [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance]
2 0 8 [k8s.io] Downward API volume should provide container's memory request [Conformance] [Volume]
2 0 39 [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow] [Volume]
2 0 8 [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] [Volume]
2 0 8 [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] [Volume]
2 0 9 [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] [Volume]
2 0 8 [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] [Volume]
2 0 8 [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] [Volume]
2 0 8 [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] [Volume]
2 0 9 [k8s.io] HostPath should give a volume the correct mode [Conformance] [Volume]
2 0 121 [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod
2 0 52 [k8s.io] Job should adopt matching orphans and release non-matching pods
2 0 11 [k8s.io] Job should delete a job
2 0 14 [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted
2 0 7 [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC
2 0 6 [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
2 0 7 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes
2 0 31 [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
2 0 10 [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
2 0 9 [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should update the taint on a node
2 0 18 [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy
2 0 40 [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach
2 0 71 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp
2 0 150 [k8s.io] NoExecuteTaintManager [Serial] doesn't evict pod with tolerations from tainted nodes
2 0 144 [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes
2 0 13 [k8s.io] Pods should be submitted and removed [Conformance]
2 0 8 [k8s.io] Projected should provide container's cpu limit [Conformance] [Volume]
2 0 8 [k8s.io] Projected should provide container's memory limit [Conformance] [Volume]
2 0 8 [k8s.io] Projected should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume]
2 0 31 [k8s.io] Projected should update annotations on modification [Conformance] [Volume]
2 0 6 [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource
2 0 28 [k8s.io] ReplicationController should adopt matching pods on creation
2 0 12 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [Volume]
2 0 12 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service.
2 0 259 [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow]
2 0 227 [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
2 0 6 [k8s.io] Services should check NodePort out-of-range
2 0 144 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy
2 0 8 [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance]
1 0 165 DumpClusterLogs