Kubernetes 24-Hour Test Report

ci-kubernetes-e2e-gke-gci-1-6-cvm-master-upgrade-cluster-new

Passed Failed Avg Time (s) Test
0 2 61 [k8s.io] Projected optional updates should be reflected in volume [Conformance] [Volume]
0 2 61 [k8s.io] Projected should be consumable from pods in volume with defaultMode set [Conformance] [Volume]
1 1 49 [k8s.io] Projected should be consumable from pods in volume [Conformance] [Volume]
1 1 20 [k8s.io] Projected should be consumable from pods in volume with mappings [Conformance] [Volume]
0 1 696 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
0 1 30 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
0 1 30 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
0 1 92 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
0 1 92 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod
0 1 92 [k8s.io] AppArmor load AppArmor profiles should enforce an AppArmor profile
0 1 30 [k8s.io] Certificates API should support building a client with a CSR
0 1 30 [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL
0 1 30 [k8s.io] ConfigMap optional updates should be reflected in volume [Conformance] [Volume]
0 1 30 [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] [Volume]
0 1 92 [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] [Volume]
0 1 92 [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] [Volume]
0 1 92 [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] [Volume]
0 1 30 [k8s.io] ConfigMap should be consumable via the environment [Conformance]
0 1 30 [k8s.io] ConfigMap updates should be reflected in volume [Conformance] [Volume]
0 1 92 [k8s.io] CronJob should adopt Jobs it owns that don't have ControllerRef yet
0 1 92 [k8s.io] CronJob should delete successful finished jobs with limit of one successful job
0 1 92 [k8s.io] CronJob should not emit unexpected warnings
0 1 92 [k8s.io] CronJob should not schedule jobs when suspended [Slow]
0 1 30 [k8s.io] CronJob should not schedule new jobs when ForbidConcurrent [Slow]
0 1 92 [k8s.io] CronJob should remove from active list jobs that have been deleted
0 1 30 [k8s.io] CronJob should replace jobs when ReplaceConcurrent
0 1 30 [k8s.io] CronJob should schedule multiple jobs concurrently
0 1 92 [k8s.io] Daemon set [Serial] Should adopt or recreate existing pods when creating a RollingUpdate DaemonSet with matching or mismatching templateGeneration
0 1 92 [k8s.io] Daemon set [Serial] Should not update pod when spec was updated and update strategy is OnDelete
0 1 92 [k8s.io] Daemon set [Serial] should retry creating failed daemon pods
0 1 92 [k8s.io] Daemon set [Serial] should run and stop simple daemon
0 1 30 [k8s.io] Daemon set [Serial] Should update pod when spec was updated and update strategy is RollingUpdate
0 1 92 [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
0 1 30 [k8s.io] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
0 1 30 [k8s.io] Deployment deployment should delete old replica sets
0 1 30 [k8s.io] Deployment deployment should support rollback
0 1 30 [k8s.io] Deployment deployment should support rollback when there's replica set with no revision
0 1 30 [k8s.io] Deployment deployment should support rollover
0 1 92 [k8s.io] Deployment lack of progress should be reported in the deployment status
0 1 30 [k8s.io] Deployment overlapping deployment should not fight with each other
0 1 92 [k8s.io] Deployment paused deployment should be able to scale
0 1 30 [k8s.io] Deployment paused deployment should be ignored by the controller
0 1 30 [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones
0 1 30 [k8s.io] Deployment scaled rollout deployment should not block on annotation check
0 1 92 [k8s.io] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
0 1 30 [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
0 1 92 [k8s.io] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
0 1 30 [k8s.io] DisruptionController evictions: no PDB => should allow an eviction
0 1 92 [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction
0 1 30 [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction
0 1 92 [k8s.io] DisruptionController should create a PodDisruptionBudget
0 1 92 [k8s.io] DisruptionController should update PodDisruptionBudget status
0 1 30 [k8s.io] DNS configMap nameserver should be able to change stubDomain configuration [Slow][Serial]
0 1 92 [k8s.io] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
0 1 30 [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
0 1 92 [k8s.io] DNS should provide DNS for ExternalName services
0 1 30 [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation
0 1 30 [k8s.io] DNS should provide DNS for services [Conformance]
0 1 30 [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance]
0 1 92 [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance]
0 1 30 [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance]
0 1 30 [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance]
0 1 30 [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance]
0 1 92 [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance]
0 1 92 [k8s.io] Downward API should provide pod and host IP as an env var [Conformance]
0 1 92 [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance]
0 1 30 [k8s.io] Downward API volume should provide container's memory limit [Conformance] [Volume]
0 1 92 [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume]
0 1 30 [k8s.io] Downward API volume should set DefaultMode on files [Conformance] [Volume]
0 1 92 [k8s.io] Downward API volume should update labels on modification [Conformance] [Volume]
0 1 30 [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by removing the default annotation [Slow] [Serial] [Disruptive] [Volume]
0 1 92 [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should create and delete default persistent volumes [Slow] [Volume]
0 1 92 [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner should not provision a volume in an unmanaged GCE zone. [Slow] [Volume]
0 1 92 [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner should provision storage with different parameters [Slow] [Volume]
0 1 92 [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner should test that deleting a claim before the volume is provisioned deletes the volume. [Volume]
0 1 92 [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] [Volume]
0 1 30 [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] [Volume]
0 1 92 [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] [Volume]
0 1 30 [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] [Volume]
0 1 30 [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] [Volume]
0 1 92 [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] [Volume]
0 1 92 [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance] [Volume]
0 1 30 [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] [Volume]
0 1 92 [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Volume]
0 1 30 [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] [Volume]
0 1 30 [k8s.io] ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
0 1 30 [k8s.io] ESIPP [Slow] should only target nodes with endpoints
0 1 92 [k8s.io] ESIPP [Slow] should work for type=LoadBalancer
0 1 92 [k8s.io] ESIPP [Slow] should work for type=NodePort
0 1 92 [k8s.io] ESIPP [Slow] should work from pods
0 1 92 [k8s.io] Etcd failure [Disruptive] should recover from network partition with master
0 1 30 [k8s.io] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
0 1 30 [k8s.io] Firewall rule should have correct firewall rules for e2e cluster
0 1 30 [k8s.io] Garbage collector should delete RS created by deployment when not orphaning
0 1 30 [k8s.io] Garbage collector should orphan pods created by rc if delete options say so
0 1 92 [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume]
0 1 30 [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 [Volume]
0 1 92 [k8s.io] Generated release_1_5 clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
0 1 30 [k8s.io] HostPath should give a volume the correct mode [Conformance] [Volume]
0 1 92 [k8s.io] HostPath should support existing directory subPath [Volume]
0 1 30 [k8s.io] HostPath should support r/w [Volume]
0 1 30 [k8s.io] InitContainer should invoke init containers on a RestartNever pod
0 1 30 [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod
0 1 30 [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod
0 1 30 [k8s.io] Initializers should be invisible to controllers by default
0 1 30 [k8s.io] Initializers should dynamically register and apply initializers to pods [Serial]
0 1 92 [k8s.io] Job should adopt matching orphans and release non-matching pods
0 1 92 [k8s.io] Job should run a job to completion when tasks succeed
0 1 30 [k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob
0 1 30 [k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob should create a ScheduledJob
0 1 30 [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
0 1 92 [k8s.io] Kubectl client [k8s.io] Kubectl apply apply set/view last-applied
0 1 92 [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC
0 1 92 [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
0 1 92 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes
0 1 30 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes
0 1 30 [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
0 1 92 [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
0 1 30 [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
0 1 30 [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
0 1 92 [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
0 1 30 [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
0 1 92 [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
0 1 92 [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should remove all the taints with the same key off a node
0 1 30 [k8s.io] Kubectl client [k8s.io] Kubectl taint [Serial] should update the taint on a node
0 1 92 [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
0 1 92 [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
0 1 92 [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
0 1 92 [k8s.io] Kubectl client [k8s.io] Simple pod should support exec
0 1 92 [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy
0 1 30 [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach
0 1 30 [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward
0 1 92 [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
0 1 30 [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
0 1 30 [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node
0 1 92 [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance]
0 1 92 [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
0 1 30 [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec
0 1 92 [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager.
0 1 92 [k8s.io] MetricsGrabber should grab all metrics from a Kubelet.
0 1 92 [k8s.io] MetricsGrabber should grab all metrics from a Scheduler.
0 1 30 [k8s.io] MetricsGrabber should grab all metrics from API server.
0 1 30 [k8s.io] Multi-AZ Clusters should spread the pods of a replication controller across zones
0 1 30 [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
0 1 92 [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted.
0 1 30 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned
0 1 30 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero
0 1 92 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster
0 1 92 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive]
0 1 30 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive]
0 1 30 [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout
0 1 30 [k8s.io] Network should set TCP CLOSE_WAIT timeout
0 1 92 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance]
0 1 30 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance]
0 1 30 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance]
0 1 92 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance]
0 1 92 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http
0 1 30 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http
0 1 30 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp
0 1 92 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http
0 1 92 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp
0 1 30 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow]
0 1 92 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow]
0 1 30 [k8s.io] Networking should provide Internet connection for containers [Conformance]
0 1 92 [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services
0 1 92 [k8s.io] NoExecuteTaintManager [Serial] doesn't evict pod with tolerations from tainted nodes
0 1 92 [k8s.io] NoExecuteTaintManager [Serial] evicts pods from tainted nodes
0 1 92 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access
0 1 30 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access
0 1 92 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access
0 1 30 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access
0 1 92 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access
0 1 30 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access
0 1 92 [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach
0 1 30 [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk
0 1 30 [k8s.io] PersistentVolumes:vsphere should test that a file written to the vspehre volume mount before kubelet restart can be read after restart [Disruptive]
0 1 92 [k8s.io] PersistentVolumes:vsphere should test that a vspehre volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns [Disruptive]
0 1 92 [k8s.io] PersistentVolumes:vsphere should test that deleting a PVC before the pod does not cause pod deletion to fail on vsphere volume detach
0 1 92 [k8s.io] PersistentVolumes:vsphere should test that deleting the PV before the pod does not cause pod deletion to fail on vspehre volume detach
0 1 30 [k8s.io] Pod Disks should be able to delete a non-existent PD without error
0 1 92 [k8s.io] Pod Disks should be able to detach from a node which was deleted [Slow] [Disruptive] [Volume]
0 1 30 [k8s.io] Pod Disks should be able to detach from a node whose api object was deleted [Slow] [Disruptive] [Volume]
0 1 92 [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] [Volume]
0 1 92 [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow] [Volume]
0 1 30 [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] [Volume]
0 1 30 [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] [Volume]
0 1 30 [k8s.io] PodPreset should not modify the pod on conflict
0 1 92 [k8s.io] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
0 1 30 [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance]
0 1 92 [k8s.io] Pods should be submitted and removed [Conformance]
0 1 30 [k8s.io] Pods should contain environment variables for services [Conformance]
0 1 30 [k8s.io] Pods should support remote command execution over websockets
0 1 92 [k8s.io] Pods should support retrieving logs from the container over websockets
0 1 30 [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects
0 1 92 [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects
0 1 92 [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 should support forwarding over websockets
0 1 92 [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects NO client request should support a client that connects, sends DATA, and disconnects
0 1 30 [k8s.io] PreStop should call prestop when killing a pod [Conformance]
0 1 30 [k8s.io] PrivilegedPod should enable privileged commands
0 1 30 [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [Conformance]
0 1 30 [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
0 1 92 [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance]
0 1 30 [k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout [Conformance]
0 1 30 [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
0 1 92 [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance]
0 1 92 [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance]
0 1 30 [k8s.io] Projected should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume]
0 1 92 [k8s.io] Projected should be consumable from pods in volume as non-root [Conformance] [Volume]
0 1 92 [k8s.io] Projected should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume]
0 1 30 [k8s.io] Projected should be consumable from pods in volume with mappings and Item mode set [Conformance] [Volume]
0 1 92 [k8s.io] Projected should be consumable from pods in volume with mappings as non-root [Conformance] [Volume]
0 1 30 [k8s.io] Projected should be consumable in multiple volumes in a pod [Conformance] [Volume]
0 1 92 [k8s.io] Projected should be consumable in multiple volumes in the same pod [Conformance] [Volume]
0 1 30 [k8s.io] Projected should project all components that make up the projection API [Conformance] [Volume] [Projection]
0 1 30 [k8s.io] Projected should provide container's cpu limit [Conformance] [Volume]
0 1 30 [k8s.io] Projected should provide container's cpu request [Conformance] [Volume]
0 1 30 [k8s.io] Projected should provide container's memory limit [Conformance] [Volume]
0 1 92 [k8s.io] Projected should provide container's memory request [Conformance] [Volume]
0 1 92 [k8s.io] Projected should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume]
0 1 30 [k8s.io] Projected should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume]
0 1 30 [k8s.io] Projected should provide podname only [Conformance] [Volume]
0 1 92 [k8s.io] Projected should set DefaultMode on files [Conformance] [Volume]
0 1 92 [k8s.io] Projected should set mode on item file [Conformance] [Volume]
0 1 30 [k8s.io] Projected should update annotations on modification [Conformance] [Volume]
0 1 92 [k8s.io] Projected should update labels on modification [Conformance] [Volume]
0 1 92 [k8s.io] Projected updates should be reflected in volume [Conformance] [Volume]
0 1 92 [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance]
0 1 92 [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
0 1 30 [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance]
0 1 30 [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource
0 1 92 [k8s.io] ReplicaSet should release no longer matching pods
0 1 92 [k8s.io] ReplicaSet should serve a basic image on each replica with a private image
0 1 92 [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota
0 1 92 [k8s.io] ReplicationController should adopt matching pods on creation
0 1 92 [k8s.io] ReplicationController should release no longer matching pods
0 1 30 [k8s.io] ReplicationController should serve a basic image on each replica with a private image
0 1 92 [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota
0 1 30 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [Volume]
0 1 60 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service.
0 1 92 [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated.
0 1 92 [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope.
0 1 30 [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
0 1 30 [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow]
0 1 30 [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
0 1 30 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching
0 1 30 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2
0 1 92 [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
0 1 92 [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
0 1 30 [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
0 1 30 [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
0 1 30 [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
0 1 30 [k8s.io] SchedulerPriorities [Serial] Pod should be prefer scheduled to node that satisify the NodeAffinity
0 1 30 [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that don't match the PodAntiAffinity terms
0 1 30 [k8s.io] SchedulerPriorities [Serial] Pod should be schedule to node that satisify the PodAffinity
0 1 92 [k8s.io] SchedulerPriorities [Serial] Pod should perfer to scheduled to nodes pod can tolerate
0 1 90 [k8s.io] SchedulerPriorities [Serial] Pods created by ReplicationController should spread to different node
0 1 92 [k8s.io] SchedulerPriorities [Serial] Pods should be scheduled to low resource use rate node
0 1 30 [k8s.io] Secrets optional updates should be reflected in volume [Conformance] [Volume]
0 1 92 [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [Volume]
0 1 30 [k8s.io] Secrets should be consumable from pods in env vars [Conformance]
0 1 92 [k8s.io] Secrets should be consumable from pods in volume [Conformance] [Volume]
0 1 30 [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] [Volume]
0 1 30 [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume]
0 1 30 [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] [Volume]
0 1 92 [k8s.io] Secrets should be consumable via the environment [Conformance]
0 1 92 [k8s.io] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
0 1 92 [k8s.io] Servers with support for Table transformation should return pod details
0 1 92 [k8s.io] Service endpoints latency should not be very high [Conformance]
0 1 92 [k8s.io] ServiceAccounts should allow opting out of API token automount [Conformance]
0 1 92 [k8s.io] ServiceAccounts should ensure a single API token exists
0 1 92 [k8s.io] ServiceAccounts should mount an API token into pods [Conformance]
0 1 92 [k8s.io] Services should be able to change the type and ports of a service [Slow]
0 1 30 [k8s.io] Services should be able to create a functioning NodePort service
0 1 30 [k8s.io] Services should be able to up and down services
0 1 78 [k8s.io] Services should check NodePort out-of-range
0 1 92 [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow]
0 1 92 [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP
0 1 30 [k8s.io] Services should prevent NodePort collisions
0 1 92 [k8s.io] Services should release NodePorts on delete
0 1 30 [k8s.io] Services should serve a basic endpoint from pods [Conformance]
0 1 92 [k8s.io] Services should use same NodePort with same port but different protocols
0 1 92 [k8s.io] Services should work after restarting apiserver [Disruptive]
0 1 92 [k8s.io] Services should work after restarting kube-proxy [Disruptive]
0 1 60 [k8s.io] SSH should SSH to all nodes and run commands
0 1 30 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods
0 1 92 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy
0 1 74 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should allow template updates
0 1 30 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
0 1 30 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
0 1 30 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset
0 1 92 [k8s.io] Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node
0 1 92 [k8s.io] Sysctls should support unsafe sysctls which are actually whitelisted
0 1 30 [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance]
0 1 30 [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance]
0 1 30 [k8s.io] Volume Disk Format [Volumes] verify disk format type - thin is honored for dynamically provisioned pv using storageclass
0 1 92 [k8s.io] Volume Disk Format [Volumes] verify disk format type - zeroedthick is honored for dynamically provisioned pv using storageclass
0 1 30 [k8s.io] Volume Placement [Volume] should create and delete pod with multiple volumes from different datastore
0 1 30 [k8s.io] Volume Placement [Volume] should create and delete pod with multiple volumes from same datastore
0 1 92 [k8s.io] Volume Placement [Volume] should create and delete pod with the same volume source attach/detach to different worker nodes
0 1 92 [k8s.io] Volume Placement [Volume] should create and delete pod with the same volume source on the same worker node
0 1 92 [k8s.io] Volume Placement [Volume] test back to back pod creation and deletion with different volume sources on the same worker node
0 1 92 [k8s.io] Volumes [Volume] [k8s.io] ConfigMap should be mountable
0 1 92 [k8s.io] Volumes [Volume] [k8s.io] NFS should be mountable
0 1 30 [k8s.io] vSphere Storage policy support for dynamic provisioning [Volume] verify an existing and compatible SPBM policy is honored for dynamically provisioned pvc using storageclass
0 1 92 [k8s.io] vSphere Storage policy support for dynamic provisioning [Volume] verify if a SPBM policy and VSAN capabilities cannot be honored for dynamically provisioned pvc using storageclass
0 1 92 [k8s.io] vSphere Storage policy support for dynamic provisioning [Volume] verify if a non-existing SPBM policy is not honored for dynamically provisioned pvc using storageclass
0 1 30 [k8s.io] vSphere Storage policy support for dynamic provisioning [Volume] verify if a SPBM policy is not honored on a non-compatible datastore for dynamically provisioned pvc using storageclass
0 1 92 [k8s.io] vSphere Storage policy support for dynamic provisioning [Volume] verify VSAN storage capability with invalid capability name objectSpaceReserve is not honored for dynamically provisioned pvc using storageclass
0 1 92 [k8s.io] vSphere Storage policy support for dynamic provisioning [Volume] verify VSAN storage capability with invalid diskStripes value is not honored for dynamically provisioned pvc using storageclass
0 1 92 [k8s.io] vSphere Storage policy support for dynamic provisioning [Volume] verify VSAN storage capability with non-vsan datastore is not honored for dynamically provisioned pvc using storageclass
0 1 30 [k8s.io] vSphere Storage policy support for dynamic provisioning [Volume] verify VSAN storage capability with valid diskStripes and objectSpaceReservation values and a VSAN datastore is honored for dynamically provisioned pvc using storageclass
0 1 92 [k8s.io] vSphere Storage policy support for dynamic provisioning [Volume] verify VSAN storage capability with valid objectSpaceReservation and iopsLimit values is honored for dynamically provisioned pvc using storageclass
0 1 92 [k8s.io] vsphere Volume fstype [Volume] verify disk format type - default value should be ext4
0 1 92 [k8s.io] vsphere Volume fstype [Volume] verify fstype - ext3 formatted volume
0 1 30 [k8s.io] vsphere volume operations storm [Volume] should create pod with many volumes and verify no attach call fails
0 1 31008 SkewTest
1 0 516 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
1 0 1515 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
1 0 159 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods
1 0 7 [k8s.io] Cadvisor should be healthy on every node.
1 0 11 [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] [Volume]
1 0 9 [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] [Volume]
1 0 9 [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set [Conformance] [Volume]
1 0 9 [k8s.io] ConfigMap should be consumable via environment variable [Conformance]
1 0 33 [k8s.io] Daemon set [Serial] should run and stop complex daemon
1 0 27 [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity
1 0 12 [k8s.io] Deployment deployment can avoid hash collisions
1 0 14 [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods
1 0 15 [k8s.io] Deployment deployment should label adopted RSs and pods
1 0 29 [k8s.io] Deployment iterative rollouts should eventually progress
1 0 11 [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones
1 0 33 [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction
1 0 85 [k8s.io] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction
1 0 32 [k8s.io] DNS configMap federations should be able to change federation configuration [Slow][Serial]
1 0 29 [k8s.io] DNS should provide DNS for the cluster [Conformance]
1 0 9 [k8s.io] Downward API volume should provide container's cpu limit [Conformance] [Volume]
1 0 9 [k8s.io] Downward API volume should provide container's cpu request [Conformance] [Volume]
1 0 11 [k8s.io] Downward API volume should provide container's memory request [Conformance] [Volume]
1 0 9 [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] [Volume]
1 0 9 [k8s.io] Downward API volume should provide podname only [Conformance] [Volume]
1 0 9 [k8s.io] Downward API volume should set mode on item file [Conformance] [Volume]
1 0 30 [k8s.io] Downward API volume should update annotations on modification [Conformance] [Volume]
1 0 308 [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner Default should be disabled by changing the default annotation[Slow] [Serial] [Disruptive] [Volume]
1 0 66 [k8s.io] Dynamic Provisioning [k8s.io] DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow] [Volume]
1 0 9 [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] [Volume]
1 0 9 [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] [Volume]
1 0 9 [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] [Volume]
1 0 9 [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] [Volume]
1 0 9 [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] [Volume]
1 0 9 [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] [Volume]
1 0 13 [k8s.io] EmptyDir wrapper volumes should not conflict [Volume]
1 0 29 [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
1 0 17 [k8s.io] Garbage collector should delete pods created by rc when not orphaning
1 0 42 [k8s.io] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
1 0 12 [k8s.io] Garbage collector should orphan RS created by deployment when deleteOptions.OrphanDependents is true
1 0 7 [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs
1 0 11 [k8s.io] HostPath should support existing single file subPath [Volume]
1 0 9 [k8s.io] HostPath should support subPath [Volume]
1 0 31 [k8s.io] InitContainer should invoke init containers on a RestartAlways pod
1 0 11 [k8s.io] Job should delete a job
1 0 15 [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted
1 0 17 [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted
1 0 7 [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
1 0 23 [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC
1 0 7 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes
1 0 10 [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
1 0 12 [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
1 0 26 [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
1 0 25 [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
1 0 11 [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
1 0 13 [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
1 0 21 [k8s.io] Kubectl client [k8s.io] Simple pod should handle in-cluster config
1 0 56 [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes
1 0 43 [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
1 0 42 [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
1 0 1208 [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node
1 0 14 [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive
1 0 58 [k8s.io] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s, scaling up to 1 pods per node
1 0 20 [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted.
1 0 64 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp
1 0 64 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp
1 0 72 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http
1 0 48 [k8s.io] Networking should check kube-proxy urls
1 0 470 [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes
1 0 537 [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes
1 0 145 [k8s.io] NoExecuteTaintManager [Serial] eventually evict pod with finite tolerations from tainted nodes
1 0 158 [k8s.io] NoExecuteTaintManager [Serial] removing taint cancels eviction
1 0 36 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted. [Volume]
1 0 25 [k8s.io] PersistentVolumes [k8s.io] PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access
1 0 96 [k8s.io] PersistentVolumes:GCEPD [Volume] should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach
1 0 214 [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] [Volume]
1 0 129 [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] [Volume]
1 0 26 [k8s.io] Pods should be updated [Conformance]
1 0 1633 [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow]
1 0 25 [k8s.io] Pods should get a host IP [Conformance]
1 0 426 [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow]
1 0 34 [k8s.io] Port forwarding [k8s.io] With a server listening on 0.0.0.0 [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects
1 0 34 [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends DATA, and disconnects
1 0 33 [k8s.io] Port forwarding [k8s.io] With a server listening on localhost [k8s.io] that expects a client request should support a client that connects, sends NO DATA, and disconnects
1 0 31 [k8s.io] Port forwarding [k8s.io] With a server listening on localhost should support forwarding over websockets
1 0 129 [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow]
1 0 9 [k8s.io] Projected should be consumable from pods in volume with mappings and Item Mode set [Conformance] [Volume]
1 0 7 [k8s.io] Proxy version v1 should proxy logs on node [Conformance]
1 0 7 [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
1 0 7 [k8s.io] Proxy version v1 should proxy to cadvisor
1 0 32 [k8s.io] ReplicaSet should adopt matching pods on creation
1 0 14 [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
1 0 14 [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance]
1 0 80 [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available
1 0 13 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap.
1 0 13 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [Volume]
1 0 15 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod.
1 0 13 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller.
1 0 19 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret.
1 0 23 [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes.
1 0 67 [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected
1 0 67 [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid
1 0 89 [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work
1 0 86 [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching
1 0 87 [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching
1 0 87 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities
1 0 87 [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
1 0 75 [k8s.io] SchedulerPriorities [Serial] Pod should avoid to schedule to node that have avoidPod annotation
1 0 9 [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] [Volume]
1 0 9 [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] [Volume]
1 0 7 [k8s.io] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata
1 0 144 [k8s.io] Services should create endpoints for unready pods
1 0 7 [k8s.io] Services should provide secure master service [Conformance]
1 0 12 [k8s.io] Services should serve multiport endpoints from pods [Conformance]
1 0 23 [k8s.io] Staging client repo client should create pods, delete pods, watch pods
1 0 65 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
1 0 7 [k8s.io] Sysctls should reject invalid sysctls
1 0 2187 [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
1 0 10 [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance]
1 0 15 CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
1 0 0 Deferred TearDown
1 0 0 DiffResources
1 0 5 DumpClusterLogs
1 0 88 Extract
1 0 0 get kubeconfig
1 0 0 kubectl version
1 0 0 list nodes
1 0 5 ListResources After
1 0 5 ListResources Before
1 0 6 ListResources Down
1 0 9 ListResources Up
1 0 151 TearDown
1 0 8 TearDown Previous
1 0 265 Up
1 0 2329 UpgradeTest
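Every row above follows the same fixed prefix: passed count, failed count, average duration in seconds, then the full test name. A minimal parser for pulling the failing tests out of such a report might look like the sketch below (the `parse_row` helper is illustrative only, not part of any Kubernetes or TestGrid tooling):

```python
import re

def parse_row(line):
    """Parse one report row of the form '<passed> <failed> <avg_s> <test name>'.

    Assumes three leading whitespace-separated integers, as in the
    report above; the remainder of the line is the test name.
    """
    m = re.match(r"\s*(\d+)\s+(\d+)\s+(\d+)\s+(.*\S)", line)
    if not m:
        raise ValueError(f"unrecognized row: {line!r}")
    passed, failed, avg_s = (int(g) for g in m.groups()[:3])
    return {"passed": passed, "failed": failed, "avg_s": avg_s, "name": m.group(4)}

# Two sample rows copied from the report above.
rows = [
    "0 1 31008 SkewTest",
    "1 0 2329 UpgradeTest",
]
parsed = [parse_row(r) for r in rows]

# Tests with at least one failure in the 24-hour window.
failing = [r["name"] for r in parsed if r["failed"] > 0]
print(failing)  # ['SkewTest']
```

The failing names collected this way can then be fed to the e2e runner's `--ginkgo.focus` flag (after regex-escaping) to re-run just those tests.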