Kubernetes 24-Hour Test Report

Job: ci-kubernetes-soak-gci-gce-1.4-test

Passed Failed Avg Time (s) Test
0 4 1208 [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node
0 4 1505 [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node
0 4 136 [k8s.io] Services should be able to change the type and ports of a service [Slow]
0 4 1214 [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow]
0 4 19830 Test (overall suite run)
3 1 34 [k8s.io] Deployment scaled rollout deployment should not block on annotation check
4 0 477 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
4 0 726 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
4 0 542 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
4 0 628 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
4 0 1103 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
4 0 1328 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
4 0 203 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods
4 0 148 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod
4 0 178 [k8s.io] Addon update should propagate add-on file changes [Slow]
4 0 5 [k8s.io] Cadvisor should be healthy on every node.
4 0 7 [k8s.io] ConfigMap should be consumable from pods in volume [Conformance]
4 0 7 [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance]
4 0 8 [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance]
4 0 7 [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance]
4 0 7 [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance]
4 0 7 [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance]
4 0 7 [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod
4 0 7 [k8s.io] ConfigMap should be consumable via environment variable [Conformance]
4 0 8 [k8s.io] ConfigMap updates should be reflected in volume [Conformance]
4 0 27 [k8s.io] Daemon set [Serial] should run and stop complex daemon
4 0 27 [k8s.io] Daemon set [Serial] should run and stop complex daemon with node affinity
4 0 25 [k8s.io] Daemon set [Serial] should run and stop simple daemon
4 0 12 [k8s.io] Deployment deployment should create new pods
4 0 14 [k8s.io] Deployment deployment should delete old replica sets
4 0 17 [k8s.io] Deployment deployment should label adopted RSs and pods
4 0 31 [k8s.io] Deployment deployment should support rollback
4 0 30 [k8s.io] Deployment deployment should support rollback when there's replica set with no revision
4 0 21 [k8s.io] Deployment deployment should support rollover
4 0 50 [k8s.io] Deployment overlapping deployment should not fight with each other
4 0 12 [k8s.io] Deployment paused deployment should be able to scale
4 0 13 [k8s.io] Deployment paused deployment should be ignored by the controller
4 0 19 [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones
4 0 17 [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones
4 0 19 [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order
4 0 17 [k8s.io] DNS should provide DNS for ExternalName services
4 0 9 [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation
4 0 9 [k8s.io] DNS should provide DNS for services [Conformance]
4 0 9 [k8s.io] DNS should provide DNS for the cluster [Conformance]
4 0 7 [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance]
4 0 7 [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance]
4 0 8 [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance]
4 0 7 [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance]
4 0 7 [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars
4 0 7 [k8s.io] Downward API should provide default limits.cpu/memory from node capacity
4 0 7 [k8s.io] Downward API should provide pod IP as an env var
4 0 7 [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance]
4 0 8 [k8s.io] Downward API volume should provide container's cpu limit
4 0 7 [k8s.io] Downward API volume should provide container's cpu request
4 0 7 [k8s.io] Downward API volume should provide container's memory limit
4 0 7 [k8s.io] Downward API volume should provide container's memory request
4 0 7 [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set
4 0 7 [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set
4 0 7 [k8s.io] Downward API volume should provide podname only [Conformance]
4 0 7 [k8s.io] Downward API volume should set DefaultMode on files [Conformance]
4 0 7 [k8s.io] Downward API volume should set mode on item file [Conformance]
4 0 263 [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Alpha should create and delete alpha persistent volumes [Slow]
4 0 256 [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow]
4 0 7 [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance]
4 0 7 [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance]
4 0 7 [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance]
4 0 7 [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance]
4 0 8 [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance]
4 0 7 [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance]
4 0 7 [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance]
4 0 8 [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance]
4 0 7 [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance]
4 0 7 [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance]
4 0 7 [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance]
4 0 7 [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance]
4 0 7 [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance]
4 0 7 [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance]
4 0 8 [k8s.io] EmptyDir wrapper volumes should becomes running
4 0 25 [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
4 0 9 [k8s.io] Generated release_1_2 clientset should create pods, delete pods, watch pods
4 0 9 [k8s.io] Generated release_1_3 clientset should create pods, delete pods, watch pods
4 0 8 [k8s.io] HostPath should give a volume the correct mode [Conformance]
4 0 7 [k8s.io] HostPath should support r/w
4 0 7 [k8s.io] HostPath should support subPath [Conformance]
4 0 24 [k8s.io] InitContainer should invoke init containers on a RestartAlways pod
4 0 8 [k8s.io] InitContainer should invoke init containers on a RestartNever pod
4 0 7 [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod
4 0 65 [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod
4 0 9 [k8s.io] Job should delete a job
4 0 57 [k8s.io] Job should fail a job
4 0 22 [k8s.io] Job should keep restarting failed pods
4 0 11 [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted
4 0 11 [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted
4 0 9 [k8s.io] Job should run a job to completion when tasks succeed
4 0 74 [k8s.io] Job should scale a job down
4 0 49 [k8s.io] Job should scale a job up
4 0 5 [k8s.io] Kibana Logging Instances Is Alive should check that the Kibana logging instance is alive
4 0 82 [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
4 0 5 [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
4 0 126 [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC
4 0 6 [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC
4 0 5 [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
4 0 5 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes
4 0 5 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes
4 0 5 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes
4 0 21 [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
4 0 27 [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
4 0 8 [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
4 0 22 [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
4 0 22 [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
4 0 23 [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
4 0 29 [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
4 0 8 [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
4 0 10 [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
4 0 12 [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
4 0 16 [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
4 0 132 [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
4 0 23 [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
4 0 6 [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node
4 0 6 [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node
4 0 5 [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
4 0 5 [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
4 0 5 [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
4 0 40 [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes
4 0 16 [k8s.io] Kubectl client [k8s.io] Simple pod should support exec
4 0 24 [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy
4 0 35 [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach
4 0 14 [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward
4 0 23 [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
4 0 34 [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
4 0 34 [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
4 0 39 [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
4 0 38 [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file
4 0 12 [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive
4 0 125 [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
4 0 5 [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager.
4 0 5 [k8s.io] MetricsGrabber should grab all metrics from a Kubelet.
4 0 5 [k8s.io] MetricsGrabber should grab all metrics from a Scheduler.
4 0 5 [k8s.io] MetricsGrabber should grab all metrics from API server.
4 0 6 [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster.
4 0 151 [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
4 0 24 [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted.
4 0 6 [k8s.io] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted.
4 0 10 [k8s.io] Network should set TCP CLOSE_WAIT timeout
4 0 58 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance]
4 0 55 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance]
4 0 58 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance]
4 0 60 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance]
4 0 57 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http
4 0 59 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp
4 0 57 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http
4 0 69 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp
4 0 59 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http
4 0 58 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp
4 0 71 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http
4 0 66 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp
4 0 78 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow]
4 0 200 [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow]
4 0 55 [k8s.io] Networking should check kube-proxy urls
4 0 7 [k8s.io] Networking should provide Internet connection for containers [Conformance]
4 0 5 [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services [Conformance]
4 0 23 [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors
4 0 167 [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow]
4 0 124 [k8s.io] Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully. [Slow]
4 0 66 [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow]
4 0 197 [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow]
4 0 132 [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow]
4 0 90 [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow]
4 0 12 [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance]
4 0 11 [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance]
4 0 12 [k8s.io] Pods should be submitted and removed [Conformance]
4 0 7 [k8s.io] Pods should be updated [Conformance]
4 0 1625 [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow]
4 0 9 [k8s.io] Pods should contain environment variables for services [Conformance]
4 0 6 [k8s.io] Pods should get a host IP [Conformance]
4 0 358 [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow]
4 0 7 [k8s.io] Pods should support remote command execution over websockets
4 0 36 [k8s.io] Pods should support retrieving logs from the container over websockets
4 0 30 [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance]
4 0 28 [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance]
4 0 29 [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance]
4 0 49 [k8s.io] PreStop should call prestop when killing a pod [Conformance]
4 0 38 [k8s.io] PrivilegedPod should test privileged pod
4 0 130 [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [Conformance]
4 0 127 [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
4 0 27 [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance]
4 0 62 [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
4 0 151 [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow]
4 0 40 [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance]
4 0 80 [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance]
4 0 5 [k8s.io] Proxy version v1 should proxy logs on node [Conformance]
4 0 5 [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
4 0 5 [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance]
4 0 5 [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
4 0 27 [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance]
4 0 5 [k8s.io] Proxy version v1 should proxy to cadvisor [Conformance]
4 0 5 [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource [Conformance]
4 0 19 [k8s.io] ReplicaSet should serve a basic image on each replica with a private image
4 0 18 [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
4 0 26 [k8s.io] ReplicationController should serve a basic image on each replica with a private image
4 0 26 [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance]
4 0 81 [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available
4 0 11 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap.
4 0 11 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim.
4 0 13 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod.
4 0 11 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller.
4 0 11 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret.
4 0 11 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service.
4 0 7 [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated.
4 0 21 [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope.
4 0 21 [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes.
4 0 116 [k8s.io] ScheduledJob should not emit unexpected warnings
4 0 305 [k8s.io] ScheduledJob should not schedule jobs when suspended [Slow]
4 0 340 [k8s.io] ScheduledJob should not schedule new jobs when ForbidConcurrent [Slow]
4 0 166 [k8s.io] ScheduledJob should replace jobs when ReplaceConcurrent
4 0 133 [k8s.io] ScheduledJob should schedule multiple jobs concurrently
4 0 203 [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow]
4 0 35 [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
4 0 15 [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected
4 0 15 [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid
4 0 26 [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work
4 0 23 [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work
4 0 30 [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching
4 0 23 [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching
4 0 23 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching
4 0 23 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities
4 0 53 [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2
4 0 29 [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
4 0 23 [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
4 0 30 [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
4 0 23 [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
4 0 23 [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
4 0 42 [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
4 0 7 [k8s.io] Secrets should be consumable from pods in env vars [Conformance]
4 0 7 [k8s.io] Secrets should be consumable from pods in volume [Conformance]
4 0 11 [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance]
4 0 8 [k8s.io] Secrets should be consumable from pods in volume with Mode set in the item [Conformance]
4 0 7 [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance]
4 0 47 [k8s.io] Service endpoints latency should not be very high [Conformance]
4 0 13 [k8s.io] ServiceAccounts should ensure a single API token exists
4 0 13 [k8s.io] ServiceAccounts should mount an API token into pods [Conformance]
4 0 41 [k8s.io] Services should be able to create a functioning NodePort service
4 0 54 [k8s.io] Services should be able to up and down services
4 0 5 [k8s.io] Services should check NodePort out-of-range
4 0 9 [k8s.io] Services should create endpoints for unready pods
4 0 26 [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP
4 0 5 [k8s.io] Services should prevent NodePort collisions
4 0 5 [k8s.io] Services should provide secure master service [Conformance]
4 0 37 [k8s.io] Services should release NodePorts on delete
4 0 15 [k8s.io] Services should serve a basic endpoint from pods [Conformance]
4 0 23 [k8s.io] Services should serve multiport endpoints from pods [Conformance]
4 0 5 [k8s.io] Services should use same NodePort with same port but different protocols
4 0 26 [k8s.io] SSH should SSH to all nodes and run commands
4 0 5 [k8s.io] Staging client repo client should create pods, delete pods, watch pods
4 0 5 [k8s.io] Sysctls should reject invalid sysctls
4 0 9 [k8s.io] V1Job should delete a job
4 0 57 [k8s.io] V1Job should fail a job
4 0 21 [k8s.io] V1Job should keep restarting failed pods
4 0 11 [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted
4 0 12 [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted
4 0 10 [k8s.io] V1Job should run a job to completion when tasks succeed
4 0 74 [k8s.io] V1Job should scale a job down
4 0 49 [k8s.io] V1Job should scale a job up
4 0 8 [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance]
4 0 7 [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance]
4 0 8 [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance]
4 0 8 DumpClusterLogs
4 0 38 Extract
4 0 0 get kubeconfig
4 0 0 IsUp
4 0 0 kubectl version
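A plain-text report in this column layout can be filtered for its failing rows with a short awk one-liner. This is a sketch, not part of the report tooling; it assumes the "Passed Failed Avg Time (s) Test" column order shown in the header above, with the test name occupying everything after the third field:

```shell
# Print the names of tests with a non-zero Failed column (field 2),
# stripping the three leading numeric columns from each matching row.
filter_failures() {
  awk '$2 > 0 { $1=$2=$3=""; sub(/^ +/, ""); print }'
}

# Example against two rows in the report's format:
printf '%s\n' \
  '0 4 136 [k8s.io] Services should be able to change the type and ports of a service [Slow]' \
  '4 0 477 [k8s.io] [HPA] Horizontal pod autoscaling Deployment Should scale from 1 pod to 3 pods and from 3 to 5' \
  | filter_failures
```

The extracted names can then be fed (suitably regex-escaped) to the e2e runner's test-focus mechanism to re-run only the failures.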