Kubernetes 24-Hour Test Report

Job: ci-kubernetes-e2e-gke-cvm-1-5-gci-1-7-upgrade-cluster

Each row lists, for the past 24 hours of runs: the number of passing runs, the number of failing runs, the average duration in seconds, and the test (or job phase) name. Test names are reproduced verbatim from the suite.

Passed  Failed  Avg Time (s)  Test
0 12 85 [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation
0 12 43 [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
0 12 38 [k8s.io] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
0 12 40 [k8s.io] Kubectl client [k8s.io] Kubectl apply should apply a new configuration to an existing RC
0 12 18 [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC
0 12 41 [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
0 12 48 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes
0 12 13 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes
0 12 17 [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes
0 12 16 [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
0 12 13 [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
0 12 20 [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
0 12 17 [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
0 12 18 [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
0 12 39 [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
0 12 14 [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
0 12 14 [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
0 12 14 [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
0 12 51 [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
0 12 47 [k8s.io] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
0 12 14 [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
0 12 14 [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
0 12 14 [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node
0 12 15 [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node
0 12 17 [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
0 12 14 [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
0 12 44 [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
0 12 12 [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes
0 12 46 [k8s.io] Kubectl client [k8s.io] Simple pod should support exec
0 12 44 [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy
0 12 43 [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach
0 12 14 [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward
0 12 17 [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
0 12 15 [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
0 12 15 [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
0 12 58 [k8s.io] Network should set TCP CLOSE_WAIT timeout
0 12 104 [k8s.io] Networking should check kube-proxy urls
0 12 33 [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance]
0 12 37 [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance]
0 12 31 [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance]
0 12 37 [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance]
0 12 55 [k8s.io] Services should be able to create a functioning NodePort service
0 12 357 [k8s.io] Services should be able to up and down services
0 12 435 [k8s.io] Services should create endpoints for unready pods
0 12 176 [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP
0 12 380 [k8s.io] Services should release NodePorts on delete
0 12 608 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed
0 12 625 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy
0 12 1184 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale
0 12 134 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity
0 12 1938 Test
3 9 65 [k8s.io] DNS should provide DNS for the cluster [Conformance]
8 4 2151 [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
8 4 1746 hpa-upgrade
8 4 1459 persistent-volume-upgrade
8 4 2300 UpgradeTest
9 3 981 cluster-upgrade
9 3 1412 configmap-upgrade
9 3 1407 daemonset-upgrade
9 3 1407 job-upgrade
9 3 1412 secret-upgrade
9 3 1409 statefulset-upgrade
11 1 183 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods
11 1 157 [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod
11 1 46 [k8s.io] Cadvisor should be healthy on every node.
11 1 20 [k8s.io] ConfigMap should be consumable from pods in volume [Conformance]
11 1 20 [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance]
11 1 19 [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance]
11 1 24 [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance]
11 1 19 [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set[Conformance]
11 1 24 [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance]
11 1 20 [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance]
11 1 44 [k8s.io] ConfigMap should be consumable via environment variable [Conformance]
11 1 65 [k8s.io] ConfigMap updates should be reflected in volume [Conformance]
11 1 23 [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods
11 1 18 [k8s.io] Deployment deployment should create new pods
11 1 77 [k8s.io] Deployment deployment should delete old replica sets
11 1 23 [k8s.io] Deployment deployment should label adopted RSs and pods
11 1 36 [k8s.io] Deployment deployment should support rollback
11 1 39 [k8s.io] Deployment deployment should support rollback when there's replica set with no revision
11 1 64 [k8s.io] Deployment deployment should support rollover
11 1 53 [k8s.io] Deployment iterative rollouts should eventually progress
11 1 91 [k8s.io] Deployment lack of progress should be reported in the deployment status
11 1 44 [k8s.io] Deployment paused deployment should be able to scale
11 1 61 [k8s.io] Deployment paused deployment should be ignored by the controller
11 1 32 [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones
11 1 67 [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones
11 1 28 [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order
11 1 116 [k8s.io] Deployment scaled rollout deployment should not block on annotation check
11 1 73 [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction
11 1 42 [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
11 1 52 [k8s.io] DisruptionController evictions: no PDB => should allow an eviction
11 1 125 [k8s.io] DisruptionController evictions: too few pods, absolute => should not allow an eviction
11 1 89 [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction
11 1 16 [k8s.io] DisruptionController should create a PodDisruptionBudget
11 1 45 [k8s.io] DisruptionController should update PodDisruptionBudget status
11 1 134 [k8s.io] DNS config map should be able to change configuration
11 1 46 [k8s.io] DNS should provide DNS for ExternalName services
11 1 64 [k8s.io] DNS should provide DNS for services [Conformance]
11 1 54 [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance]
11 1 51 [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance]
11 1 22 [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance]
11 1 55 [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance]
11 1 19 [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance]
11 1 17 [k8s.io] Downward API should provide default limits.cpu/memory from node allocatable [Conformance]
11 1 51 [k8s.io] Downward API should provide pod IP as an env var [Conformance]
11 1 18 [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance]
11 1 49 [k8s.io] Downward API volume should provide container's cpu limit [Conformance]
11 1 21 [k8s.io] Downward API volume should provide container's cpu request [Conformance]
11 1 24 [k8s.io] Downward API volume should provide container's memory limit [Conformance]
11 1 18 [k8s.io] Downward API volume should provide container's memory request [Conformance]
11 1 17 [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance]
11 1 43 [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance]
11 1 21 [k8s.io] Downward API volume should provide podname only [Conformance]
11 1 48 [k8s.io] Downward API volume should set DefaultMode on files [Conformance]
11 1 16 [k8s.io] Downward API volume should set mode on item file [Conformance]
11 1 62 [k8s.io] Downward API volume should update annotations on modification [Conformance]
11 1 54 [k8s.io] Downward API volume should update labels on modification [Conformance]
11 1 21 [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance]
11 1 21 [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance]
11 1 22 [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance]
11 1 19 [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance]
11 1 21 [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance]
11 1 20 [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance]
11 1 19 [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance]
11 1 48 [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance]
11 1 18 [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance]
11 1 24 [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance]
11 1 17 [k8s.io] EmptyDir volumes should support (root,0777,tmpfs) [Conformance]
11 1 50 [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance]
11 1 17 [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance]
11 1 102 [k8s.io] EmptyDir wrapper volumes should not conflict
11 1 57 [k8s.io] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
11 1 22 [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods
11 1 15 [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs
11 1 22 [k8s.io] HostPath should give a volume the correct mode [Conformance]
11 1 51 [k8s.io] HostPath should support r/w
11 1 18 [k8s.io] HostPath should support subPath
11 1 33 [k8s.io] InitContainer should invoke init containers on a RestartAlways pod
11 1 19 [k8s.io] InitContainer should invoke init containers on a RestartNever pod
11 1 22 [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod
11 1 72 [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod
11 1 21 [k8s.io] Job should delete a job
11 1 63 [k8s.io] Job should fail a job
11 1 32 [k8s.io] Job should keep restarting failed pods
11 1 63 [k8s.io] Job should run a job to completion when tasks sometimes fail and are locally restarted
11 1 80 [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted
11 1 28 [k8s.io] Job should run a job to completion when tasks succeed
11 1 88 [k8s.io] Job should scale a job down
11 1 72 [k8s.io] Job should scale a job up
11 1 99 [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
11 1 56 [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance]
11 1 26 [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive
11 1 52 [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
11 1 13 [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager.
11 1 13 [k8s.io] MetricsGrabber should grab all metrics from a Kubelet.
11 1 13 [k8s.io] MetricsGrabber should grab all metrics from a Scheduler.
11 1 46 [k8s.io] MetricsGrabber should grab all metrics from API server.
11 1 78 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance]
11 1 107 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance]
11 1 80 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance]
11 1 78 [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance]
11 1 45 [k8s.io] Networking should provide Internet connection for containers [Conformance]
11 1 44 [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services
11 1 96 [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors
11 1 47 [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance]
11 1 32 [k8s.io] Pods should be submitted and removed [Conformance]
11 1 65 [k8s.io] Pods should be updated [Conformance]
11 1 41 [k8s.io] Pods should contain environment variables for services [Conformance]
11 1 56 [k8s.io] Pods should get a host IP [Conformance]
11 1 83 [k8s.io] Pods should support remote command execution over websockets
11 1 49 [k8s.io] Pods should support retrieving logs from the container over websockets
11 1 56 [k8s.io] PreStop should call prestop when killing a pod [Conformance]
11 1 79 [k8s.io] PrivilegedPod should test privileged pod
11 1 131 [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [Conformance]
11 1 129 [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
11 1 63 [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance]
11 1 63 [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance]
11 1 52 [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance]
11 1 119 [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance]
11 1 39 [k8s.io] Proxy version v1 should proxy logs on node [Conformance]
11 1 38 [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
11 1 46 [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance]
11 1 38 [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
11 1 45 [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance]
11 1 46 [k8s.io] Proxy version v1 should proxy to cadvisor
11 1 13 [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource
11 1 36 [k8s.io] ReplicaSet should serve a basic image on each replica with a private image
11 1 66 [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
11 1 21 [k8s.io] ReplicaSet should surface a failure condition on a common issue like exceeded quota
11 1 87 [k8s.io] ReplicationController should serve a basic image on each replica with a private image
11 1 40 [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance]
11 1 17 [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota
11 1 22 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap.
11 1 54 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim.
11 1 23 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod.
11 1 18 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller.
11 1 51 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret.
11 1 19 [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service.
11 1 20 [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated.
11 1 61 [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope.
11 1 29 [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes.
11 1 30 [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace
11 1 49 [k8s.io] Secrets should be consumable from pods in env vars [Conformance]
11 1 46 [k8s.io] Secrets should be consumable from pods in volume [Conformance]
11 1 16 [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance]
11 1 51 [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance]
11 1 53 [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance]
11 1 17 [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance]
11 1 54 [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance]
11 1 18 [k8s.io] ServiceAccounts should ensure a single API token exists
11 1 24 [k8s.io] ServiceAccounts should mount an API token into pods [Conformance]
11 1 11 [k8s.io] Services should check NodePort out-of-range
11 1 49 [k8s.io] Services should prevent NodePort collisions
11 1 43 [k8s.io] Services should provide secure master service [Conformance]
11 1 68 [k8s.io] Services should serve a basic endpoint from pods [Conformance]
11 1 40 [k8s.io] Services should serve multiport endpoints from pods [Conformance]
11 1 13 [k8s.io] Services should use same NodePort with same port but different protocols
11 1 44 [k8s.io] SSH should SSH to all nodes and run commands
11 1 31 [k8s.io] Staging client repo client should create pods, delete pods, watch pods
11 1 66 [k8s.io] Stateful Set recreate should recreate evicted statefulset
11 1 148 [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates
11 1 16 [k8s.io] Sysctls should reject invalid sysctls
11 1 23 [k8s.io] V1Job should delete a job
11 1 63 [k8s.io] V1Job should fail a job
11 1 27 [k8s.io] V1Job should keep restarting failed pods
11 1 29 [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted
11 1 67 [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted
11 1 24 [k8s.io] V1Job should run a job to completion when tasks succeed
11 1 85 [k8s.io] V1Job should scale a job down
11 1 77 [k8s.io] V1Job should scale a job up
11 1 16 [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance]
11 1 50 [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance]
11 1 49 [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance]
8 1 111 [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable
8 1 116 [k8s.io] GCP Volumes [k8s.io] NFSv3 should be mountable for NFSv3 [Volume]
8 1 180 [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4
0 1 54 [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL
0 1 421 [k8s.io] CronJob should replace jobs when ReplaceConcurrent
0 1 30 [k8s.io] Deployment overlapping deployment should not fight with each other
0 1 422 [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
0 1 62 [k8s.io] Kubectl alpha client [k8s.io] Kubectl run CronJob should create a CronJob
0 1 77 [k8s.io] Kubectl alpha client [k8s.io] Kubectl run ScheduledJob should create a ScheduledJob
0 1 92 [k8s.io] Mesos applies slave attributes as labels
0 1 30 [k8s.io] Mesos schedules pods annotated with roles on correct slaves
0 1 92 [k8s.io] Monitoring should verify monitoring pods and all cluster nodes are available on influxdb using heapster.
0 1 62 [k8s.io] Multi-AZ Clusters should spread the pods of a replication controller across zones
0 1 22 [k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout [Conformance]
0 1 30 [k8s.io] Sysctls should not launch unsafe, but not explicitly enabled sysctls on the node
12 0 0 Deferred TearDown
12 0 15 DumpClusterLogs
12 0 91 Extract
12 0 0 get kubeconfig
12 0 1694 ingress-upgrade
12 0 0 IsUp
12 0 2 kubectl version
12 0 0 list nodes
12 0 6 ListResources After
12 0 7 ListResources Before
12 0 7 ListResources Down
12 0 10 ListResources Up
12 0 1407 service-upgrade
12 0 238 TearDown
12 0 86 TearDown Previous
12 0 258 Up
11 0 20 [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance]
11 0 64 [k8s.io] Service endpoints latency should not be very high [Conformance]
11 0 0 DiffResources
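The rows above are whitespace-delimited, with the free-form test name as the last field. As a rough illustration only (not part of the original report), a small script along these lines could parse such a report and list the tests that failed on every run in the window; the file name "report.txt" and the choice to sort by average duration are assumptions for the example.

def parse_rows(path):
    # Parse lines of the form: "<passed> <failed> <avg_seconds> <test name...>".
    rows = []
    with open(path) as f:
        for line in f:
            parts = line.strip().split(None, 3)  # at most 4 fields; name keeps its spaces
            if len(parts) != 4:
                continue  # skip blank lines and the job line
            try:
                passed, failed, avg_s = int(parts[0]), int(parts[1]), int(parts[2])
            except ValueError:
                continue  # skip the title and the column header
            rows.append((passed, failed, avg_s, parts[3]))
    return rows

if __name__ == "__main__":
    # "report.txt" is an assumed local copy of the table above.
    rows = parse_rows("report.txt")
    always_failing = [r for r in rows if r[0] == 0 and r[1] > 0]
    for passed, failed, avg_s, name in sorted(always_failing, key=lambda r: -r[2]):
        print(f"{failed:>3} failures, avg {avg_s:>5}s  {name}")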