{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "MhoQ0WE77laV" }, "source": [ "##### Copyright 2019 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "_ckMIh7O7s6D" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "jYysdyb-CaWM" }, "source": [ "# Custom training with tf.distribute.Strategy" ] }, { "cell_type": "markdown", "metadata": { "id": "S5Uhzt6vVIB2" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " View on TensorFlow.org\n", " \n", " Run in Google Colab\n", " \n", " View source on GitHub\n", " \n", " Download notebook\n", "
" ] }, { "cell_type": "markdown", "metadata": { "id": "FbVhjPpzn6BM" }, "source": [ "This tutorial demonstrates how to use `tf.distribute.Strategy`—a TensorFlow API that provides an abstraction for [distributing your training](../../guide/distributed_training.ipynb) across multiple processing units (GPUs, multiple machines, or TPUs)—with custom training loops. In this example, you will train a simple convolutional neural network on the [Fashion MNIST dataset](https://github.com/zalandoresearch/fashion-mnist) containing 70,000 images of size 28 x 28.\n", "\n", "[Custom training loops](../customization/custom_training_walkthrough.ipynb) provide flexibility and a greater control on training. They also make it easier to debug the model and the training loop." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "dzLKpmZICaWN" }, "outputs": [], "source": [ "# Import TensorFlow\n", "import tensorflow as tf\n", "\n", "# Helper libraries\n", "import numpy as np\n", "import os\n", "\n", "print(tf.__version__)" ] }, { "cell_type": "markdown", "metadata": { "id": "MM6W__qraV55" }, "source": [ "## Download the Fashion MNIST dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7MqDQO0KCaWS" }, "outputs": [], "source": [ "fashion_mnist = tf.keras.datasets.fashion_mnist\n", "\n", "(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()\n", "\n", "# Add a dimension to the array -> new shape == (28, 28, 1)\n", "# This is done because the first layer in our model is a convolutional\n", "# layer and it requires a 4D input (batch_size, height, width, channels).\n", "# batch_size dimension will be added later on.\n", "train_images = train_images[..., None]\n", "test_images = test_images[..., None]\n", "\n", "# Scale the images to the [0, 1] range.\n", "train_images = train_images / np.float32(255)\n", "test_images = test_images / np.float32(255)" ] }, { "cell_type": "markdown", "metadata": { "id": "4AXoHhrsbdF3" }, "source": [ "## Create a strategy to distribute the variables and the graph" ] }, { "cell_type": "markdown", "metadata": { "id": "5mVuLZhbem8d" }, "source": [ "How does `tf.distribute.MirroredStrategy` strategy work?\n", "\n", "* All the variables and the model graph are replicated across the replicas.\n", "* Input is evenly distributed across the replicas.\n", "* Each replica calculates the loss and gradients for the input it received.\n", "* The gradients are synced across all the replicas by **summing** them.\n", "* After the sync, the same update is made to the copies of the variables on each replica.\n", "\n", "Note: You can put all the code below inside a single scope. 
This example divides it into several code cells for illustration purposes.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "F2VeZUWUj5S4" }, "outputs": [], "source": [ "# If the list of devices is not specified in\n", "# `tf.distribute.MirroredStrategy` constructor, they will be auto-detected.\n", "strategy = tf.distribute.MirroredStrategy()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ZngeM_2o0_JO" }, "outputs": [], "source": [ "print('Number of devices: {}'.format(strategy.num_replicas_in_sync))" ] }, { "cell_type": "markdown", "metadata": { "id": "k53F5I_IiGyI" }, "source": [ "## Setup input pipeline" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "jwJtsCQhHK-E" }, "outputs": [], "source": [ "BUFFER_SIZE = len(train_images)\n", "\n", "BATCH_SIZE_PER_REPLICA = 64\n", "GLOBAL_BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync\n", "\n", "EPOCHS = 10" ] }, { "cell_type": "markdown", "metadata": { "id": "J7fj3GskHC8g" }, "source": [ "Create the datasets and distribute them:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "WYrMNNDhAvVl" }, "outputs": [], "source": [ "train_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).shuffle(BUFFER_SIZE).batch(GLOBAL_BATCH_SIZE)\n", "test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(GLOBAL_BATCH_SIZE)\n", "\n", "train_dist_dataset = strategy.experimental_distribute_dataset(train_dataset)\n", "test_dist_dataset = strategy.experimental_distribute_dataset(test_dataset)" ] }, { "cell_type": "markdown", "metadata": { "id": "bAXAo_wWbWSb" }, "source": [ "## Create the model\n", "\n", "Create a model using `tf.keras.Sequential`. You can also use the [Model Subclassing API](https://www.tensorflow.org/guide/keras/custom_layers_and_models) or the [functional API](https://www.tensorflow.org/guide/keras/functional) to do this." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "9ODch-OFCaW4" }, "outputs": [], "source": [ "def create_model():\n", " regularizer = tf.keras.regularizers.L2(1e-5)\n", " model = tf.keras.Sequential([\n", " tf.keras.layers.Conv2D(32, 3,\n", " activation='relu',\n", " kernel_regularizer=regularizer),\n", " tf.keras.layers.MaxPooling2D(),\n", " tf.keras.layers.Conv2D(64, 3,\n", " activation='relu',\n", " kernel_regularizer=regularizer),\n", " tf.keras.layers.MaxPooling2D(),\n", " tf.keras.layers.Flatten(),\n", " tf.keras.layers.Dense(64,\n", " activation='relu',\n", " kernel_regularizer=regularizer),\n", " tf.keras.layers.Dense(10, kernel_regularizer=regularizer)\n", " ])\n", "\n", " return model" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "9iagoTBfijUz" }, "outputs": [], "source": [ "# Create a checkpoint directory to store the checkpoints.\n", "checkpoint_dir = './training_checkpoints'\n", "checkpoint_prefix = os.path.join(checkpoint_dir, \"ckpt\")" ] }, { "cell_type": "markdown", "metadata": { "id": "0-VVTqDEICrl" }, "source": [ "## Define the loss function\n", "\n", "Recall that the loss function consists of one or two parts:\n", "\n", " * The **prediction loss** measures how far off the model's predictions are from the training labels for a batch of training examples. 
It is computed for each labeled example and then reduced across the batch by computing the average value.\n", " * Optionally, **regularization loss** terms can be added to the prediction loss, to steer the model away from overfitting the training data. A common choice is L2 regularization, which adds a small fixed multiple of the sum of squares of all model weights, independent of the number of examples. The model above uses L2 regularization to demonstrate its handling in the training loop below.\n", "\n", "For training on a single machine with a single GPU/CPU, this works as follows:\n", "\n", " * The prediction loss is computed for each example in the batch, summed across the batch, and then divided by the batch size.\n", " * The regularization loss is added to the prediction loss.\n", " * The gradient of the total loss is computed w.r.t. each model weight, and the optimizer updates each model weight from the corresponding gradient.\n", "\n", "With `tf.distribute.Strategy`, the input batch is split between replicas.\n", "For example, let's say you have 4 GPUs, each with one replica of the model. One batch of 256 input examples is distributed evenly across the 4 replicas, so each replica gets a batch of size 64: We have `256 = 4*64`, or generally `GLOBAL_BATCH_SIZE = num_replicas_in_sync * BATCH_SIZE_PER_REPLICA`.\n", "\n", "Each replica computes the loss from the training examples it gets and computes the gradients of the loss w.r.t. each model weight. The optimizer takes care that these **gradients are summed up across replicas** before using them to update the copies of the model weights on each replica.\n", "\n", "*So, how should the loss be calculated when using a `tf.distribute.Strategy`?*\n", "\n", " * Each replica computes the prediction loss for all examples distributed to it, sums up the results and divides them by `num_replicas_in_sync * BATCH_SIZE_PER_REPLICA`, or equivalently, `GLOBAL_BATCH_SIZE`.\n", " * Each replica computes the regularization loss(es) and divides them by\n", " `num_replicas_in_sync`.\n", "\n", "Compared to non-distributed training, all per-replica loss terms are scaled down by a factor of `1/num_replicas_in_sync`. On the other hand, all loss terms -- or rather, their gradients -- are summed across that number of replicas before the optimizer applies them. In effect, the optimizer on each replica uses the same gradients as if a non-distributed computation with `GLOBAL_BATCH_SIZE` had happened. This is consistent with the distributed and undistributed behavior of Keras `Model.fit`. See the [Distributed training with Keras](./keras.ipynb) tutorial on how a larger global batch size enables scaling up the learning rate." ] }, { "cell_type": "markdown", "metadata": { "id": "e-wlFFZbP33n" }, "source": [ "*How to do this in TensorFlow?*\n", "\n", " * Loss reduction and scaling is done automatically in Keras `Model.compile` and `Model.fit`.\n", "\n", " * If you're writing a custom training loop, as in this tutorial, you should sum the per-example losses and divide the sum by the global batch size using `tf.nn.compute_average_loss`, which takes the per-example losses and\n", "optional sample weights as arguments and returns the scaled loss.\n", "\n", " * If using `tf.keras.losses` classes (as in the example below), the loss reduction needs to be explicitly specified to be one of `NONE` or `SUM`. 
The default `AUTO` and `SUM_OVER_BATCH_SIZE` are disallowed outside `Model.fit`.\n", " * `AUTO` is disallowed because the user should explicitly think about what reduction they want, to make sure it is correct in the distributed case.\n", " * `SUM_OVER_BATCH_SIZE` is disallowed because currently it would only divide by the per-replica batch size, and leave the division by the number of replicas to the user, which might be easy to miss. So, instead, you need to do the reduction yourself explicitly.\n", "\n", " * If you're writing a custom training loop for a model with a non-empty list of `Model.losses` (e.g., weight regularizers), you should sum them up and divide the sum by the number of replicas. You can do this by using the `tf.nn.scale_regularization_loss` function. The model code itself remains unaware of the number of replicas.\n", "\n", " However, models can define input-dependent regularization losses with Keras APIs such as `Layer.add_loss(...)` and `Layer(activity_regularizer=...)`. For `Layer.add_loss(...)`, it falls on the modeling code to perform the division of the summed per-example terms by the per-replica(!) batch size, e.g., by using `tf.math.reduce_mean()`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "R144Wci782ix" }, "outputs": [], "source": [ "with strategy.scope():\n", "  # Set reduction to `NONE` so you can do the reduction yourself.\n", "  loss_object = tf.keras.losses.SparseCategoricalCrossentropy(\n", "      from_logits=True,\n", "      reduction=tf.keras.losses.Reduction.NONE)\n", "  def compute_loss(labels, predictions, model_losses):\n", "    per_example_loss = loss_object(labels, predictions)\n", "    loss = tf.nn.compute_average_loss(per_example_loss)\n", "    if model_losses:\n", "      loss += tf.nn.scale_regularization_loss(tf.add_n(model_losses))\n", "    return loss" ] }, { "cell_type": "markdown", "metadata": { "id": "6pM96bqQY52D" }, "source": [ "### Special cases\n", "\n", "Advanced users should also consider the following special cases.\n", "\n", " * Input batches shorter than `GLOBAL_BATCH_SIZE` create unpleasant corner cases in several places. In practice, it often works best to avoid them by allowing batches to span epoch boundaries using `Dataset.repeat().batch()` and defining approximate epochs by step counts, not dataset ends. Alternatively, `Dataset.batch(drop_remainder=True)` maintains the notion of epoch but drops the last few examples.\n", "\n", " For illustration, this example goes the harder route and allows short batches, so that each training epoch contains each training example exactly once.\n", " \n", " Which denominator should be used by `tf.nn.compute_average_loss()`?\n", "\n", " * By default, in the example code above and equivalently in Keras `Model.fit()`, the sum of prediction losses is divided by `num_replicas_in_sync` times the actual batch size seen on the replica (with empty batches silently ignored). This preserves the balance between the prediction loss on the one hand and the regularization losses on the other hand. It is particularly appropriate for models that use input-dependent regularization losses. Plain L2 regularization just superimposes weight decay onto the gradients of the prediction loss and is less in need of such a balance.\n", " * In practice, many custom training loops pass the global batch size as a constant Python value, as in `tf.nn.compute_average_loss(..., global_batch_size=GLOBAL_BATCH_SIZE)`, to use it as the denominator. This preserves the relative weighting of training examples between batches. 
Without it, the smaller denominator in short batches effectively upweights the examples in those. (Before TensorFlow 2.13, this was also needed to avoid NaNs in case some replica received an actual batch size of zero.)\n", " \n", " Both options are equivalent if short batches are avoided, as suggested above.\n", "\n", " * Multi-dimensional `labels` require you to average the `per_example_loss` across the number of predictions in each example. Consider a classification task for all pixels of an input image, with `predictions` of shape `(batch_size, H, W, n_classes)` and `labels` of shape `(batch_size, H, W)`. You will need to update `per_example_loss` like: `per_example_loss /= tf.cast(tf.reduce_prod(tf.shape(labels)[1:]), tf.float32)`\n", "\n", " Caution: **Verify the shape of your loss**.\n", " Loss functions in `tf.losses`/`tf.keras.losses` typically\n", " return the average over the last dimension of the input. The loss\n", " classes wrap these functions. Passing `reduction=Reduction.NONE` when\n", " creating an instance of a loss class means \"no **additional** reduction\".\n", " For categorical losses with an example input shape of `[batch, W, H, n_classes]` the `n_classes`\n", " dimension is reduced. For pointwise losses like\n", " `losses.mean_squared_error` or `losses.binary_crossentropy` include a\n", " dummy axis so that `[batch, W, H, 1]` is reduced to `[batch, W, H]`. Without\n", " the dummy axis `[batch, W, H]` will be incorrectly reduced to `[batch, W]`." ] }, { "cell_type": "markdown", "metadata": { "id": "w8y54-o9T2Ni" }, "source": [ "## Define the metrics to track loss and accuracy\n", "\n", "These metrics track the test loss and training and test accuracy. You can use `.result()` to get the accumulated statistics at any time." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zt3AHb46Tr3w" }, "outputs": [], "source": [ "with strategy.scope():\n", " test_loss = tf.keras.metrics.Mean(name='test_loss')\n", "\n", " train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(\n", " name='train_accuracy')\n", " test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(\n", " name='test_accuracy')" ] }, { "cell_type": "markdown", "metadata": { "id": "iuKuNXPORfqJ" }, "source": [ "## Training loop" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "OrMmakq5EqeQ" }, "outputs": [], "source": [ "# A model, an optimizer, and a checkpoint must be created under `strategy.scope`.\n", "with strategy.scope():\n", " model = create_model()\n", "\n", " optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)\n", "\n", " checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "3UX43wUu04EL" }, "outputs": [], "source": [ "def train_step(inputs):\n", " images, labels = inputs\n", "\n", " with tf.GradientTape() as tape:\n", " predictions = model(images, training=True)\n", " loss = compute_loss(labels, predictions, model.losses)\n", "\n", " gradients = tape.gradient(loss, model.trainable_variables)\n", " optimizer.apply_gradients(zip(gradients, model.trainable_variables))\n", "\n", " train_accuracy.update_state(labels, predictions)\n", " return loss\n", "\n", "def test_step(inputs):\n", " images, labels = inputs\n", "\n", " predictions = model(images, training=False)\n", " t_loss = loss_object(labels, predictions)\n", "\n", " test_loss.update_state(t_loss)\n", " test_accuracy.update_state(labels, predictions)" ] }, { "cell_type": "code", "execution_count": null, 
"metadata": { "id": "gX975dMSNw0e" }, "outputs": [], "source": [ "# `run` replicates the provided computation and runs it\n", "# with the distributed input.\n", "@tf.function\n", "def distributed_train_step(dataset_inputs):\n", " per_replica_losses = strategy.run(train_step, args=(dataset_inputs,))\n", " return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses,\n", " axis=None)\n", "\n", "@tf.function\n", "def distributed_test_step(dataset_inputs):\n", " return strategy.run(test_step, args=(dataset_inputs,))\n", "\n", "for epoch in range(EPOCHS):\n", " # TRAIN LOOP\n", " total_loss = 0.0\n", " num_batches = 0\n", " for x in train_dist_dataset:\n", " total_loss += distributed_train_step(x)\n", " num_batches += 1\n", " train_loss = total_loss / num_batches\n", "\n", " # TEST LOOP\n", " for x in test_dist_dataset:\n", " distributed_test_step(x)\n", "\n", " if epoch % 2 == 0:\n", " checkpoint.save(checkpoint_prefix)\n", "\n", " template = (\"Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, \"\n", " \"Test Accuracy: {}\")\n", " print(template.format(epoch + 1, train_loss,\n", " train_accuracy.result() * 100, test_loss.result(),\n", " test_accuracy.result() * 100))\n", "\n", " test_loss.reset_states()\n", " train_accuracy.reset_states()\n", " test_accuracy.reset_states()" ] }, { "cell_type": "markdown", "metadata": { "id": "Z1YvXqOpwy08" }, "source": [ "### Things to note in the example above\n", "\n", "* Iterate over the `train_dist_dataset` and `test_dist_dataset` using a `for x in ...` construct.\n", "* The scaled loss is the return value of the `distributed_train_step`. This value is aggregated across replicas using the `tf.distribute.Strategy.reduce` call and then across batches by summing the return value of the `tf.distribute.Strategy.reduce` calls.\n", "* `tf.keras.Metrics` should be updated inside `train_step` and `test_step` that gets executed by `tf.distribute.Strategy.run`.\n", "* `tf.distribute.Strategy.run` returns results from each local replica in the strategy, and there are multiple ways to consume this result. You can do `tf.distribute.Strategy.reduce` to get an aggregated value. You can also do `tf.distribute.Strategy.experimental_local_results` to get the list of values contained in the result, one per local replica.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "-q5qp31IQD8t" }, "source": [ "## Restore the latest checkpoint and test" ] }, { "cell_type": "markdown", "metadata": { "id": "WNW2P00bkMGJ" }, "source": [ "A model checkpointed with a `tf.distribute.Strategy` can be restored with or without a strategy." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "pg3B-Cw_cn3a" }, "outputs": [], "source": [ "eval_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(\n", "    name='eval_accuracy')\n", "\n", "new_model = create_model()\n", "new_optimizer = tf.keras.optimizers.Adam()\n", "\n", "test_dataset = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(GLOBAL_BATCH_SIZE)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7qYii7KUYiSM" }, "outputs": [], "source": [ "@tf.function\n", "def eval_step(images, labels):\n", "  predictions = new_model(images, training=False)\n", "  eval_accuracy(labels, predictions)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "LeZ6eeWRoUNq" }, "outputs": [], "source": [ "checkpoint = tf.train.Checkpoint(optimizer=new_optimizer, model=new_model)\n", "checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))\n", "\n", "for images, labels in test_dataset:\n", "  eval_step(images, labels)\n", "\n", "print('Accuracy after restoring the saved model without strategy: {}'.format(\n", "    eval_accuracy.result() * 100))" ] }, { "cell_type": "markdown", "metadata": { "id": "EbcI87EEzhzg" }, "source": [ "## Alternate ways of iterating over a dataset\n", "\n", "### Using iterators\n", "\n", "If you want to iterate over a given number of steps and not through the entire dataset, you can create an iterator using the `iter` call and explicitly call `next` on the iterator. You can choose to iterate over the dataset both inside and outside the `tf.function`. Here is a small snippet demonstrating iteration over the dataset outside the `tf.function` using an iterator.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7c73wGC00CzN" }, "outputs": [], "source": [ "for epoch in range(EPOCHS):\n", "  total_loss = 0.0\n", "  num_batches = 0\n", "  train_iter = iter(train_dist_dataset)\n", "\n", "  for _ in range(10):\n", "    total_loss += distributed_train_step(next(train_iter))\n", "    num_batches += 1\n", "  average_train_loss = total_loss / num_batches\n", "\n", "  template = (\"Epoch {}, Loss: {}, Accuracy: {}\")\n", "  print(template.format(epoch + 1, average_train_loss, train_accuracy.result() * 100))\n", "  train_accuracy.reset_states()" ] }, { "cell_type": "markdown", "metadata": { "id": "GxVp48Oy0m6y" }, "source": [ "### Iterating inside a `tf.function`\n", "\n", "You can also iterate over the entire input `train_dist_dataset` inside a `tf.function` using the `for x in ...` construct or by creating iterators like you did above. The example below demonstrates wrapping one epoch of training with a `@tf.function` decorator and iterating over `train_dist_dataset` inside the function."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "-REzmcXv00qm" }, "outputs": [], "source": [ "@tf.function\n", "def distributed_train_epoch(dataset):\n", "  total_loss = 0.0\n", "  num_batches = 0\n", "  for x in dataset:\n", "    per_replica_losses = strategy.run(train_step, args=(x,))\n", "    total_loss += strategy.reduce(\n", "        tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)\n", "    num_batches += 1\n", "  return total_loss / tf.cast(num_batches, dtype=tf.float32)\n", "\n", "for epoch in range(EPOCHS):\n", "  train_loss = distributed_train_epoch(train_dist_dataset)\n", "\n", "  template = (\"Epoch {}, Loss: {}, Accuracy: {}\")\n", "  print(template.format(epoch + 1, train_loss, train_accuracy.result() * 100))\n", "\n", "  train_accuracy.reset_states()" ] }, { "cell_type": "markdown", "metadata": { "id": "MuZGXiyC7ABR" }, "source": [ "### Tracking training loss across replicas\n", "\n", "Note: As a general rule, you should use `tf.keras.Metrics` to track per-sample values and avoid values that have been aggregated within a replica.\n", "\n", "Because of the loss scaling computation that is carried out, it's not recommended to use `tf.keras.metrics.Mean` to track the training loss across different replicas.\n", "\n", "For example, if you run a training job with the following characteristics:\n", "\n", "* Two replicas\n", "* Two samples are processed on each replica\n", "* Resulting loss values: [2, 3] and [4, 5] on each replica\n", "* Global batch size = 4\n", "\n", "With loss scaling, you calculate the scaled loss value on each replica by adding the loss values, and then dividing by the global batch size. In this case: `(2 + 3) / 4 = 1.25` and `(4 + 5) / 4 = 2.25`.\n", "\n", "If you use `tf.keras.metrics.Mean` to track loss across the two replicas, the result is different. In this example, you end up with a `total` of 3.50 and `count` of 2, which results in `total`/`count` = 1.75 when `result()` is called on the metric. Loss calculated with `tf.keras.Metrics` is scaled by an additional factor that is equal to the number of replicas in sync. A short code sketch after the examples below makes this concrete." ] }, { "cell_type": "markdown", "metadata": { "id": "xisYJaV9KZTN" }, "source": [ "### Guide and examples\n", "\n", "Here are some examples that use distribution strategies with custom training loops:\n", "\n", "1. [Distributed training guide](../../guide/distributed_training)\n", "2. [DenseNet](https://github.com/tensorflow/examples/blob/master/tensorflow_examples/models/densenet/distributed_train.py) example using `MirroredStrategy`.\n", "3. [BERT](https://github.com/tensorflow/models/blob/master/official/legacy/bert/run_classifier.py) example trained using `MirroredStrategy` and `TPUStrategy`.\n", "This example is particularly helpful for understanding how to load from a checkpoint and generate periodic checkpoints during distributed training.\n", "4. [NCF](https://github.com/tensorflow/models/blob/master/official/recommendation/ncf_keras_main.py) example trained using `MirroredStrategy` that can be enabled using the `keras_use_ctl` flag.\n", "5. [NMT](https://github.com/tensorflow/examples/blob/master/tensorflow_examples/models/nmt_with_attention/distributed_train.py) example trained using `MirroredStrategy`.\n", "\n", "You can find more examples listed under _Examples and tutorials_ in the [Distribution strategy guide](../../guide/distributed_training.ipynb)."
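, "\n", "\n", "To make the arithmetic in the *Tracking training loss across replicas* section above concrete, here is a minimal sketch that feeds the two scaled per-replica loss values from that example into a `tf.keras.metrics.Mean` metric. The variable names are illustrative." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimal sketch: why tf.keras.metrics.Mean over-reports the scaled training loss.\n", "# Scaled per-replica losses from the example above: (2 + 3) / 4 and (4 + 5) / 4.\n", "scaled_per_replica_losses = [1.25, 2.25]\n", "\n", "loss_metric = tf.keras.metrics.Mean()\n", "for value in scaled_per_replica_losses:\n", "  loss_metric.update_state(value)\n", "\n", "# Prints 1.75: the true average per-sample loss of 3.5 divided by the\n", "# number of replicas (2), illustrating the extra 1/num_replicas factor.\n", "print(loss_metric.result().numpy())"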
] }, { "cell_type": "markdown", "metadata": { "id": "6hEJNsokjOKs" }, "source": [ "## Next steps\n", "\n", "* Try out the new `tf.distribute.Strategy` API on your models.\n", "* Visit the [Better performance with `tf.function`](../../guide/function.ipynb) and [TensorFlow Profiler](../../guide/profiler.md) guides to learn more about tools to optimize the performance of your TensorFlow models.\n", "* Check out the [Distributed training in TensorFlow](../../guide/distributed_training.ipynb) guide, which provides an overview of the available distribution strategies." ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "custom_training.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }