{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "SB93Ge748VQs" }, "source": [ "##### Copyright 2019 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "cellView": "form", "execution": { "iopub.execute_input": "2025-06-26T11:07:04.263787Z", "iopub.status.busy": "2025-06-26T11:07:04.263187Z", "iopub.status.idle": "2025-06-26T11:07:04.267052Z", "shell.execute_reply": "2025-06-26T11:07:04.266472Z" }, "id": "0sK8X2O9bTlz" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "HEYuO5NFwDK9" }, "source": [ "# Migrating tf.summary usage to TF 2.x\n", "\n", "\n", " \n", " \n", " \n", " \n", "
\n", " View on TensorFlow.org\n", " \n", " Run in Google Colab\n", " \n", " View source on GitHub\n", " \n", " Download notebook\n", "
" ] }, { "cell_type": "markdown", "metadata": { "id": "56V5oun18ZdZ" }, "source": [ "> Note: This doc is for people who are already familiar with TensorFlow 1.x TensorBoard and who want to migrate large TensorFlow code bases from TensorFlow 1.x to 2.x. If you're new to TensorBoard, see the [get started](get_started.ipynb) doc instead. If you are using `tf.keras` there may be no action you need to take to upgrade to TensorFlow 2.x. \n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "execution": { "iopub.execute_input": "2025-06-26T11:07:04.270202Z", "iopub.status.busy": "2025-06-26T11:07:04.269700Z", "iopub.status.idle": "2025-06-26T11:07:07.364008Z", "shell.execute_reply": "2025-06-26T11:07:07.363255Z" }, "id": "c50hsFk2MiWs" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2025-06-26 11:07:04.576431: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n", "WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\n", "E0000 00:00:1750936024.597332 7254 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n", "E0000 00:00:1750936024.603809 7254 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n", "W0000 00:00:1750936024.620507 7254 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\n", "W0000 00:00:1750936024.620529 7254 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\n", "W0000 00:00:1750936024.620531 7254 computation_placer.cc:177] computation placer already registered. 
Please check linkage and avoid linking the same target more than once.\n", "W0000 00:00:1750936024.620534 7254 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\n" ] } ], "source": [ "import tensorflow as tf" ] }, { "cell_type": "markdown", "metadata": { "id": "56XvRdPy-ewT" }, "source": [ "TensorFlow 2.x includes significant changes to the `tf.summary` API used to write summary data for visualization in TensorBoard." ] }, { "cell_type": "markdown", "metadata": { "id": "V_JOBTVzU5Cx" }, "source": [ "## What's changed\n", "\n", "It's useful to think of the `tf.summary` API as two sub-APIs:\n", "\n", "- A set of ops for recording individual summaries - `summary.scalar()`, `summary.histogram()`, `summary.image()`, `summary.audio()`, and `summary.text()` - which are called inline from your model code.\n", "- Writing logic that collects these individual summaries and writes them to a specially formatted log file (which TensorBoard then reads to generate visualizations)." ] }, { "cell_type": "markdown", "metadata": { "id": "9-rVv-EYU8_E" }, "source": [ "### In TF 1.x\n", "\n", "The two halves had to be manually wired together - by fetching the summary op outputs via `Session.run()` and calling `FileWriter.add_summary(output, step)`. The `v1.summary.merge_all()` op made this easier by using a graph collection to aggregate all summary op outputs, but this approach still worked poorly for eager execution and control flow, making it especially ill-suited for TF 2.x." ] }, { "cell_type": "markdown", "metadata": { "id": "rh8R2g5FWbsQ" }, "source": [ "### In TF 2.x\n", "\n", "The two halves are tightly integrated, and now individual `tf.summary` ops write their data immediately when executed. Using the API from your model code should still look familiar, but it's now friendly to eager execution while remaining graph-mode compatible. 
Integrating both halves of the API means the `summary.FileWriter` is now part of the TensorFlow execution context and gets accessed directly by `tf.summary` ops, so configuring writers is the main part that looks different." ] }, { "cell_type": "markdown", "metadata": { "id": "em7GQju5VA0I" }, "source": [ "Example usage with eager execution, the default in TF 2.x:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "execution": { "iopub.execute_input": "2025-06-26T11:07:07.368150Z", "iopub.status.busy": "2025-06-26T11:07:07.367723Z", "iopub.status.idle": "2025-06-26T11:07:10.012973Z", "shell.execute_reply": "2025-06-26T11:07:10.012273Z" }, "id": "GgFXOtSeVFqP" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "I0000 00:00:1750936029.169573 7254 gpu_device.cc:2019] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 13680 MB memory: -> device: 0, name: Tesla T4, pci bus id: 0000:00:05.0, compute capability: 7.5\n", "I0000 00:00:1750936029.171874 7254 gpu_device.cc:2019] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 13756 MB memory: -> device: 1, name: Tesla T4, pci bus id: 0000:00:06.0, compute capability: 7.5\n", "I0000 00:00:1750936029.174126 7254 gpu_device.cc:2019] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 13756 MB memory: -> device: 2, name: Tesla T4, pci bus id: 0000:00:07.0, compute capability: 7.5\n", "I0000 00:00:1750936029.176311 7254 gpu_device.cc:2019] Created device /job:localhost/replica:0/task:0/device:GPU:3 with 13756 MB memory: -> device: 3, name: Tesla T4, pci bus id: 0000:00:08.0, compute capability: 7.5\n" ] } ], "source": [ "writer = tf.summary.create_file_writer(\"/tmp/mylogs/eager\")\n", "\n", "with writer.as_default():\n", " for step in range(100):\n", " # other model code would go here\n", " tf.summary.scalar(\"my_metric\", 0.5, step=step)\n", " writer.flush()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "execution": { 
"iopub.execute_input": "2025-06-26T11:07:10.015902Z", "iopub.status.busy": "2025-06-26T11:07:10.015636Z", "iopub.status.idle": "2025-06-26T11:07:10.164274Z", "shell.execute_reply": "2025-06-26T11:07:10.163486Z" }, "id": "h5fk_NG7QKve" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "events.out.tfevents.1750936029.kokoro-gcp-ubuntu-prod-1050903991.7254.0.v2\r\n" ] } ], "source": [ "ls /tmp/mylogs/eager" ] }, { "cell_type": "markdown", "metadata": { "id": "FvBBeFxZVLzW" }, "source": [ "Example usage with tf.function graph execution:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "execution": { "iopub.execute_input": "2025-06-26T11:07:10.167448Z", "iopub.status.busy": "2025-06-26T11:07:10.167179Z", "iopub.status.idle": "2025-06-26T11:07:10.334568Z", "shell.execute_reply": "2025-06-26T11:07:10.333937Z" }, "id": "kovK0LEEVKjR" }, "outputs": [], "source": [ "writer = tf.summary.create_file_writer(\"/tmp/mylogs/tf_function\")\n", "\n", "@tf.function\n", "def my_func(step):\n", " with writer.as_default():\n", " # other model code would go here\n", " tf.summary.scalar(\"my_metric\", 0.5, step=step)\n", "\n", "for step in tf.range(100, dtype=tf.int64):\n", " my_func(step)\n", " writer.flush()" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "execution": { "iopub.execute_input": "2025-06-26T11:07:10.337742Z", "iopub.status.busy": "2025-06-26T11:07:10.337329Z", "iopub.status.idle": "2025-06-26T11:07:10.481793Z", "shell.execute_reply": "2025-06-26T11:07:10.480996Z" }, "id": "Qw5nHhRUSM7_" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "events.out.tfevents.1750936030.kokoro-gcp-ubuntu-prod-1050903991.7254.1.v2\r\n" ] } ], "source": [ "ls /tmp/mylogs/tf_function" ] }, { "cell_type": "markdown", "metadata": { "id": "5SY6eYitUJH_" }, "source": [ "Example usage with legacy TF 1.x graph execution:\n" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "execution": { "iopub.execute_input": 
"2025-06-26T11:07:10.485089Z", "iopub.status.busy": "2025-06-26T11:07:10.484836Z", "iopub.status.idle": "2025-06-26T11:07:10.809024Z", "shell.execute_reply": "2025-06-26T11:07:10.808270Z" }, "id": "OyQgeqZhVRNB" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "I0000 00:00:1750936030.700600 7254 gpu_device.cc:2019] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 13680 MB memory: -> device: 0, name: Tesla T4, pci bus id: 0000:00:05.0, compute capability: 7.5\n", "I0000 00:00:1750936030.702411 7254 gpu_device.cc:2019] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 13756 MB memory: -> device: 1, name: Tesla T4, pci bus id: 0000:00:06.0, compute capability: 7.5\n", "I0000 00:00:1750936030.704267 7254 gpu_device.cc:2019] Created device /job:localhost/replica:0/task:0/device:GPU:2 with 13756 MB memory: -> device: 2, name: Tesla T4, pci bus id: 0000:00:07.0, compute capability: 7.5\n", "I0000 00:00:1750936030.706041 7254 gpu_device.cc:2019] Created device /job:localhost/replica:0/task:0/device:GPU:3 with 13756 MB memory: -> device: 3, name: Tesla T4, pci bus id: 0000:00:08.0, compute capability: 7.5\n", "WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\n", "I0000 00:00:1750936030.709912 7254 mlir_graph_optimization_pass.cc:425] MLIR V1 optimization pass is not enabled\n" ] } ], "source": [ "g = tf.compat.v1.Graph()\n", "with g.as_default():\n", " step = tf.Variable(0, dtype=tf.int64)\n", " step_update = step.assign_add(1)\n", " writer = tf.summary.create_file_writer(\"/tmp/mylogs/session\")\n", " with writer.as_default():\n", " tf.summary.scalar(\"my_metric\", 0.5, step=step)\n", " all_summary_ops = tf.compat.v1.summary.all_v2_summary_ops()\n", " writer_flush = writer.flush()\n", "\n", "\n", "with tf.compat.v1.Session(graph=g) as sess:\n", " sess.run([writer.init(), step.initializer])\n", "\n", " for i in range(100):\n", " sess.run(all_summary_ops)\n", " 
sess.run(step_update)\n", " sess.run(writer_flush) " ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "execution": { "iopub.execute_input": "2025-06-26T11:07:10.812094Z", "iopub.status.busy": "2025-06-26T11:07:10.811817Z", "iopub.status.idle": "2025-06-26T11:07:10.960550Z", "shell.execute_reply": "2025-06-26T11:07:10.959748Z" }, "id": "iqKOyawnNQSH" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "events.out.tfevents.1750936030.kokoro-gcp-ubuntu-prod-1050903991.7254.2.v2\r\n" ] } ], "source": [ "ls /tmp/mylogs/session" ] }, { "cell_type": "markdown", "metadata": { "id": "xEJIh4btVVRb" }, "source": [ "## Converting your code\n", "\n", "Converting existing `tf.summary` usage to the TF 2.x API cannot be reliably automated, so the [`tf_upgrade_v2` script](https://www.tensorflow.org/guide/upgrade) just rewrites it all to `tf.compat.v1.summary` and will not enable the TF 2.x behaviors automatically." ] }, { "cell_type": "markdown", "metadata": { "id": "1972f8ff0073" }, "source": [ "### Partial Migration\n", "\n", "To make migration to TF 2.x easier for users whose model code still depends heavily on TF 1.x summary logging ops like `tf.compat.v1.summary.scalar()`, it is possible to migrate only the writer APIs first, allowing individual TF 1.x summary ops inside your model code to be fully migrated at a later point.\n", "\n", "To support this style of migration, `tf.compat.v1.summary` ops will automatically forward to their TF 2.x equivalents under the following conditions:\n", "\n", " - The outermost context is eager mode\n", " - A default TF 2.x summary writer has been set\n", " - A non-empty value for step has been set for the writer (using `tf.summary.SummaryWriter.as_default`, `tf.summary.experimental.set_step`, or alternatively `tf.compat.v1.train.create_global_step`)\n", "\n", "Note that when the TF 2.x summary implementation is invoked, the return value will be an empty bytestring tensor, to avoid duplicate summary writing. 
Additionally, the input argument forwarding is best-effort and not all arguments will be preserved (for instance, the `family` argument will be supported, whereas `collections` will be removed).\n", "\n", "An example that invokes the `tf.summary.scalar` behavior from `tf.compat.v1.summary.scalar`:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "execution": { "iopub.execute_input": "2025-06-26T11:07:10.963657Z", "iopub.status.busy": "2025-06-26T11:07:10.963389Z", "iopub.status.idle": "2025-06-26T11:07:10.972672Z", "shell.execute_reply": "2025-06-26T11:07:10.972056Z" }, "id": "6457297c0b9d" }, "outputs": [], "source": [ "# Enable eager execution.\n", "tf.compat.v1.enable_v2_behavior()\n", "\n", "# A default TF 2.x summary writer is available.\n", "writer = tf.summary.create_file_writer(\"/tmp/mylogs/enable_v2_in_v1\")\n", "# A step is set for the writer.\n", "with writer.as_default(step=0):\n", " # Below invokes `tf.summary.scalar`, and the return value is an empty bytestring.\n", " tf.compat.v1.summary.scalar('float', tf.constant(1.0), family=\"family\")" ] }, { "cell_type": "markdown", "metadata": { "id": "Pq4Fy1bSUdrZ" }, "source": [ "### Full Migration\n", "\n", "To fully migrate to TF 2.x, you'll need to adapt your code as follows:\n", "\n", "1. A default writer set via `.as_default()` must be present to use summary ops\n", "\n", " - This means executing ops eagerly or using ops in graph construction\n", " - Without a default writer, summary ops become silent no-ops\n", " - Default writers do not (yet) propagate across the `@tf.function` execution boundary - they are only detected when the function is traced - so best practice is to call `writer.as_default()` within the function body, and to ensure that the writer object continues to exist as long as the `@tf.function` is being used\n", "\n", "1. 
The \"step\" value must be passed into each op via the `step` argument\n", "\n", " - TensorBoard requires a step value to render the data as a time series\n", " - Explicit passing is necessary because the global step from TF 1.x has been removed, so each op must know the desired step variable to read\n", " - To reduce boilerplate, experimental support for registering a default step value is available as `tf.summary.experimental.set_step()`, but this is provisional functionality that may be changed without notice\n", "\n", "1. Function signatures of individual summary ops have changed\n", "\n", " - The return value is now a boolean (indicating if a summary was actually written)\n", " - The second parameter name (if used) has changed from `tensor` to `data`\n", " - The `collections` parameter has been removed; collections are TF 1.x only\n", " - The `family` parameter has been removed; just use `tf.name_scope()`\n", "\n", "1. [Only for legacy graph mode / session execution users]\n", "\n", " - First initialize the writer with `v1.Session.run(writer.init())`\n", " - Use `v1.summary.all_v2_summary_ops()` to get all TF 2.x summary ops for the current graph, e.g. to execute them via `Session.run()`\n", " - Flush the writer with `v1.Session.run(writer.flush())` and likewise for `close()`\n", "\n", "If your TF 1.x code was instead using the `tf.contrib.summary` API, it's much more similar to the TF 2.x API, so the `tf_upgrade_v2` script will automate most of the migration steps (and emit warnings or errors for any usage that cannot be fully migrated). For the most part it just rewrites the API calls to `tf.compat.v2.summary`; if you only need compatibility with TF 2.x you can drop the `compat.v2` and just reference it as `tf.summary`." 
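Putting the points above together, here is a minimal eager-mode sketch of fully migrated code: a default writer set via `as_default()`, an explicit `step` passed to each op, and the new boolean return value. The log directory path is illustrative.

```python
import tensorflow as tf  # assumes TF 2.x

# Illustrative log directory; any writable path works.
writer = tf.summary.create_file_writer("/tmp/mylogs/full_migration")

# A default writer must be active, or summary ops become silent no-ops.
with writer.as_default():
    for step in range(10):
        # `step` is now a required argument; the return value is a boolean
        # indicating whether the summary was actually written.
        written = tf.summary.scalar("my_metric", 0.5, step=step)
    writer.flush()
```

Because the loop runs inside `writer.as_default()`, `written` evaluates to `True` here; outside any default writer it would be `False`.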
] }, { "cell_type": "markdown", "metadata": { "id": "1GUZRWSkW3ZC" }, "source": [ "## Additional tips\n", "\n", "In addition to the critical areas above, some auxiliary aspects have also changed:\n", "\n", "* Conditional recording (like \"log every 100 steps\") has a new look\n", "\n", " - To control ops and associated code, wrap them in a regular `if` statement (which works in eager mode and in [`@tf.function` via autograph](https://www.tensorflow.org/alpha/guide/autograph)) or a `tf.cond`\n", " - To control just summaries, use the new `tf.summary.record_if()` context manager, and pass it the boolean condition of your choosing\n", " - These replace the TF 1.x pattern:\n", " ```\n", " if condition:\n", " writer.add_summary()\n", " ```\n" ] }, { "cell_type": "markdown", "metadata": { "id": "9VMYrKn4Uh52" }, "source": [ "* No direct writing of `tf.compat.v1.Graph` - instead use trace functions\n", "\n", " - Graph execution in TF 2.x uses `@tf.function` instead of the explicit Graph\n", " - In TF 2.x, use the new tracing-style APIs `tf.summary.trace_on()` and `tf.summary.trace_export()` to record executed function graphs\n" ] }, { "cell_type": "markdown", "metadata": { "id": "UGItA6U0UkDx" }, "source": [ "* No more global writer caching per logdir with `tf.summary.FileWriterCache`\n", "\n", " - Users should either implement their own caching/sharing of writer objects, or just use separate writers (TensorBoard support for the latter is [in progress](https://github.com/tensorflow/tensorboard/issues/1063))\n" ] }, { "cell_type": "markdown", "metadata": { "id": "d7BQJVcsUnMp" }, "source": [ "* The event file binary representation has changed\n", "\n", " - TensorBoard 1.x already supports the new format; this difference only affects users who are manually parsing summary data from event files\n", " - Summary data is now stored as tensor bytes; you can use `tf.make_ndarray(event.summary.value[0].tensor)` to convert it to numpy" ] } ], "metadata": { "colab": { 
"collapsed_sections": [], "name": "migrate.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.23" } }, "nbformat": 4, "nbformat_minor": 0 }