{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "Tce3stUlHN0L" }, "source": [ "##### Copyright 2019 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "cellView": "form", "execution": { "iopub.execute_input": "2022-12-14T22:35:46.091488Z", "iopub.status.busy": "2022-12-14T22:35:46.090871Z", "iopub.status.idle": "2022-12-14T22:35:46.094708Z", "shell.execute_reply": "2022-12-14T22:35:46.094148Z" }, "id": "tuOe1ymfHZPu" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "qFdPvlXBOdUN" }, "source": [ "# Random number generation" ] }, { "cell_type": "markdown", "metadata": { "id": "MfBg1C5NB3X0" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
View on TensorFlow.org | Run in Google Colab | View source on GitHub | Download notebook
" ] }, { "cell_type": "markdown", "metadata": { "id": "BlGY1iiph_C2" }, "source": [ "TensorFlow provides a set of pseudo-random number generators (RNG) in the `tf.random` module. This document describes how you can control the random number generators, and how these generators interact with other TensorFlow sub-systems.\n", "\n", "Note: The random numbers are not guaranteed to be consistent across TensorFlow versions. See: [Version Compatibility](https://tensorflow.google.cn/guide/versions#what_is_not_covered)\n", "\n", "TensorFlow provides two approaches for controlling the random number generation process:\n", "\n", "1. Through the explicit use of `tf.random.Generator` objects. Each such object maintains a state (in a `tf.Variable`) that will be changed after each number generation.\n", "\n", "2. Through the purely-functional stateless random functions like `tf.random.stateless_uniform`. Calling these functions with the same arguments (which include the seed) and on the same device will always produce the same results.\n", "\n", "Warning: The old RNGs from TF 1.x such as `tf.random.uniform` and `tf.random.normal` are not yet deprecated, but their use is strongly discouraged." ] }, { "cell_type": "markdown", "metadata": { "id": "zIGh9faCOp6x" }, "source": [ "## Setup" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:46.098184Z", "iopub.status.busy": "2022-12-14T22:35:46.097764Z", "iopub.status.idle": "2022-12-14T22:35:48.131013Z", "shell.execute_reply": "2022-12-14T22:35:48.130228Z" }, "id": "ECDrttf0s8Nu" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2022-12-14 22:35:47.032543: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory\n", "2022-12-14 22:35:47.032638: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory\n", "2022-12-14 22:35:47.032649: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. 
If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\n" ] } ], "source": [ "import tensorflow as tf\n", "\n", "# Creates some virtual devices (cpu:0, cpu:1, etc.) for using distribution strategy\n", "physical_devices = tf.config.list_physical_devices(\"CPU\")\n", "tf.config.experimental.set_virtual_device_configuration(\n", " physical_devices[0], [\n", " tf.config.experimental.VirtualDeviceConfiguration(),\n", " tf.config.experimental.VirtualDeviceConfiguration(),\n", " tf.config.experimental.VirtualDeviceConfiguration()\n", " ])" ] }, { "cell_type": "markdown", "metadata": { "id": "eqMlrUsVu2Ai" }, "source": [ "## The `tf.random.Generator` class\n", "\n", "The `tf.random.Generator` class is used in cases where you want each RNG call to produce different results. It maintains an internal state (managed by a `tf.Variable` object) which will be updated every time random numbers are generated. Because the state is managed by `tf.Variable`, it enjoys all facilities provided by `tf.Variable`, such as easy checkpointing, automatic control dependencies, and thread safety.\n", "\n", "You can get a generator by manually creating an object of the `tf.random.Generator` class, or call `tf.random.get_global_generator()` to get the default global generator:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:48.135532Z", "iopub.status.busy": "2022-12-14T22:35:48.134710Z", "iopub.status.idle": "2022-12-14T22:35:51.391277Z", "shell.execute_reply": "2022-12-14T22:35:51.390520Z" }, "id": "7yU1E3JvxOQD" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(\n", "[[ 0.43842277 -0.53439844 -0.07710262]\n", " [ 1.5658046 -0.1012345 -0.2744976 ]], shape=(2, 3), dtype=float32)\n", "tf.Tensor(\n", "[[-0.69198644 1.0939602 0.46467507]\n", " [ 0.72095203 0.6924698 -0.5659851 ]], shape=(2, 3), dtype=float32)\n" ] } ], "source": [ "g1 = tf.random.Generator.from_seed(1)\n", "print(g1.normal(shape=[2, 3]))\n", "g2 = tf.random.get_global_generator()\n", "print(g2.normal(shape=[2, 3]))" ] }, { "cell_type": "markdown", "metadata": { "id": "QmRCeAvTxulW" }, "source": [ "There are multiple ways to create a generator object. The easiest is 
`Generator.from_seed` (shown above), which creates a generator from a seed. A seed is any non-negative integer, and `from_seed` also takes an optional argument `alg`, which is the RNG algorithm that this generator will use." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.394929Z", "iopub.status.busy": "2022-12-14T22:35:51.394420Z", "iopub.status.idle": "2022-12-14T22:35:51.402577Z", "shell.execute_reply": "2022-12-14T22:35:51.401976Z" }, "id": "kISbOE4Xfjhv" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(\n", "[[ 0.43842277 -0.53439844 -0.07710262]\n", " [ 1.5658046 -0.1012345 -0.2744976 ]], shape=(2, 3), dtype=float32)\n" ] } ], "source": [ "g1 = tf.random.Generator.from_seed(1, alg='philox')\n", "print(g1.normal(shape=[2, 3]))" ] }, { "cell_type": "markdown", "metadata": { "id": "_mCRaN7dfd8j" }, "source": [ "See the *Algorithms* section below for more information about it.\n", "\n", "Another way to create a generator is with `Generator.from_non_deterministic_state`. A generator created this way will start from a non-deterministic state, depending on e.g. the time and the OS." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.405860Z", "iopub.status.busy": "2022-12-14T22:35:51.405395Z", "iopub.status.idle": "2022-12-14T22:35:51.412586Z", "shell.execute_reply": "2022-12-14T22:35:51.411983Z" }, "id": "gxPLCLsz00qY" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(\n", "[[-0.89688414 -1.27604 0.69840294]\n", " [-0.5044483 -0.09191426 -0.11111203]], shape=(2, 3), dtype=float32)\n" ] } ], "source": [ "g = tf.random.Generator.from_non_deterministic_state()\n", "print(g.normal(shape=[2, 3]))" ] }, { "cell_type": "markdown", "metadata": { "id": "zSAp2BMj1JZ6" }, "source": [ "There are yet other ways to create generators, such as from explicit states, which are not covered by this guide.\n", "\n", "When using `tf.random.get_global_generator` to get the global generator, you need to be careful about device placement. The global generator is created (from a non-deterministic state) the first time `tf.random.get_global_generator` is called, and it is placed on the default device at that call. So, for example, if the first site where you call `tf.random.get_global_generator` is within a `tf.device(\"gpu\")` scope, the global generator will be placed on the GPU, and using the global generator later on from the CPU will incur a GPU-to-CPU copy.\n", "\n", "There is also a function 
`tf.random.set_global_generator` for replacing the global generator with another generator object. This function should be used with caution though, because the old global generator may have been captured by a `tf.function` (as a weak reference), and replacing it will cause it to be garbage collected, breaking the `tf.function`. A better way to reset the global generator is to use one of the \"reset\" functions such as `Generator.reset_from_seed`, which won't create new generator objects." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.416028Z", "iopub.status.busy": "2022-12-14T22:35:51.415502Z", "iopub.status.idle": "2022-12-14T22:35:51.433753Z", "shell.execute_reply": "2022-12-14T22:35:51.433163Z" }, "id": "324S5bpd9HRg" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(0.43842277, shape=(), dtype=float32)\n", "tf.Tensor(1.6272374, shape=(), dtype=float32)\n", "tf.Tensor(0.43842277, shape=(), dtype=float32)\n" ] } ], "source": [ "g = tf.random.Generator.from_seed(1)\n", "print(g.normal([]))\n", "print(g.normal([]))\n", "g.reset_from_seed(1)\n", "print(g.normal([]))" ] }, { "cell_type": "markdown", "metadata": { "id": "z9H0wuvp9VwH" }, "source": [ "### Creating independent random-number streams\n", "\n", "Many applications need multiple independent random-number streams: independent in the sense that they won't overlap and won't have any statistically detectable correlations. This is achieved by using `Generator.split` to create multiple generators that are guaranteed to be independent of each other (i.e. generating independent streams)." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.437034Z", "iopub.status.busy": "2022-12-14T22:35:51.436574Z", "iopub.status.idle": "2022-12-14T22:35:51.459932Z", "shell.execute_reply": "2022-12-14T22:35:51.459230Z" }, "id": "Vg5_KN18OZjo" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(0.43842277, shape=(), dtype=float32)\n", "tf.Tensor(2.536413, shape=(), dtype=float32)\n", "tf.Tensor(0.33186463, shape=(), dtype=float32)\n", "tf.Tensor(-0.07144657, shape=(), dtype=float32)\n", 
"tf.Tensor(-0.79253083, shape=(), dtype=float32)\n" ] } ], "source": [ "g = tf.random.Generator.from_seed(1)\n", "print(g.normal([]))\n", "new_gs = g.split(3)\n", "for new_g in new_gs:\n", " print(new_g.normal([]))\n", "print(g.normal([]))" ] }, { "cell_type": "markdown", "metadata": { "id": "dqOaGVzKOsRJ" }, "source": [ "Similar to an RNG method such as `normal`, `split` will change the state of the generator on which it is called (`g` in the above example). In addition to being independent of each other, the new generators (`new_gs`) are also guaranteed to be independent of the old one (`g`).\n", "\n", "Spawning new generators is also useful when you want to make sure the generator you use is on the same device as other computations, to avoid the overhead of cross-device copies. For example: " ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.463388Z", "iopub.status.busy": "2022-12-14T22:35:51.462816Z", "iopub.status.idle": "2022-12-14T22:35:51.483953Z", "shell.execute_reply": "2022-12-14T22:35:51.483392Z" }, "id": "5jSnJBlUQzF3" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(-0.9215919, shape=(), dtype=float32)\n" ] } ], "source": [ "with tf.device(\"cpu\"): # change \"cpu\" to the device you want\n", " g = tf.random.get_global_generator().split(1)[0] \n", " print(g.normal([])) # use of g won't cause cross-device copy, unlike the global generator" ] }, { "cell_type": "markdown", "metadata": { "id": "sCxbccYMRdd4" }, "source": [ "Note: In theory, you can use constructors such as `from_seed` (instead of `split`) here to obtain a new generator, but by doing so you lose the guarantee that the new generator is independent of the global generator. You also run the risk of accidentally creating two generators with the same seed, or with seeds that lead to overlapping random-number streams.\n", "\n", "You can do splitting recursively, calling `split` on split generators. There are no limits (barring integer overflow) on the depth of recursion." ] }, { "cell_type": "markdown", "metadata": { "id": "8JUgnQM_O0lg" }, "source": [ "### Interaction with `tf.function`\n", "\n", "`tf.random.Generator` obeys the same rules as `tf.Variable` when used with `tf.function`. This includes three aspects." ] }, { "cell_type": "markdown", "metadata": { "id": "jnSjhY6WM-J8" }, "source": [ "#### Creating generators outside `tf.function`\n", "\n", "`tf.function` can use a generator created outside of it." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.487316Z", "iopub.status.busy": "2022-12-14T22:35:51.486815Z", "iopub.status.idle": 
"2022-12-14T22:35:51.541617Z", "shell.execute_reply": "2022-12-14T22:35:51.540976Z" }, "id": "a5EEy0E2UHMw" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(0.43842277, shape=(), dtype=float32)\n" ] } ], "source": [ "g = tf.random.Generator.from_seed(1)\n", "@tf.function\n", "def foo():\n", " return g.normal([])\n", "print(foo())" ] }, { "cell_type": "markdown", "metadata": { "id": "L_8kC7kbO5uu" }, "source": [ "The user needs to make sure that the generator object is still alive (not garbage-collected) when the function is called." ] }, { "cell_type": "markdown", "metadata": { "id": "PwIrBv_zUYwI" }, "source": [ "#### Creating generators inside `tf.function`\n", "\n", "Creating a generator inside a `tf.function` can only happen during the first run of the function. " ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.545139Z", "iopub.status.busy": "2022-12-14T22:35:51.544636Z", "iopub.status.idle": "2022-12-14T22:35:51.654748Z", "shell.execute_reply": "2022-12-14T22:35:51.654096Z" }, "id": "3JzpUvqJU4MW" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(0.43842277, shape=(), dtype=float32)\n", "tf.Tensor(1.6272374, shape=(), dtype=float32)\n" ] } ], "source": [ "g = None\n", "@tf.function\n", "def foo():\n", " global g\n", " if g is None:\n", " g = tf.random.Generator.from_seed(1)\n", " return g.normal([])\n", "print(foo())\n", "print(foo())" ] }, { "cell_type": "markdown", "metadata": { "id": "UaTVnOhHVM9a" }, "source": [ "#### Passing generators as arguments to `tf.function`\n", "\n", "When used as an argument to a `tf.function`, different generator objects will cause retracing of the `tf.function`." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.657997Z", "iopub.status.busy": "2022-12-14T22:35:51.657501Z", "iopub.status.idle": "2022-12-14T22:35:51.733402Z", "shell.execute_reply": "2022-12-14T22:35:51.732826Z" }, "id": "DeR9kvt0V-ad" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2\n" ] } ], "source": [ "num_traces = 0\n", "@tf.function\n", "def foo(g):\n", " global 
num_traces\n", " num_traces += 1\n", " return g.normal([])\n", "foo(tf.random.Generator.from_seed(1))\n", "foo(tf.random.Generator.from_seed(2))\n", "print(num_traces)" ] }, { "cell_type": "markdown", "metadata": { "id": "E0RxllJzkGfo" }, "source": [ "Note that this retracing behavior is consistent with `tf.Variable`:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.736830Z", "iopub.status.busy": "2022-12-14T22:35:51.736369Z", "iopub.status.idle": "2022-12-14T22:35:51.772649Z", "shell.execute_reply": "2022-12-14T22:35:51.772041Z" }, "id": "oWD2f_qxkSe7" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1\n" ] } ], "source": [ "num_traces = 0\n", "@tf.function\n", "def foo(v):\n", " global num_traces\n", " num_traces += 1\n", " return v.read_value()\n", "foo(tf.Variable(1))\n", "foo(tf.Variable(2))\n", "print(num_traces)" ] }, { "cell_type": "markdown", "metadata": { "id": "fxcS6IY8WZuh" }, "source": [ "### Interaction with distribution strategies\n", "\n", "There are two ways in which `Generator` interacts with distribution strategies." ] }, { "cell_type": "markdown", "metadata": { "id": "GyZv9QJkZfkQ" }, "source": [ "#### Creating generators outside distribution strategies\n", "\n", "If a generator is created outside strategy scopes, all replicas' access to the generator will be serialized, and hence the replicas will get different random numbers." ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.776162Z", "iopub.status.busy": "2022-12-14T22:35:51.775517Z", "iopub.status.idle": "2022-12-14T22:35:51.805349Z", "shell.execute_reply": "2022-12-14T22:35:51.804739Z" }, "id": "HX_beT9SZWMp" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0', '/job:localhost/replica:0/task:0/device:CPU:1')\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:Using 
MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(0.43842274, shape=(), dtype=float32)\n", "tf.Tensor(1.6272374, shape=(), dtype=float32)\n" ] } ], "source": [ "g = tf.random.Generator.from_seed(1)\n", "strat = tf.distribute.MirroredStrategy(devices=[\"cpu:0\", \"cpu:1\"])\n", "with strat.scope():\n", " def f():\n", " print(g.normal([]))\n", " results = strat.run(f)" ] }, { "cell_type": "markdown", "metadata": { "id": "ydYQbUqLPAgH" }, "source": [ "Note that this usage may have performance issues because the generator's device is different from the replicas." ] }, { "cell_type": "markdown", "metadata": { "id": "Yal4LbBKbAeN" }, "source": [ "#### Creating generators inside distribution strategies\n", "\n", "If a generator is created inside a strategy scope, each replica will get a different and independent stream of random numbers." ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.808720Z", "iopub.status.busy": "2022-12-14T22:35:51.808118Z", "iopub.status.idle": "2022-12-14T22:35:51.838154Z", "shell.execute_reply": "2022-12-14T22:35:51.837552Z" }, "id": "5SeUu7IFmTyQ" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0', '/job:localhost/replica:0/task:0/device:CPU:1')\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead currently. 
We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "PerReplica:{\n", " 0: tf.Tensor(-0.87930447, shape=(), dtype=float32),\n", " 1: tf.Tensor(0.020661574, shape=(), dtype=float32)\n", "}\n", "WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "PerReplica:{\n", " 0: tf.Tensor(-1.5822568, shape=(), dtype=float32),\n", " 1: tf.Tensor(0.77539235, shape=(), dtype=float32)\n", "}\n" ] } ], "source": [ "strat = tf.distribute.MirroredStrategy(devices=[\"cpu:0\", \"cpu:1\"])\n", "with strat.scope():\n", " g = tf.random.Generator.from_seed(1)\n", " print(strat.run(lambda: g.normal([])))\n", " print(strat.run(lambda: g.normal([])))" ] }, { "cell_type": "markdown", "metadata": { "id": "PFBlrOudfu9u" }, "source": [ "Note: `tf.random.Generator` currently doesn't provide an option to let different replicas get identical (instead of different) streams (which is technically not hard). If you have a use case for this feature, please let the TensorFlow developers know.\n", "\n", "If the generator is seeded (e.g. created by `Generator.from_seed`), the random numbers are determined by the seed, even though different replicas get different and uncorrelated numbers. One can think of a random number generated on a replica as a hash of the replica ID and a \"primary\" random number that is common to all replicas. Hence, the whole system is still deterministic.\n", "\n", "`tf.random.Generator` can also be created inside `Strategy.run`:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.841334Z", "iopub.status.busy": "2022-12-14T22:35:51.840829Z", "iopub.status.idle": "2022-12-14T22:35:51.881407Z", "shell.execute_reply": "2022-12-14T22:35:51.880781Z" }, "id": "nlQXi5Msb1Wu" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.\n" ] }, { "name": "stdout", 
"output_type": "stream", "text": [ "INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0', '/job:localhost/replica:0/task:0/device:CPU:1')\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "PerReplica:{\n", " 0: tf.Tensor([-0.87930447 -1.5822568 ], shape=(2,), dtype=float32),\n", " 1: tf.Tensor([0.02066157 0.77539235], shape=(2,), dtype=float32)\n", "}\n", "WARNING:tensorflow:Using MirroredStrategy eagerly has significant overhead currently. We will be working on improving this in the future, but for now please wrap `call_for_each_replica` or `experimental_run` or `run` inside a tf.function to get the best performance.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "PerReplica:{\n", " 0: tf.Tensor([-0.87930447 -1.5822568 ], shape=(2,), dtype=float32),\n", " 1: tf.Tensor([0.02066157 0.77539235], shape=(2,), dtype=float32)\n", "}\n" ] } ], "source": [ "strat = tf.distribute.MirroredStrategy(devices=[\"cpu:0\", \"cpu:1\"])\n", "with strat.scope():\n", " def f():\n", " g = tf.random.Generator.from_seed(1)\n", " a = g.normal([])\n", " b = g.normal([])\n", " return tf.stack([a, b])\n", " print(strat.run(f))\n", " print(strat.run(f))" ] }, { "cell_type": "markdown", "metadata": { "id": "4Sv-aiaOmrOr" }, "source": [ "We no longer recommend passing `tf.random.Generator` as an argument to `Strategy.run`, because `Strategy.run` generally expects the arguments to be tensors, not generators." ] }, { "cell_type": "markdown", "metadata": { "id": "8RbM4vabtiWM" }, "source": [ "### Saving generators\n", "\n", "Generally, for saving or serializing you can handle a `tf.random.Generator` the same way you would handle a `tf.Variable` or a `tf.Module` (or its subclasses). There are two mechanisms for serialization in TF: [Checkpoint](https://tensorflow.google.cn/guide/checkpoint) and 
[SavedModel](https://tensorflow.google.cn/guide/saved_model)." ] }, { "cell_type": "markdown", "metadata": { "id": "PDtySQDotWQc" }, "source": [ "#### Checkpoint\n", "\n", "Generators can be freely saved and restored using `tf.train.Checkpoint`. The random-number stream from the restoring point will be the same as that from the saving point. " ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.884667Z", "iopub.status.busy": "2022-12-14T22:35:51.884132Z", "iopub.status.idle": "2022-12-14T22:35:51.891969Z", "shell.execute_reply": "2022-12-14T22:35:51.891392Z" }, "id": "uB_bDSbzpbne" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(0.43842277, shape=(), dtype=float32)\n" ] } ], "source": [ "filename = \"./checkpoint\"\n", "g = tf.random.Generator.from_seed(1)\n", "cp = tf.train.Checkpoint(generator=g)\n", "print(g.normal([]))" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.894834Z", "iopub.status.busy": "2022-12-14T22:35:51.894302Z", "iopub.status.idle": "2022-12-14T22:35:51.915444Z", "shell.execute_reply": "2022-12-14T22:35:51.914868Z" }, "id": "bKKtRWeIkIjX" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "RNG stream from saving point:\n", "tf.Tensor(1.6272374, shape=(), dtype=float32)\n", "tf.Tensor(1.6307176, shape=(), dtype=float32)\n" ] } ], "source": [ "cp.write(filename)\n", "print(\"RNG stream from saving point:\")\n", "print(g.normal([]))\n", "print(g.normal([]))" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.918329Z", "iopub.status.busy": "2022-12-14T22:35:51.917767Z", "iopub.status.idle": "2022-12-14T22:35:51.930858Z", "shell.execute_reply": "2022-12-14T22:35:51.930220Z" }, "id": "-cIHcHwRkQp3" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "RNG stream from restoring point:\n", "tf.Tensor(1.6272374, shape=(), dtype=float32)\n", "tf.Tensor(1.6307176, shape=(), 
dtype=float32)\n" ] } ], "source": [ "cp.restore(filename)\n", "print(\"RNG stream from restoring point:\")\n", "print(g.normal([]))\n", "print(g.normal([]))" ] }, { "cell_type": "markdown", "metadata": { "id": "A-OeUUQEJ37X" }, "source": [ "You can also save and restore within a distribution strategy:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.934104Z", "iopub.status.busy": "2022-12-14T22:35:51.933649Z", "iopub.status.idle": "2022-12-14T22:35:51.947849Z", "shell.execute_reply": "2022-12-14T22:35:51.947216Z" }, "id": "3aI6TQ2lq28w" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0', '/job:localhost/replica:0/task:0/device:CPU:1')\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "PerReplica:{\n", " 0: tf.Tensor(-0.87930447, shape=(), dtype=float32),\n", " 1: tf.Tensor(0.020661574, shape=(), dtype=float32)\n", "}\n" ] } ], "source": [ "filename = \"./checkpoint\"\n", "strat = tf.distribute.MirroredStrategy(devices=[\"cpu:0\", \"cpu:1\"])\n", "with strat.scope():\n", " g = tf.random.Generator.from_seed(1)\n", " cp = tf.train.Checkpoint(my_generator=g)\n", " print(strat.run(lambda: g.normal([])))" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.951053Z", "iopub.status.busy": "2022-12-14T22:35:51.950450Z", "iopub.status.idle": "2022-12-14T22:35:51.968308Z", "shell.execute_reply": "2022-12-14T22:35:51.967681Z" }, "id": "kTZcdaMwkvJI" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "RNG stream from saving point:\n", "PerReplica:{\n", " 0: tf.Tensor(-1.5822568, shape=(), dtype=float32),\n", " 1: tf.Tensor(0.77539235, shape=(), dtype=float32)\n", 
"}\n", "PerReplica:{\n", " 0: tf.Tensor(-0.5039703, shape=(), dtype=float32),\n", " 1: tf.Tensor(0.1251838, shape=(), dtype=float32)\n", "}\n" ] } ], "source": [ "with strat.scope():\n", " cp.write(filename)\n", " print(\"RNG stream from saving point:\")\n", " print(strat.run(lambda: g.normal([])))\n", " print(strat.run(lambda: g.normal([])))" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.971414Z", "iopub.status.busy": "2022-12-14T22:35:51.970770Z", "iopub.status.idle": "2022-12-14T22:35:51.991510Z", "shell.execute_reply": "2022-12-14T22:35:51.990854Z" }, "id": "nizFA5IrkzN1" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "RNG stream from restoring point:\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "PerReplica:{\n", " 0: tf.Tensor(-1.5822568, shape=(), dtype=float32),\n", " 1: tf.Tensor(0.77539235, shape=(), dtype=float32)\n", "}\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "PerReplica:{\n", " 0: tf.Tensor(-0.5039703, shape=(), dtype=float32),\n", " 1: tf.Tensor(0.1251838, shape=(), dtype=float32)\n", "}\n" ] } ], "source": [ "with strat.scope():\n", " cp.restore(filename)\n", " print(\"RNG stream from restoring point:\")\n", " print(strat.run(lambda: g.normal([])))\n", " print(strat.run(lambda: g.normal([])))" ] }, { "cell_type": "markdown", "metadata": { "id": "Z2rsPfp9J6JA" }, "source": [ "You should make sure that the replicas don't diverge in their RNG call history (e.g. one replica makes one RNG call while another makes two RNG calls) before saving. Otherwise, their internal RNG states will diverge and `tf.train.Checkpoint` (which only saves the first replica's state) won't properly restore all the replicas.\n", "\n", "You can also restore a saved checkpoint to a different distribution strategy with a different number of replicas. Because a `tf.random.Generator` object created in a strategy can only be used in the same strategy, to restore to a different strategy you have to create a new `tf.random.Generator` in the target strategy and a new `tf.train.Checkpoint` for it, as shown in this example:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:51.994682Z", "iopub.status.busy": "2022-12-14T22:35:51.994072Z", "iopub.status.idle": 
"2022-12-14T22:35:52.008240Z", "shell.execute_reply": "2022-12-14T22:35:52.007604Z" }, "id": "zgoFRf59-IvW" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0', '/job:localhost/replica:0/task:0/device:CPU:1')\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "PerReplica:{\n", " 0: tf.Tensor(-0.87930447, shape=(), dtype=float32),\n", " 1: tf.Tensor(0.020661574, shape=(), dtype=float32)\n", "}\n" ] } ], "source": [ "filename = \"./checkpoint\"\n", "strat1 = tf.distribute.MirroredStrategy(devices=[\"cpu:0\", \"cpu:1\"])\n", "with strat1.scope():\n", " g1 = tf.random.Generator.from_seed(1)\n", " cp1 = tf.train.Checkpoint(my_generator=g1)\n", " print(strat1.run(lambda: g1.normal([])))" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:52.011158Z", "iopub.status.busy": "2022-12-14T22:35:52.010602Z", "iopub.status.idle": "2022-12-14T22:35:52.027635Z", "shell.execute_reply": "2022-12-14T22:35:52.026987Z" }, "id": "Lu79ETxMlDpO" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "RNG stream from saving point:\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "PerReplica:{\n", " 0: tf.Tensor(-1.5822568, shape=(), dtype=float32),\n", " 1: tf.Tensor(0.77539235, shape=(), dtype=float32)\n", "}\n", "PerReplica:{\n", " 0: tf.Tensor(-0.5039703, shape=(), dtype=float32),\n", " 1: tf.Tensor(0.1251838, shape=(), dtype=float32)\n", "}\n" ] } ], "source": [ "with strat1.scope():\n", " cp1.write(filename)\n", " print(\"RNG stream from saving point:\")\n", " print(strat1.run(lambda: g1.normal([])))\n", " print(strat1.run(lambda: g1.normal([])))" ] }, { "cell_type": "code", "execution_count": 
24, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:52.030647Z", "iopub.status.busy": "2022-12-14T22:35:52.030148Z", "iopub.status.idle": "2022-12-14T22:35:52.079426Z", "shell.execute_reply": "2022-12-14T22:35:52.078821Z" }, "id": "VYoRFUjklKOk" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0', '/job:localhost/replica:0/task:0/device:CPU:1', '/job:localhost/replica:0/task:0/device:CPU:2')\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "RNG stream from restoring point:\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "PerReplica:{\n", " 0: tf.Tensor(-1.5822568, shape=(), dtype=float32),\n", " 1: tf.Tensor(0.77539235, shape=(), dtype=float32),\n", " 2: tf.Tensor(0.6851049, shape=(), dtype=float32)\n", "}\n", "PerReplica:{\n", " 0: tf.Tensor(-0.5039703, shape=(), dtype=float32),\n", " 1: tf.Tensor(0.1251838, shape=(), dtype=float32),\n", " 2: tf.Tensor(-0.58519536, shape=(), dtype=float32)\n", "}\n" ] } ], "source": [ "strat2 = tf.distribute.MirroredStrategy(devices=[\"cpu:0\", \"cpu:1\", \"cpu:2\"])\n", "with strat2.scope():\n", " g2 = tf.random.Generator.from_seed(1)\n", " cp2 = tf.train.Checkpoint(my_generator=g2)\n", " cp2.restore(filename)\n", " print(\"RNG stream from restoring point:\")\n", " print(strat2.run(lambda: g2.normal([])))\n", " print(strat2.run(lambda: g2.normal([])))" ] }, { "cell_type": "markdown", "metadata": { "id": "kMltUKbANqgl" }, "source": [ "Although `g1` and `cp1` are different objects from `g2` and `cp2`, they are linked via the common checkpoint file `filename` and object name `my_generator`. Overlapping replicas between strategies (e.g. `cpu:0` and `cpu:1` above) will have their RNG streams properly restored, like in the previous examples. This guarantee doesn't cover the case when a generator is saved in a strategy scope and restored outside of any strategy scope, or vice versa, because a device outside strategies is treated as different from any replica in a strategy." ] }, { "cell_type": "markdown", "metadata": { "id": 
"w9dqrp1LnTaJ" }, "source": [ "#### SavedModel\n", "\n", "`tf.random.Generator` can also be saved to a SavedModel. The generator can be created within a strategy scope. The saving can also happen within a strategy scope. " ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:52.083261Z", "iopub.status.busy": "2022-12-14T22:35:52.082788Z", "iopub.status.idle": "2022-12-14T22:35:52.201857Z", "shell.execute_reply": "2022-12-14T22:35:52.201245Z" }, "id": "0AKO5SnUtyqx" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:CPU:0', '/job:localhost/replica:0/task:0/device:CPU:1')\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "PerReplica:{\n", " 0: tf.Tensor(-1.4154755, shape=(), dtype=float32),\n", " 1: tf.Tensor(-0.11388441, shape=(), dtype=float32)\n", "}\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "state: tf.Tensor([256 0 0], shape=(3,), dtype=int64)\n" ] } ], "source": [ "filename = \"./saved_model\"\n", "\n", "class MyModule(tf.Module):\n", "\n", " def __init__(self):\n", " super(MyModule, self).__init__()\n", " self.g = tf.random.Generator.from_seed(0)\n", "\n", " @tf.function\n", " def __call__(self):\n", " return self.g.normal([])\n", "\n", " @tf.function\n", " def state(self):\n", " return self.g.state\n", "\n", "strat = tf.distribute.MirroredStrategy(devices=[\"cpu:0\", \"cpu:1\"])\n", "with strat.scope():\n", " m = MyModule()\n", " print(strat.run(m))\n", " print(\"state:\", m.state())" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:52.205275Z", "iopub.status.busy": "2022-12-14T22:35:52.204672Z", "iopub.status.idle": "2022-12-14T22:35:52.281151Z", "shell.execute_reply": "2022-12-14T22:35:52.280461Z" }, 
"id": "jg2148hulfLB" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Assets written to: ./saved_model/assets\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "RNG stream from saving point:\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "PerReplica:{\n", " 0: tf.Tensor(-0.68758255, shape=(), dtype=float32),\n", " 1: tf.Tensor(0.8084062, shape=(), dtype=float32)\n", "}\n", "state: tf.Tensor([512 0 0], shape=(3,), dtype=int64)\n", "PerReplica:{\n", " 0: tf.Tensor(-0.27342677, shape=(), dtype=float32),\n", " 1: tf.Tensor(-0.53093255, shape=(), dtype=float32)\n", "}\n", "state: tf.Tensor([768 0 0], shape=(3,), dtype=int64)\n" ] } ], "source": [ "with strat.scope():\n", " tf.saved_model.save(m, filename)\n", " print(\"RNG stream from saving point:\")\n", " print(strat.run(m))\n", " print(\"state:\", m.state())\n", " print(strat.run(m))\n", " print(\"state:\", m.state())" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:52.284375Z", "iopub.status.busy": "2022-12-14T22:35:52.283891Z", "iopub.status.idle": "2022-12-14T22:35:52.338899Z", "shell.execute_reply": "2022-12-14T22:35:52.338268Z" }, "id": "93AgVyzOllG7" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "RNG stream from loading point:\n", "state: tf.Tensor([256 0 0], shape=(3,), dtype=int64)\n", "tf.Tensor(-1.0359411, shape=(), dtype=float32)\n", "state: tf.Tensor([512 0 0], shape=(3,), dtype=int64)\n", "tf.Tensor(-0.06425078, shape=(), dtype=float32)\n", "state: tf.Tensor([768 0 0], shape=(3,), dtype=int64)\n" ] } ], "source": [ "imported = tf.saved_model.load(filename)\n", "print(\"RNG stream from loading point:\")\n", "print(\"state:\", imported.state())\n", "print(imported())\n", "print(\"state:\", imported.state())\n", "print(imported())\n", "print(\"state:\", imported.state())" ] }, { "cell_type": "markdown", "metadata": { "id": "sbb23j3pZNNq" }, 
"source": [ "Loading a SavedModel that contains `tf.random.Generator` into a distribution strategy is not recommended, because the replicas will all generate the same random-number stream (this is because the replica ID is frozen in the SavedModel's graph).\n", "\n", "Loading a distributed `tf.random.Generator` (a generator created within a distribution strategy) into a non-strategy environment, like the above example, also has a caveat. The RNG state will be properly restored, but the random numbers generated will be different from those of the original generator in its strategy (again because a device outside strategies is treated as different from any replica in a strategy)." ] }, { "cell_type": "markdown", "metadata": { "id": "73an1POpsi6V" }, "source": [ "## Stateless RNGs\n", "\n", "Usage of the stateless RNGs is simple, since they are pure functions: no state or side effect is involved." ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:35:52.342519Z", "iopub.status.busy": "2022-12-14T22:35:52.341977Z", "iopub.status.idle": "2022-12-14T22:35:52.349856Z", "shell.execute_reply": "2022-12-14T22:35:52.349299Z" }, "id": "0-aOOA3gasn_" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "tf.Tensor(\n", "[[ 0.5441101 0.20738031 0.07356433]\n", " [ 0.04643455 -1.30159 -0.95385665]], shape=(2, 3), dtype=float32)\n", "tf.Tensor(\n", "[[ 0.5441101 0.20738031 0.07356433]\n", " [ 0.04643455 -1.30159 -0.95385665]], shape=(2, 3), dtype=float32)\n" ] } ], "source": [ "print(tf.random.stateless_normal(shape=[2, 3], seed=[1, 2]))\n", "print(tf.random.stateless_normal(shape=[2, 3], seed=[1, 2]))" ] }, { "cell_type": "markdown", "metadata": { "id": "2O_D-RAFNH2Q" }, "source": [ "Every stateless RNG requires a `seed` argument, which must be an integer Tensor of shape `[2]`. The results of the op are fully determined by this seed.\n", "\n", "The RNG algorithm used by stateless RNGs is device-dependent, meaning the same op running on a different device may produce different outputs." ] }, { "cell_type": "markdown", "metadata": { "id": "4BvGkPnaOUPF" }, "source": [ "## Algorithms" ] }, { "cell_type": "markdown", "metadata": { "id": "58-8kvR4pRwO" }, "source": [ "### General\n", "\n", "Both the `tf.random.Generator` class and the `stateless` functions support the Philox algorithm (written as `\"philox\"` or `tf.random.Algorithm.PHILOX`) on all devices.\n", "\n", "Different devices will generate the same integer numbers if they use the same algorithm and start from the same state. They will also generate \"almost the same\" floating-point numbers, though there may be small numerical discrepancies caused by the different ways the devices carry out the floating-point computation (e.g. reduction order)." ] }, { "cell_type": "markdown", "metadata": { "id": "WETA04F1OYPL" }, "source": [ "### XLA devices\n", "\n", "On XLA-driven devices (such as TPU, and also CPU/GPU when XLA is enabled), the ThreeFry algorithm (written as `\"threefry\"` or `tf.random.Algorithm.THREEFRY`) is also supported. Compared to Philox, this algorithm is fast on TPU but slow on CPU/GPU." ] }, { "cell_type": "markdown", "metadata": { "id": "c04JkebCPTPu" }, "source": [ "See the paper ['Parallel Random Numbers: As Easy as 1, 2, 3'](https://www.thesalmons.org/john/random123/papers/random123sc11.pdf) for more details about these algorithms." ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "random_numbers.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.16" } }, "nbformat": 4, "nbformat_minor": 0 }