{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "Ma19Ks2CTDbZ" }, "source": [ "##### Copyright 2018 The TF-Agents Authors." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "cellView": "form", "execution": { "iopub.execute_input": "2021-02-12T21:59:51.943476Z", "iopub.status.busy": "2021-02-12T21:59:51.942701Z", "iopub.status.idle": "2021-02-12T21:59:51.945525Z", "shell.execute_reply": "2021-02-12T21:59:51.944984Z" }, "id": "nQnmcm0oI1Q-" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "XljsiF6lYkuV" }, "source": [ "# 環境\n", "\n", "\n", " \n", " \n", " \n", " \n", "
TensorFlow.orgで表示 Google Colabで実行GitHub でソースを表示{ノートブックをダウンロード/a0}
" ] }, { "cell_type": "markdown", "metadata": { "id": "9h3B-YBHopJI" }, "source": [ "## はじめに" ] }, { "cell_type": "markdown", "metadata": { "id": "n9c6vCPGovOM" }, "source": [ "強化学習 (RL) の目標は、環境と対話することにより学習するエージェントを設計することです。標準的な RL のセットアップでは、エージェントは各タイムステップで観測を受け取り、行動を選択します。行動は環境に適用され、環境は報酬と新しい観察を返します。 エージェントは、報酬の合計 (リターン) を最大化する行動を選択するポリシーをトレーニングします。\n", "\n", "TF-Agent では、環境は Python または TensorFlow で実装できます。通常、Python 環境はより分かりやすく、実装やデバッグが簡単ですが、TensorFlow 環境はより効率的で自然な並列化が可能です。最も一般的なワークフローは、Python で環境を実装し、ラッパーを使用して自動的に TensorFlow に変換することです。\n", "\n", "最初に Python 環境を見てみましょう。TensorFlow 環境の API もよく似ています。" ] }, { "cell_type": "markdown", "metadata": { "id": "4_16bQF0anmE" }, "source": [ "## セットアップ\n" ] }, { "cell_type": "markdown", "metadata": { "id": "Qax00bg2a4Jj" }, "source": [ "TF-Agent または gym をまだインストールしていない場合は、以下を実行します。" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "execution": { "iopub.execute_input": "2021-02-12T21:59:51.955647Z", "iopub.status.busy": "2021-02-12T21:59:51.954856Z", "iopub.status.idle": "2021-02-12T22:00:04.521047Z", "shell.execute_reply": "2021-02-12T22:00:04.521498Z" }, "id": "KKU2iY_7at8Y" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\n", "tf-agents 0.7.1 requires gym>=0.17.0, but you have gym 0.10.11 which is incompatible.\u001b[0m\r\n" ] } ], "source": [ "!pip install -q tf-agents\n", "!pip install -q 'gym==0.10.11'" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "execution": { "iopub.execute_input": "2021-02-12T22:00:04.528113Z", "iopub.status.busy": "2021-02-12T22:00:04.527355Z", "iopub.status.idle": "2021-02-12T22:00:11.562300Z", "shell.execute_reply": "2021-02-12T22:00:11.561649Z" }, "id": "1ZAoFNwnRbKK" }, "outputs": [], "source": [ "from __future__ import absolute_import\n", "from __future__ import division\n", "from __future__ import print_function\n", "\n", "import abc\n", "import tensorflow as tf\n", "import numpy as np\n", "\n", "from tf_agents.environments import py_environment\n", "from tf_agents.environments import tf_environment\n", "from tf_agents.environments import tf_py_environment\n", "from tf_agents.environments import utils\n", "from tf_agents.specs import array_spec\n", "from tf_agents.environments import wrappers\n", "from tf_agents.environments import suite_gym\n", "from tf_agents.trajectories import time_step as ts\n", "\n", "tf.compat.v1.enable_v2_behavior()" ] }, { "cell_type": "markdown", "metadata": { "id": "x-y4p9i9UURn" }, "source": [ "## Python 環境" ] }, { "cell_type": "markdown", "metadata": { "id": "JPSwHONKMNv9" }, "source": [ "Python環境には、環境に行動を適用し、次のステップに関する以下の情報を返す `step(action) -> next_time_step` メソッドがあります。\n", "\n", "1. `observation`:これは、エージェントが次のステップで行動を選択するために観察できる環境状態の一部です。\n", "2. `reward`:エージェントは、複数のステップにわたってこれらの報酬の合計を最大化することを学習します。\n", "3. `step_type`:環境との相互作用は通常、シーケンス/エピソードの一部です (チェスのゲームで複数の動きがあるように)。step_type は、`FIRST`、`MID`または`LAST`のいずれかで、このタイムステップがシーケンスの最初、中間、または最後のステップかどうかを示します。\n", "4. `discount`:これは、現在のタイムステップでの報酬に対する次のタイムステップでの報酬の重み付けを表す浮動小数です。\n", "\n", "これらは、名前付きタプル`TimeStep(step_type, reward, discount, observation)`にグループ化されます。\n", "\n", "すべての Python 環境で実装する必要があるインターフェースは、`environments/py_environment.PyEnvironment`です。主なメソッドは、以下のとおりです。" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "execution": { "iopub.execute_input": "2021-02-12T22:00:11.570803Z", "iopub.status.busy": "2021-02-12T22:00:11.570098Z", "iopub.status.idle": "2021-02-12T22:00:11.572441Z", "shell.execute_reply": "2021-02-12T22:00:11.571913Z" }, "id": "GlD2Dd2vUTtg" }, "outputs": [], "source": [ "class PyEnvironment(object):\n", "\n", " def reset(self):\n", " \"\"\"Return initial_time_step.\"\"\"\n", " self._current_time_step = self._reset()\n", " return self._current_time_step\n", "\n", " def step(self, action):\n", " \"\"\"Apply action and return new time_step.\"\"\"\n", " if self._current_time_step is None:\n", " return self.reset()\n", " self._current_time_step = self._step(action)\n", " return self._current_time_step\n", "\n", " def current_time_step(self):\n", " return self._current_time_step\n", "\n", " def time_step_spec(self):\n", " \"\"\"Return time_step_spec.\"\"\"\n", "\n", " @abc.abstractmethod\n", " def observation_spec(self):\n", " \"\"\"Return observation_spec.\"\"\"\n", "\n", " @abc.abstractmethod\n", " def action_spec(self):\n", " \"\"\"Return action_spec.\"\"\"\n", "\n", " @abc.abstractmethod\n", " def _reset(self):\n", " \"\"\"Return initial_time_step.\"\"\"\n", "\n", " @abc.abstractmethod\n", " def _step(self, action):\n", " \"\"\"Apply action and return new time_step.\"\"\"\n", " self._current_time_step = self._step(action)\n", " return self._current_time_step" ] }, { "cell_type": "markdown", "metadata": { "id": "zfF8koryiGPR" }, "source": [ " `step()`メソッドに加えて、環境では、新しいシーケンスを開始して新規`TimeStep`を提供する`reset()`メソッドも提供されます。`reset`メソッドを明示的に呼び出す必要はありません。エピソードの最後、またはstep()が初めて呼び出されたときに、環境は自動的にリセットされると想定されています。\n", "\n", "サブクラスは`step()`または`reset()`を直接実装しないことに注意してください。代わりに、`_step()`および`_reset()`メソッドをオーバーライドします。これらのメソッドから返されたタイムステップはキャッシュされ、`current_time_step()`を通じて公開されます。\n", "\n", "`observation_spec`および`action_spec`メソッドは`(Bounded)ArraySpecs`のネストを返します。このネストは観測と行動の名前、形状、データ型、範囲をそれぞれ記述します。\n", "\n", "TF-Agent では、リスト、タプル、名前付きタプル、またはディクショナリからなるツリー構造で定義されるネストを繰り返し参照します。これらは、観察と行動の構造を維持するために任意に構成できます。これは、多くの観察と行動がある、より複雑な環境で非常に役立ちます。" ] }, { "cell_type": "markdown", "metadata": { "id": "r63R-RbjcIRw" }, "source": [ "### 標準環境の使用\n", "\n", "TF Agent には、`py_environment.PyEnvironment`インターフェースに準拠するように、OpenAI Gym、DeepMind-control、Atari などの多くの標準環境用のラッパーが組み込まれていています。これらのラップされた環境は、環境スイートを使用して簡単に読み込めます。OpenAI Gym から CartPole 環境を読み込み、行動と time_step_spec を見てみましょう。" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "execution": { "iopub.execute_input": "2021-02-12T22:00:11.577993Z", "iopub.status.busy": "2021-02-12T22:00:11.577264Z", "iopub.status.idle": "2021-02-12T22:00:11.698683Z", "shell.execute_reply": "2021-02-12T22:00:11.698130Z" }, "id": "1kBPE5T-nb2-" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "action_spec: BoundedArraySpec(shape=(), dtype=dtype('int64'), name='action', minimum=0, maximum=1)\n", "time_step_spec.observation: BoundedArraySpec(shape=(4,), dtype=dtype('float32'), name='observation', minimum=[-4.8000002e+00 -3.4028235e+38 -4.1887903e-01 -3.4028235e+38], maximum=[4.8000002e+00 3.4028235e+38 4.1887903e-01 3.4028235e+38])\n", "time_step_spec.step_type: ArraySpec(shape=(), dtype=dtype('int32'), name='step_type')\n", "time_step_spec.discount: BoundedArraySpec(shape=(), dtype=dtype('float32'), name='discount', minimum=0.0, maximum=1.0)\n", "time_step_spec.reward: ArraySpec(shape=(), dtype=dtype('float32'), name='reward')\n" ] } ], "source": [ "environment = suite_gym.load('CartPole-v0')\n", "print('action_spec:', environment.action_spec())\n", "print('time_step_spec.observation:', environment.time_step_spec().observation)\n", "print('time_step_spec.step_type:', environment.time_step_spec().step_type)\n", "print('time_step_spec.discount:', environment.time_step_spec().discount)\n", "print('time_step_spec.reward:', environment.time_step_spec().reward)\n" ] }, { "cell_type": "markdown", "metadata": { "id": "vWXOC863Apo_" }, "source": [ "環境は [0, 1] の`int64`タイプの行動を予期し、 `TimeSteps`を返します。観測値は長さ 4 の`float32`ベクトルであり、割引係数は [0.0, 1.0] の`float32`です。では、エピソード全体に対して固定した行動`(1,)`を実行してみましょう。" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "execution": { "iopub.execute_input": "2021-02-12T22:00:11.713431Z", "iopub.status.busy": "2021-02-12T22:00:11.712644Z", "iopub.status.idle": "2021-02-12T22:00:11.715658Z", "shell.execute_reply": "2021-02-12T22:00:11.715076Z" }, "id": "AzIbOJ0-0y12" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "TimeStep(step_type=array(0, dtype=int32), reward=array(0., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.02262211, -0.01128484, -0.01841794, 0.00293427], dtype=float32))\n", "TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.02239642, 0.18409634, -0.01835925, -0.29550236], dtype=float32))\n", "TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.02607834, 0.37947515, -0.0242693 , -0.5939185 ], dtype=float32))\n", "TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.03366784, 0.5749282 , -0.03614767, -0.89414626], dtype=float32))\n", "TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.04516641, 0.7705213 , -0.05403059, -1.1979693 ], dtype=float32))\n", "TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.06057683, 0.96629924, -0.07798998, -1.5070848 ], dtype=float32))\n", "TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.07990282, 1.1622754 , -0.10813168, -1.8230615 ], dtype=float32))\n", "TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.10314833, 1.358419 , -0.14459291, -2.1472874 ], dtype=float32))\n", "TimeStep(step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32), observation=array([ 0.1303167 , 1.5546396 , -0.18753865, -2.480909 ], dtype=float32))\n", "TimeStep(step_type=array(2, dtype=int32), reward=array(1., dtype=float32), discount=array(0., dtype=float32), observation=array([ 0.1614095 , 1.7507701 , -0.23715684, -2.8247602 ], dtype=float32))\n" ] } ], "source": [ "action = np.array(1, dtype=np.int32)\n", "time_step = environment.reset()\n", "print(time_step)\n", "while not time_step.is_last():\n", " time_step = environment.step(action)\n", " print(time_step)" ] }, { "cell_type": "markdown", "metadata": { "id": "4xAbBl4_PMtA" }, "source": [ "### 独自 Python 環境の作成\n", "\n", "多くの場合、一般的に、TF-Agent の標準エージェント (agents/を参照) の 1 つが問題に適用されます。そのためには、問題を環境としてまとめる必要があります。 次に、Python で環境を実装する方法を見てみましょう。\n", "\n", "次の (ブラックジャックのような ) カードゲームをプレイするようにエージェントをトレーニングするとします。\n", "\n", "1. ゲームは、1~10 の数値が付けられた無限のカード一式を使用してプレイします。\n", "2. 毎回、エージェントは2つの行動 (新しいランダムカードを取得する、またはその時点のラウンドを停止する) を実行できます。\n", "3. 目標はラウンド終了時にカードの合計を 21 にできるだけ近づけることです。\n", "\n", "ゲームを表す環境は次のようになります。\n", "\n", "1. 行動:2 つの行動があります( 行動 0:新しいカードを取得、行動1:その時点のラウンドを終了)。\n", "2. 観察:その時点のラウンドのカードの合計。\n", "3. 報酬:目標は、21 にできるだけ近づけることなので、ラウンド終了時に次の報酬を使用します。sum_of_cards - 21 if sum_of_cards <= 21, else -21\n" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "execution": { "iopub.execute_input": "2021-02-12T22:00:11.726054Z", "iopub.status.busy": "2021-02-12T22:00:11.725292Z", "iopub.status.idle": "2021-02-12T22:00:11.727496Z", "shell.execute_reply": "2021-02-12T22:00:11.727939Z" }, "id": "9HD0cDykPL6I" }, "outputs": [], "source": [ "class CardGameEnv(py_environment.PyEnvironment):\n", "\n", " def __init__(self):\n", " self._action_spec = array_spec.BoundedArraySpec(\n", " shape=(), dtype=np.int32, minimum=0, maximum=1, name='action')\n", " self._observation_spec = array_spec.BoundedArraySpec(\n", " shape=(1,), dtype=np.int32, minimum=0, name='observation')\n", " self._state = 0\n", " self._episode_ended = False\n", "\n", " def action_spec(self):\n", " return self._action_spec\n", "\n", " def observation_spec(self):\n", " return self._observation_spec\n", "\n", " def _reset(self):\n", " self._state = 0\n", " self._episode_ended = False\n", " return ts.restart(np.array([self._state], dtype=np.int32))\n", "\n", " def _step(self, action):\n", "\n", " if self._episode_ended:\n", " # The last action ended the episode. Ignore the current action and start\n", " # a new episode.\n", " return self.reset()\n", "\n", " # Make sure episodes don't go on forever.\n", " if action == 1:\n", " self._episode_ended = True\n", " elif action == 0:\n", " new_card = np.random.randint(1, 11)\n", " self._state += new_card\n", " else:\n", " raise ValueError('`action` should be 0 or 1.')\n", "\n", " if self._episode_ended or self._state >= 21:\n", " reward = self._state - 21 if self._state <= 21 else -21\n", " return ts.termination(np.array([self._state], dtype=np.int32), reward)\n", " else:\n", " return ts.transition(\n", " np.array([self._state], dtype=np.int32), reward=0.0, discount=1.0)" ] }, { "cell_type": "markdown", "metadata": { "id": "LYEwyX7QsqeX" }, "source": [ "上記の環境がすべて正しく定義されていることを確認しましょう。独自の環境を作成する場合、生成された観測と time_steps が仕様で定義されている正しい形状とタイプに従っていることを確認する必要があります。これらは TensorFlow グラフの生成に使用されるため、問題が発生するとデバッグが困難になる可能性があります。\n", "\n", "この環境を検証するために、ランダムなポリシーを使用して行動を生成し、5 つのエピソードでイテレーションを実行し、意図したとおりに機能していることを確認します。環境の仕様に従っていない time_step を受け取ると、エラーが発生します。" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "execution": { "iopub.execute_input": "2021-02-12T22:00:11.744928Z", "iopub.status.busy": "2021-02-12T22:00:11.744143Z", "iopub.status.idle": "2021-02-12T22:00:11.746936Z", "shell.execute_reply": "2021-02-12T22:00:11.746317Z" }, "id": "6Hhm-5R7spVx" }, "outputs": [], "source": [ "environment = CardGameEnv()\n", "utils.validate_py_environment(environment, episodes=5)" ] }, { "cell_type": "markdown", "metadata": { "id": "_36eM7MvkNOg" }, "source": [ "環境が意図するとおりに機能していることが確認できたので、固定ポリシーを使用してこの環境を実行してみましょう。3 枚のカードを要求して、ラウンドを終了します。" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "execution": { "iopub.execute_input": "2021-02-12T22:00:11.757855Z", "iopub.status.busy": "2021-02-12T22:00:11.756856Z", "iopub.status.idle": "2021-02-12T22:00:11.760608Z", "shell.execute_reply": "2021-02-12T22:00:11.760042Z" }, "id": "FILylafAkMEx" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "TimeStep(step_type=array(0, dtype=int32), reward=array(0., dtype=float32), discount=array(1., dtype=float32), observation=array([0], dtype=int32))\n", "TimeStep(step_type=array(1, dtype=int32), reward=array(0., dtype=float32), discount=array(1., dtype=float32), observation=array([6], dtype=int32))\n", "TimeStep(step_type=array(1, dtype=int32), reward=array(0., dtype=float32), discount=array(1., dtype=float32), observation=array([11], dtype=int32))\n", "TimeStep(step_type=array(1, dtype=int32), reward=array(0., dtype=float32), discount=array(1., dtype=float32), observation=array([19], dtype=int32))\n", "TimeStep(step_type=array(2, dtype=int32), reward=array(-2., dtype=float32), discount=array(0., dtype=float32), observation=array([19], dtype=int32))\n", "Final Reward = -2.0\n" ] } ], "source": [ "get_new_card_action = np.array(0, dtype=np.int32)\n", "end_round_action = np.array(1, dtype=np.int32)\n", "\n", "environment = CardGameEnv()\n", "time_step = environment.reset()\n", "print(time_step)\n", "cumulative_reward = time_step.reward\n", "\n", "for _ in range(3):\n", " time_step = environment.step(get_new_card_action)\n", " print(time_step)\n", " cumulative_reward += time_step.reward\n", "\n", "time_step = environment.step(end_round_action)\n", "print(time_step)\n", "cumulative_reward += time_step.reward\n", "print('Final Reward = ', cumulative_reward)" ] }, { "cell_type": "markdown", "metadata": { "id": "_vBLPN3ioyGx" }, "source": [ "### 環境ラッパー\n", "\n", "環境ラッパーは Python 環境を取り、環境の変更されたバージョンを返します。元の環境と変更された環境はどちらも`py_environment.PyEnvironment`のインスタンスであり、複数のラッパーをチェーン化することもできます。\n", "\n", "一般的なラッパーは`environments/wrappers.py`にあります。 例:\n", "\n", "1. `ActionDiscretizeWrapper`:連続空間で定義された行動を離散化された行動に変換します。\n", "2. `RunStats`: 実行したステップ数、完了したエピソード数など、環境の実行統計をキャプチャします。\n", "3. `TimeLimit`:一定のステップ数の後にエピソードを終了します。\n" ] }, { "cell_type": "markdown", "metadata": { "id": "_8aIybRdnFfb" }, "source": [ "#### 例1:行動離散化ラッパー" ] }, { "cell_type": "markdown", "metadata": { "id": "YIaxJRUpvfyc" }, "source": [ "InvertedPendulumは、`[-2, 2]`の範囲の連続行動を受け入れる PyBullet 環境です。この環境で DQN などの離散行動エージェントをトレーニングする場合は、行動空間を離散化(量子化)する必要があります。`ActionDiscretizeWrapper`は、これを行います。ラップ前とラップ後の`action_spec`を比較しましょう。" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "execution": { "iopub.execute_input": "2021-02-12T22:00:11.768033Z", "iopub.status.busy": "2021-02-12T22:00:11.767202Z", "iopub.status.idle": "2021-02-12T22:00:11.770586Z", "shell.execute_reply": "2021-02-12T22:00:11.770087Z" }, "id": "AJxEoZ4HoyjR" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Action Spec: BoundedArraySpec(shape=(1,), dtype=dtype('float32'), name='action', minimum=-2.0, maximum=2.0)\n", "Discretized Action Spec: BoundedArraySpec(shape=(), dtype=dtype('int32'), name='action', minimum=0, maximum=4)\n" ] } ], "source": [ "env = suite_gym.load('Pendulum-v0')\n", "print('Action Spec:', env.action_spec())\n", "\n", "discrete_action_env = wrappers.ActionDiscretizeWrapper(env, num_actions=5)\n", "print('Discretized Action Spec:', discrete_action_env.action_spec())" ] }, { "cell_type": "markdown", "metadata": { "id": "njFjW8bmwTWJ" }, "source": [ "ラップされた`discrete_action_env`は、`py_environment.PyEnvironment`のインスタンスで、通常の python 環境のように扱うことができます。\n" ] }, { "cell_type": "markdown", "metadata": { "id": "8l5dwAhsP_F_" }, "source": [ "## TensorFlow 環境" ] }, { "cell_type": "markdown", "metadata": { "id": "iZG39AjBkTjr" }, "source": [ "TF 環境のインターフェースは`environments/tf_environment.TFEnvironment`で定義されており、Python 環境とよく似ています。ただし、TF 環境は以下の点で Python 環境と異なります。\n", "\n", "- 配列の代わりにテンソルオブジェクトを生成する\n", "- TF 環境は、仕様と比較したときに生成されたテンソルにバッチディメンションを追加します\n", "\n", "Python環境をTF環境に変換すると、TensorFlowで操作を並列化できます。たとえば、環境からデータを収集して`replay_buffer`に追加する`collect_experience_op`、および、`replay_buffer`から読み取り、エージェントをトレーニングする`train_op`を定義し、TensorFlowで自然に並列実行することができます。" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "execution": { "iopub.execute_input": "2021-02-12T22:00:11.777708Z", "iopub.status.busy": "2021-02-12T22:00:11.776960Z", "iopub.status.idle": "2021-02-12T22:00:11.779609Z", "shell.execute_reply": "2021-02-12T22:00:11.779092Z" }, "id": "WKBDDZqKTxsL" }, "outputs": [], "source": [ "class TFEnvironment(object):\n", "\n", " def time_step_spec(self):\n", " \"\"\"Describes the `TimeStep` tensors returned by `step()`.\"\"\"\n", "\n", " def observation_spec(self):\n", " \"\"\"Defines the `TensorSpec` of observations provided by the environment.\"\"\"\n", "\n", " def action_spec(self):\n", " \"\"\"Describes the TensorSpecs of the action expected by `step(action)`.\"\"\"\n", "\n", " def reset(self):\n", " \"\"\"Returns the current `TimeStep` after resetting the Environment.\"\"\"\n", " return self._reset()\n", "\n", " def current_time_step(self):\n", " \"\"\"Returns the current `TimeStep`.\"\"\"\n", " return self._current_time_step()\n", "\n", " def step(self, action):\n", " \"\"\"Applies the action and returns the new `TimeStep`.\"\"\"\n", " return self._step(action)\n", "\n", " @abc.abstractmethod\n", " def _reset(self):\n", " \"\"\"Returns the current `TimeStep` after resetting the Environment.\"\"\"\n", "\n", " @abc.abstractmethod\n", " def _current_time_step(self):\n", " \"\"\"Returns the current `TimeStep`.\"\"\"\n", "\n", " @abc.abstractmethod\n", " def _step(self, action):\n", " \"\"\"Applies the action and returns the new `TimeStep`.\"\"\"" ] }, { "cell_type": "markdown", "metadata": { "id": "tFkBIA92ThWf" }, "source": [ "`current_time_step()`メソッドは現在の time_step を返し、必要に応じて環境を初期化します。\n", "\n", "`reset()`メソッドは環境を強制的にリセットし、current_step を返します。\n", "\n", "`action`が以前の`time_step`に依存しない場合、`Graph`モードでは`tf.control_dependency`が必要です。\n", "\n", "ここでは、`TFEnvironments`を作成する方法を見ていきます。" ] }, { "cell_type": "markdown", "metadata": { "id": "S6wS3AaLdVLT" }, "source": [ "### 独自 TensorFlow 環境の作成\n", "\n", "これは Python で環境を作成するよりも複雑であるため、この Colab では取り上げません。例は[こちら](https://github.com/tensorflow/agents/blob/master/tf_agents/environments/tf_environment_test.py)からご覧いただけます。より一般的な使用例は、Python で環境を実装し、`TFPyEnvironment` ラッパーを使用して TensorFlow でラップすることです (以下を参照)。" ] }, { "cell_type": "markdown", "metadata": { "id": "V_Ny2lb-dU5R" }, "source": [ "### TensorFlow で Python 環境をラップ" ] }, { "cell_type": "markdown", "metadata": { "id": "Lv4-UcurZ8nb" }, "source": [ "`TFPyEnvironment`ラッパーを使用すると、任意の Python 環境を TensorFlow 環境に簡単にラップできます。" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "execution": { "iopub.execute_input": "2021-02-12T22:00:11.787101Z", "iopub.status.busy": "2021-02-12T22:00:11.786350Z", "iopub.status.idle": "2021-02-12T22:00:11.793603Z", "shell.execute_reply": "2021-02-12T22:00:11.793110Z" }, "id": "UYerqyNfnVRL" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "True\n", "TimeStep Specs: TimeStep(step_type=TensorSpec(shape=(), dtype=tf.int32, name='step_type'), reward=TensorSpec(shape=(), dtype=tf.float32, name='reward'), discount=BoundedTensorSpec(shape=(), dtype=tf.float32, name='discount', minimum=array(0., dtype=float32), maximum=array(1., dtype=float32)), observation=BoundedTensorSpec(shape=(4,), dtype=tf.float32, name='observation', minimum=array([-4.8000002e+00, -3.4028235e+38, -4.1887903e-01, -3.4028235e+38],\n", " dtype=float32), maximum=array([4.8000002e+00, 3.4028235e+38, 4.1887903e-01, 3.4028235e+38],\n", " dtype=float32)))\n", "Action Specs: BoundedTensorSpec(shape=(), dtype=tf.int64, name='action', minimum=array(0), maximum=array(1))\n" ] } ], "source": [ "env = suite_gym.load('CartPole-v0')\n", "tf_env = tf_py_environment.TFPyEnvironment(env)\n", "\n", "print(isinstance(tf_env, tf_environment.TFEnvironment))\n", "print(\"TimeStep Specs:\", tf_env.time_step_spec())\n", "print(\"Action Specs:\", tf_env.action_spec())" ] }, { "cell_type": "markdown", "metadata": { "id": "z3WFrnX9CNpC" }, "source": [ "仕様のタイプが`(Bounded)TensorSpec`になっていることに注意してください。" ] }, { "cell_type": "markdown", "metadata": { "id": "vQPvC1ARYALj" }, "source": [ "### 使用例" ] }, { "cell_type": "markdown", "metadata": { "id": "ov7EIrk8dKUU" }, "source": [ "#### 簡単な例" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "execution": { "iopub.execute_input": "2021-02-12T22:00:11.802526Z", "iopub.status.busy": "2021-02-12T22:00:11.801895Z", "iopub.status.idle": "2021-02-12T22:00:13.589257Z", "shell.execute_reply": "2021-02-12T22:00:13.589679Z" }, "id": "gdvFqUqbdB7u" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[TimeStep(step_type=array([0], dtype=int32), reward=array([0.], dtype=float32), discount=array([1.], dtype=float32), observation=array([[ 0.02205312, -0.02536285, 0.03750031, -0.03571422]],\n", " dtype=float32)), array([0], dtype=int32), TimeStep(step_type=array([1], dtype=int32), reward=array([1.], dtype=float32), discount=array([1.], dtype=float32), observation=array([[ 0.02154586, -0.22100194, 0.03678603, 0.2685606 ]],\n", " dtype=float32))]\n", "[TimeStep(step_type=array([1], dtype=int32), reward=array([1.], dtype=float32), discount=array([1.], dtype=float32), observation=array([[ 0.02154586, -0.22100194, 0.03678603, 0.2685606 ]],\n", " dtype=float32)), array([1], dtype=int32), TimeStep(step_type=array([1], dtype=int32), reward=array([1.], dtype=float32), discount=array([1.], dtype=float32), observation=array([[ 0.01712583, -0.02642375, 0.04215724, -0.01229658]],\n", " dtype=float32))]\n", "[TimeStep(step_type=array([1], dtype=int32), reward=array([1.], dtype=float32), discount=array([1.], dtype=float32), observation=array([[ 0.01712583, -0.02642375, 0.04215724, -0.01229658]],\n", " dtype=float32)), array([0], dtype=int32), TimeStep(step_type=array([1], dtype=int32), reward=array([1.], dtype=float32), discount=array([1.], dtype=float32), observation=array([[ 0.01659735, -0.22212414, 0.04191131, 0.29338375]],\n", " dtype=float32))]\n", "Total reward: [3.]\n" ] } ], "source": [ "env = suite_gym.load('CartPole-v0')\n", "\n", "tf_env = tf_py_environment.TFPyEnvironment(env)\n", "# reset() creates the initial time_step after resetting the environment.\n", "time_step = tf_env.reset()\n", "num_steps = 3\n", "transitions = []\n", "reward = 0\n", "for i in range(num_steps):\n", " action = tf.constant([i % 2])\n", " # applies the action and returns the new TimeStep.\n", " next_time_step = tf_env.step(action)\n", " transitions.append([time_step, action, next_time_step])\n", " reward += next_time_step.reward\n", " time_step = next_time_step\n", "\n", "np_transitions = tf.nest.map_structure(lambda x: x.numpy(), transitions)\n", "print('\\n'.join(map(str, np_transitions)))\n", "print('Total reward:', reward.numpy())" ] }, { "cell_type": "markdown", "metadata": { "id": "kWs48LNsdLnc" }, "source": [ "#### 全エピソード" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "execution": { "iopub.execute_input": "2021-02-12T22:00:13.599388Z", "iopub.status.busy": "2021-02-12T22:00:13.598546Z", "iopub.status.idle": "2021-02-12T22:00:13.717169Z", "shell.execute_reply": "2021-02-12T22:00:13.716635Z" }, "id": "t561kUXMk-KM" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "num_episodes: 5 num_steps: 87\n", "avg_length 17.4 avg_reward: 17.4\n" ] } ], "source": [ "env = suite_gym.load('CartPole-v0')\n", "tf_env = tf_py_environment.TFPyEnvironment(env)\n", "\n", "time_step = tf_env.reset()\n", "rewards = []\n", "steps = []\n", "num_episodes = 5\n", "\n", "for _ in range(num_episodes):\n", " episode_reward = 0\n", " episode_steps = 0\n", " while not time_step.is_last():\n", " action = tf.random.uniform([1], 0, 2, dtype=tf.int32)\n", " time_step = tf_env.step(action)\n", " episode_steps += 1\n", " episode_reward += time_step.reward.numpy()\n", " rewards.append(episode_reward)\n", " steps.append(episode_steps)\n", " time_step = tf_env.reset()\n", "\n", "num_steps = np.sum(steps)\n", "avg_length = np.mean(steps)\n", "avg_reward = np.mean(rewards)\n", "\n", "print('num_episodes:', num_episodes, 'num_steps:', num_steps)\n", "print('avg_length', avg_length, 'avg_reward:', avg_reward)" ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "2_environments_tutorial.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.9" } }, "nbformat": 4, "nbformat_minor": 0 }