{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "klGNgWREsvQv" }, "source": [ "##### Copyright 2023 The TF-Agents Authors." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "cellView": "form", "execution": { "iopub.execute_input": "2024-03-09T12:24:25.231692Z", "iopub.status.busy": "2024-03-09T12:24:25.231193Z", "iopub.status.idle": "2024-03-09T12:24:25.234825Z", "shell.execute_reply": "2024-03-09T12:24:25.234214Z" }, "id": "nQnmcm0oI1Q-" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "HqslkUeyEJFg" }, "source": [ "# Tutorial on Multi Armed Bandits in TF-Agents" ] }, { "cell_type": "markdown", "metadata": { "id": "MimUC9NrYFaS" }, "source": [ "### Get Started\n", "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", " View on TensorFlow.org\n", " \n", " \n", " \n", " Run in Google Colab\n", " \n", " \n", " \n", " View source on GitHub\n", " \n", " Download notebook\n", "
\n" ] }, { "cell_type": "markdown", "metadata": { "id": "1u9QVVsShC9X" }, "source": [ "### Setup" ] }, { "cell_type": "markdown", "metadata": { "id": "kNrNXKI7bINP" }, "source": [ "If you haven't installed the following dependencies, run:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:25.238441Z", "iopub.status.busy": "2024-03-09T12:24:25.237958Z", "iopub.status.idle": "2024-03-09T12:24:34.765217Z", "shell.execute_reply": "2024-03-09T12:24:34.764342Z" }, "id": "KEHR2Ui-lo8O" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Collecting tf-agents\r\n", " Using cached tf_agents-0.19.0-py3-none-any.whl.metadata (12 kB)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: absl-py>=0.6.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tf-agents) (1.4.0)\r\n", "Collecting cloudpickle>=1.3 (from tf-agents)\r\n", " Using cached cloudpickle-3.0.0-py3-none-any.whl.metadata (7.0 kB)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Collecting gin-config>=0.4.0 (from tf-agents)\r\n", " Using cached gin_config-0.5.0-py3-none-any.whl.metadata (2.9 kB)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Collecting gym<=0.23.0,>=0.17.0 (from tf-agents)\r\n", " Using cached gym-0.23.0-py3-none-any.whl\r\n", "Requirement already satisfied: numpy>=1.19.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tf-agents) (1.26.4)\r\n", "Requirement already satisfied: pillow in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tf-agents) (10.2.0)\r\n", "Requirement already satisfied: six>=1.10.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tf-agents) (1.16.0)\r\n", "Requirement already satisfied: protobuf>=3.11.3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tf-agents) (3.20.3)\r\n", "Requirement already satisfied: wrapt>=1.11.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tf-agents) (1.16.0)\r\n", "Collecting typing-extensions==4.5.0 (from tf-agents)\r\n", " Using cached typing_extensions-4.5.0-py3-none-any.whl.metadata (8.5 kB)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Collecting pygame==2.1.3 (from tf-agents)\r\n", " Using cached pygame-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (9.3 kB)\r\n", "Collecting tensorflow-probability~=0.23.0 (from tf-agents)\r\n", " Using cached tensorflow_probability-0.23.0-py2.py3-none-any.whl.metadata (13 kB)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Collecting gym-notices>=0.0.4 (from gym<=0.23.0,>=0.17.0->tf-agents)\r\n", " Using cached gym_notices-0.0.8-py3-none-any.whl.metadata (1.0 kB)\r\n", "Requirement already satisfied: importlib-metadata>=4.10.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from gym<=0.23.0,>=0.17.0->tf-agents) (7.0.2)\r\n", "Requirement already satisfied: decorator in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow-probability~=0.23.0->tf-agents) (5.1.1)\r\n", "Requirement already satisfied: gast>=0.3.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow-probability~=0.23.0->tf-agents) (0.5.4)\r\n", "Requirement already satisfied: dm-tree in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow-probability~=0.23.0->tf-agents) (0.1.8)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: zipp>=0.5 in 
/tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from importlib-metadata>=4.10.0->gym<=0.23.0,>=0.17.0->tf-agents) (3.17.0)\r\n", "Using cached tf_agents-0.19.0-py3-none-any.whl (1.4 MB)\r\n", "Using cached pygame-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.7 MB)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Using cached typing_extensions-4.5.0-py3-none-any.whl (27 kB)\r\n", "Using cached cloudpickle-3.0.0-py3-none-any.whl (20 kB)\r\n", "Using cached gin_config-0.5.0-py3-none-any.whl (61 kB)\r\n", "Using cached tensorflow_probability-0.23.0-py2.py3-none-any.whl (6.9 MB)\r\n", "Using cached gym_notices-0.0.8-py3-none-any.whl (3.0 kB)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Installing collected packages: gym-notices, gin-config, typing-extensions, pygame, cloudpickle, tensorflow-probability, gym, tf-agents\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " Attempting uninstall: typing-extensions\r\n", " Found existing installation: typing_extensions 4.10.0\r\n", " Uninstalling typing_extensions-4.10.0:\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " Successfully uninstalled typing_extensions-4.10.0\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Successfully installed cloudpickle-3.0.0 gin-config-0.5.0 gym-0.23.0 gym-notices-0.0.8 pygame-2.1.3 tensorflow-probability-0.23.0 tf-agents-0.19.0 typing-extensions-4.5.0\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: tf-keras in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (2.16.0)\r\n", "Requirement already satisfied: tensorflow<2.17,>=2.16 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tf-keras) (2.16.1)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: absl-py>=1.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (1.4.0)\r\n", "Requirement already satisfied: astunparse>=1.6.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (1.6.3)\r\n", "Requirement already satisfied: flatbuffers>=23.5.26 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (24.3.7)\r\n", "Requirement already satisfied: gast!=0.5.0,!=0.5.1,!=0.5.2,>=0.2.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (0.5.4)\r\n", "Requirement already satisfied: google-pasta>=0.1.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (0.2.0)\r\n", "Requirement already satisfied: h5py>=3.10.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (3.10.0)\r\n", "Requirement already satisfied: libclang>=13.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (16.0.6)\r\n", "Requirement already satisfied: ml-dtypes~=0.3.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (0.3.2)\r\n", "Requirement already satisfied: opt-einsum>=2.3.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (3.3.0)\r\n", "Requirement already satisfied: packaging in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (23.2)\r\n", "Requirement already satisfied: protobuf!=4.21.0,!=4.21.1,!=4.21.2,!=4.21.3,!=4.21.4,!=4.21.5,<5.0.0dev,>=3.20.3 in 
/tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (3.20.3)\r\n", "Requirement already satisfied: requests<3,>=2.21.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (2.31.0)\r\n", "Requirement already satisfied: setuptools in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (69.1.1)\r\n", "Requirement already satisfied: six>=1.12.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (1.16.0)\r\n", "Requirement already satisfied: termcolor>=1.1.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (2.4.0)\r\n", "Requirement already satisfied: typing-extensions>=3.6.6 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (4.5.0)\r\n", "Requirement already satisfied: wrapt>=1.11.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (1.16.0)\r\n", "Requirement already satisfied: grpcio<2.0,>=1.24.3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (1.62.1)\r\n", "Requirement already satisfied: tensorboard<2.17,>=2.16 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (2.16.2)\r\n", "Requirement already satisfied: keras>=3.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (3.0.5)\r\n", "Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (0.36.0)\r\n", "Requirement already satisfied: numpy<2.0.0,>=1.23.5 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.17,>=2.16->tf-keras) (1.26.4)\r\n", "Requirement already satisfied: wheel<1.0,>=0.23.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from astunparse>=1.6.0->tensorflow<2.17,>=2.16->tf-keras) (0.41.2)\r\n", "Requirement already satisfied: rich in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from keras>=3.0.0->tensorflow<2.17,>=2.16->tf-keras) (13.7.1)\r\n", "Requirement already satisfied: namex in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from keras>=3.0.0->tensorflow<2.17,>=2.16->tf-keras) (0.0.7)\r\n", "Requirement already satisfied: dm-tree in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from keras>=3.0.0->tensorflow<2.17,>=2.16->tf-keras) (0.1.8)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: charset-normalizer<4,>=2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from requests<3,>=2.21.0->tensorflow<2.17,>=2.16->tf-keras) (3.3.2)\r\n", "Requirement already satisfied: idna<4,>=2.5 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from requests<3,>=2.21.0->tensorflow<2.17,>=2.16->tf-keras) (3.6)\r\n", "Requirement already satisfied: urllib3<3,>=1.21.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from requests<3,>=2.21.0->tensorflow<2.17,>=2.16->tf-keras) (2.2.1)\r\n", "Requirement already satisfied: certifi>=2017.4.17 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from requests<3,>=2.21.0->tensorflow<2.17,>=2.16->tf-keras) (2024.2.2)\r\n", "Requirement already satisfied: markdown>=2.6.8 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorboard<2.17,>=2.16->tensorflow<2.17,>=2.16->tf-keras) (3.5.2)\r\n", "Requirement already satisfied: 
tensorboard-data-server<0.8.0,>=0.7.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorboard<2.17,>=2.16->tensorflow<2.17,>=2.16->tf-keras) (0.7.2)\r\n", "Requirement already satisfied: werkzeug>=1.0.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorboard<2.17,>=2.16->tensorflow<2.17,>=2.16->tf-keras) (3.0.1)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: importlib-metadata>=4.4 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from markdown>=2.6.8->tensorboard<2.17,>=2.16->tensorflow<2.17,>=2.16->tf-keras) (7.0.2)\r\n", "Requirement already satisfied: MarkupSafe>=2.1.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from werkzeug>=1.0.1->tensorboard<2.17,>=2.16->tensorflow<2.17,>=2.16->tf-keras) (2.1.5)\r\n", "Requirement already satisfied: markdown-it-py>=2.2.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from rich->keras>=3.0.0->tensorflow<2.17,>=2.16->tf-keras) (3.0.0)\r\n", "Requirement already satisfied: pygments<3.0.0,>=2.13.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from rich->keras>=3.0.0->tensorflow<2.17,>=2.16->tf-keras) (2.17.2)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: zipp>=0.5 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard<2.17,>=2.16->tensorflow<2.17,>=2.16->tf-keras) (3.17.0)\r\n", "Requirement already satisfied: mdurl~=0.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from markdown-it-py>=2.2.0->rich->keras>=3.0.0->tensorflow<2.17,>=2.16->tf-keras) (0.1.2)\r\n" ] } ], "source": [ "!pip install tf-agents\n", "!pip install tf-keras" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:34.769181Z", "iopub.status.busy": "2024-03-09T12:24:34.768904Z", "iopub.status.idle": "2024-03-09T12:24:34.772797Z", "shell.execute_reply": "2024-03-09T12:24:34.772275Z" }, "id": "WPuD0bMEY9Iz" }, "outputs": [], "source": [ "import os\n", "# Keep using keras-2 (tf-keras) rather than keras-3 (keras).\n", "os.environ['TF_USE_LEGACY_KERAS'] = '1'" ] }, { "cell_type": "markdown", "metadata": { "id": "O7gLdUS6b2EG" }, "source": [ "### Imports" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:34.775472Z", "iopub.status.busy": "2024-03-09T12:24:34.775251Z", "iopub.status.idle": "2024-03-09T12:24:37.647347Z", "shell.execute_reply": "2024-03-09T12:24:37.646329Z" }, "id": "3oCS94Z83Jo2" }, "outputs": [], "source": [ "import abc\n", "import numpy as np\n", "import tensorflow as tf\n", "\n", "from tf_agents.agents import tf_agent\n", "from tf_agents.drivers import driver\n", "from tf_agents.environments import py_environment\n", "from tf_agents.environments import tf_environment\n", "from tf_agents.environments import tf_py_environment\n", "from tf_agents.policies import tf_policy\n", "from tf_agents.specs import array_spec\n", "from tf_agents.specs import tensor_spec\n", "from tf_agents.trajectories import time_step as ts\n", "from tf_agents.trajectories import trajectory\n", "from tf_agents.trajectories import policy_step\n", "\n", "nest = tf.nest" ] }, { "cell_type": "markdown", "metadata": { "id": "CcIob6rYqien" }, "source": [ "# Introduction\n" ] }, { "cell_type": "markdown", "metadata": { "id": "JdnTJrzaeft3" }, "source": [ "The Multi-Armed Bandit problem (MAB) is a special case of Reinforcement Learning: an agent 
collects rewards in an environment by taking some actions after observing some state of the environment. The main difference between general RL and MAB is that in MAB, we assume that the action taken by the agent does not influence the next state of the environment. Therefore, agents do not model state transitions, credit rewards to past actions, or \"plan ahead\" to get to reward-rich states.\n", "\n", "As in other RL domains, the goal of a MAB *agent* is to find a *policy* that collects as much reward as possible. It would be a mistake, however, to always try to exploit the action that promises the highest reward, because then there is a chance that we miss out on better actions if we do not explore enough. This is the main problem to be solved in (MAB), often called the *exploration-exploitation dilemma*.\n", "\n", "\n", "\n", "Bandit environments, policies, and agents for MAB can be found in subdirectories of [tf_agents/bandits](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits)." ] }, { "cell_type": "markdown", "metadata": { "id": "iPzsBCTperx3" }, "source": [ "# Environments" ] }, { "cell_type": "markdown", "metadata": { "id": "1LOXW8i320Cp" }, "source": [ "In TF-Agents, the environment class serves the role of giving information on the current state (this is called **observation** or **context**), receiving an action as input, performing a state transition, and outputting a reward. This class also takes care of resetting when an episode ends, so that a new episode can start. This is realized by calling a `reset` function when a state is labelled as \"last\" of the episode.\n", "\n", "For more details, see the [TF-Agents environments tutorial](https://github.com/tensorflow/agents/blob/master/docs/tutorials/2_environments_tutorial.ipynb).\n", "\n", "As mentioned above, MAB differs from general RL in that actions do not influence the next observation. Another difference is that in Bandits, there are no \"episodes\": every time step starts with a new observation, independently of previous time steps.\n", "\n", "To make sure observations are independent and to abstract away the concept of RL episodes, we introduce subclasses of `PyEnvironment` and `TFEnvironment`: [BanditPyEnvironment](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/bandit_py_environment.py) and [BanditTFEnvironment](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/bandit_tf_environment.py). These classes expose two private member functions that remain to be implemented by the user:\n", "\n", "```python\n", "@abc.abstractmethod\n", "def _observe(self):\n", "```\n", "and\n", "```python\n", "@abc.abstractmethod\n", "def _apply_action(self, action):\n", "```\n", "The `_observe` function returns an observation. Then, the policy chooses an action based on this observation. The `_apply_action` receives that action as an input, and returns the corresponding reward. These private member functions are called by the functions `reset` and `step`, respectively." 
] }, { "cell_type": "code", "execution_count": 5, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:37.652111Z", "iopub.status.busy": "2024-03-09T12:24:37.651693Z", "iopub.status.idle": "2024-03-09T12:24:37.658804Z", "shell.execute_reply": "2024-03-09T12:24:37.657973Z" }, "id": "TTaG2ZapQvHX" }, "outputs": [], "source": [ "class BanditPyEnvironment(py_environment.PyEnvironment):\n", "\n", " def __init__(self, observation_spec, action_spec):\n", " self._observation_spec = observation_spec\n", " self._action_spec = action_spec\n", " super(BanditPyEnvironment, self).__init__()\n", "\n", " # Helper functions.\n", " def action_spec(self):\n", " return self._action_spec\n", "\n", " def observation_spec(self):\n", " return self._observation_spec\n", "\n", " def _empty_observation(self):\n", " return tf.nest.map_structure(lambda x: np.zeros(x.shape, x.dtype),\n", " self.observation_spec())\n", "\n", " # These two functions below should not be overridden by subclasses.\n", " def _reset(self):\n", " \"\"\"Returns a time step containing an observation.\"\"\"\n", " return ts.restart(self._observe(), batch_size=self.batch_size)\n", "\n", " def _step(self, action):\n", " \"\"\"Returns a time step containing the reward for the action taken.\"\"\"\n", " reward = self._apply_action(action)\n", " return ts.termination(self._observe(), reward)\n", "\n", " # These two functions below are to be implemented in subclasses.\n", " @abc.abstractmethod\n", " def _observe(self):\n", " \"\"\"Returns an observation.\"\"\"\n", "\n", " @abc.abstractmethod\n", " def _apply_action(self, action):\n", " \"\"\"Applies `action` to the Environment and returns the corresponding reward.\n", " \"\"\"" ] }, { "cell_type": "markdown", "metadata": { "id": "ZVtLk28xVo0j" }, "source": [ "The above interim abstract class implements `PyEnvironment`'s `_reset` and `_step` functions and exposes the abstract functions `_observe` and `_apply_action` to be implemented by subclasses." ] }, { "cell_type": "markdown", "metadata": { "id": "xQbI-6PdtSJn" }, "source": [ "## A Simple Example Environment Class" ] }, { "cell_type": "markdown", "metadata": { "id": "8qspwAx0tS6l" }, "source": [ "The following class gives a very simple environment for which the observation is a random integer between -2 and 2, there are 3 possible actions (0, 1, 2), and the reward is the product of the action and the observation." 
] }, { "cell_type": "code", "execution_count": 6, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:37.662204Z", "iopub.status.busy": "2024-03-09T12:24:37.661953Z", "iopub.status.idle": "2024-03-09T12:24:37.667269Z", "shell.execute_reply": "2024-03-09T12:24:37.666399Z" }, "id": "YV6DhsSi227-" }, "outputs": [], "source": [ "class SimplePyEnvironment(BanditPyEnvironment):\n", "\n", " def __init__(self):\n", " action_spec = array_spec.BoundedArraySpec(\n", " shape=(), dtype=np.int32, minimum=0, maximum=2, name='action')\n", " observation_spec = array_spec.BoundedArraySpec(\n", " shape=(1,), dtype=np.int32, minimum=-2, maximum=2, name='observation')\n", " super(SimplePyEnvironment, self).__init__(observation_spec, action_spec)\n", "\n", " def _observe(self):\n", " self._observation = np.random.randint(-2, 3, (1,), dtype='int32')\n", " return self._observation\n", "\n", " def _apply_action(self, action):\n", " return action * self._observation" ] }, { "cell_type": "markdown", "metadata": { "id": "ipEQgYDIf55t" }, "source": [ "Now we can use this environment to get observations, and receive rewards for our actions." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:37.670312Z", "iopub.status.busy": "2024-03-09T12:24:37.670033Z", "iopub.status.idle": "2024-03-09T12:24:37.676791Z", "shell.execute_reply": "2024-03-09T12:24:37.675965Z" }, "id": "Eo_uwSz2gAKX" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "observation: -2\n", "action: 2\n", "reward: -4.000000\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/tmpfs/tmp/ipykernel_30068/1543604332.py:3: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)\n", " print(\"observation: %d\" % observation)\n", "/tmpfs/tmp/ipykernel_30068/1543604332.py:9: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)\n", " print(\"reward: %f\" % reward)\n" ] } ], "source": [ "environment = SimplePyEnvironment()\n", "observation = environment.reset().observation\n", "print(\"observation: %d\" % observation)\n", "\n", "action = 2 #@param\n", "\n", "print(\"action: %d\" % action)\n", "reward = environment.step(action).reward\n", "print(\"reward: %f\" % reward)" ] }, { "cell_type": "markdown", "metadata": { "id": "GuVYHI8aDgCx" }, "source": [ "## TF Environments" ] }, { "cell_type": "markdown", "metadata": { "id": "dP46VwLTDnOR" }, "source": [ "One can define a bandit environment by subclassing `BanditTFEnvironment`, or, similarly to RL environments, one can define a `BanditPyEnvironment` and wrap it with `TFPyEnvironment`. For the sake of simplicity, we go with the latter option in this tutorial." 
] }, { "cell_type": "code", "execution_count": 8, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:37.680215Z", "iopub.status.busy": "2024-03-09T12:24:37.679501Z", "iopub.status.idle": "2024-03-09T12:24:37.689467Z", "shell.execute_reply": "2024-03-09T12:24:37.688570Z" }, "id": "IPPpwSi3EtWz" }, "outputs": [], "source": [ "tf_environment = tf_py_environment.TFPyEnvironment(environment)" ] }, { "cell_type": "markdown", "metadata": { "id": "-S9fhxF9GUaT" }, "source": [ "# Policies" ] }, { "cell_type": "markdown", "metadata": { "id": "NbTt5jnuGlYj" }, "source": [ "A *policy* in a bandit problem works the same way as in an RL problem: it provides an action (or a distribution of actions), given an observation as input.\n", "\n", "For more details, see the [TF-Agents Policy tutorial](https://github.com/tensorflow/agents/blob/master/docs/tutorials/3_policies_tutorial.ipynb).\n", "\n", "As with environments, there are two ways to construct a policy: One can create a `PyPolicy` and wrap it with `TFPyPolicy`, or directly create a `TFPolicy`. Here we elect to go with the direct method.\n", "\n", "Since this example is quite simple, we can define the optimal policy manually. The action only depends on the sign of the observation, 0 when is negative and 2 when is positive." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:37.693269Z", "iopub.status.busy": "2024-03-09T12:24:37.692551Z", "iopub.status.idle": "2024-03-09T12:24:37.698715Z", "shell.execute_reply": "2024-03-09T12:24:37.697874Z" }, "id": "VpMZlplNK5ND" }, "outputs": [], "source": [ "class SignPolicy(tf_policy.TFPolicy):\n", " def __init__(self):\n", " observation_spec = tensor_spec.BoundedTensorSpec(\n", " shape=(1,), dtype=tf.int32, minimum=-2, maximum=2)\n", " time_step_spec = ts.time_step_spec(observation_spec)\n", "\n", " action_spec = tensor_spec.BoundedTensorSpec(\n", " shape=(), dtype=tf.int32, minimum=0, maximum=2)\n", "\n", " super(SignPolicy, self).__init__(time_step_spec=time_step_spec,\n", " action_spec=action_spec)\n", " def _distribution(self, time_step):\n", " pass\n", "\n", " def _variables(self):\n", " return ()\n", "\n", " def _action(self, time_step, policy_state, seed):\n", " observation_sign = tf.cast(tf.sign(time_step.observation[0]), dtype=tf.int32)\n", " action = observation_sign + 1\n", " return policy_step.PolicyStep(action, policy_state)" ] }, { "cell_type": "markdown", "metadata": { "id": "GAM7hb4LVQ70" }, "source": [ "Now we can request an observation from the environment, call the policy to choose an action, then the environment will output the reward:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:37.702407Z", "iopub.status.busy": "2024-03-09T12:24:37.701777Z", "iopub.status.idle": "2024-03-09T12:24:40.445911Z", "shell.execute_reply": "2024-03-09T12:24:40.445182Z" }, "id": "Z0_5vMDCVZWT" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Observation:\n", "tf.Tensor([[-1]], shape=(1, 1), dtype=int32)\n", "Action:\n", "tf.Tensor([0], shape=(1,), dtype=int32)\n", "Reward:\n", "tf.Tensor([[0.]], shape=(1, 1), dtype=float32)\n" ] } ], "source": [ "sign_policy = SignPolicy()\n", "\n", "current_time_step = tf_environment.reset()\n", "print('Observation:')\n", "print (current_time_step.observation)\n", "action = sign_policy.action(current_time_step).action\n", "print('Action:')\n", "print (action)\n", "reward = 
tf_environment.step(action).reward\n", "print('Reward:')\n", "print(reward)" ] }, { "cell_type": "markdown", "metadata": { "id": "AExuQ7u0-PF6" }, "source": [ "The way bandit environments are implemented ensures that every time we take a step, we not only receive the reward for the action we took, but also the next observation." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:40.449191Z", "iopub.status.busy": "2024-03-09T12:24:40.448931Z", "iopub.status.idle": "2024-03-09T12:24:40.456703Z", "shell.execute_reply": "2024-03-09T12:24:40.456059Z" }, "id": "CiB935of-wVv" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Reward: \n", "tf.Tensor([[0.]], shape=(1, 1), dtype=float32)\n", "Next observation:\n", "tf.Tensor([[1]], shape=(1, 1), dtype=int32)\n" ] } ], "source": [ "step = tf_environment.reset()\n", "action = 1\n", "next_step = tf_environment.step(action)\n", "reward = next_step.reward\n", "next_observation = next_step.observation\n", "print(\"Reward: \")\n", "print(reward)\n", "print(\"Next observation:\")\n", "print(next_observation)" ] }, { "cell_type": "markdown", "metadata": { "id": "zFnqVHfeANZP" }, "source": [ "# Agents" ] }, { "cell_type": "markdown", "metadata": { "id": "1pDK_faXAPSA" }, "source": [ "Now that we have bandit environments and bandit policies, it is time to also define bandit agents, that take care of changing the policy based on training samples.\n", "\n", "The API for bandit agents does not differ from that of RL agents: the agent just needs to implement the `_initialize` and `_train` methods, and define a `policy` and a `collect_policy`." ] }, { "cell_type": "markdown", "metadata": { "id": "TVCb-vPJOayG" }, "source": [ "## A More Complicated Environment" ] }, { "cell_type": "markdown", "metadata": { "id": "9Ksv7i7zPGSa" }, "source": [ "Before we write our bandit agent, we need to have an environment that is a bit harder to figure out. To spice up things just a little bit, the next environment will either always give `reward = observation * action` or `reward = -observation * action`. This will be decided when the environment is initialized." 
] }, { "cell_type": "code", "execution_count": 12, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:40.459937Z", "iopub.status.busy": "2024-03-09T12:24:40.459670Z", "iopub.status.idle": "2024-03-09T12:24:40.468424Z", "shell.execute_reply": "2024-03-09T12:24:40.467827Z" }, "id": "fte7-Mr8O0QR" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "reward sign:\n", "1\n" ] } ], "source": [ "class TwoWayPyEnvironment(BanditPyEnvironment):\n", "\n", " def __init__(self):\n", " action_spec = array_spec.BoundedArraySpec(\n", " shape=(), dtype=np.int32, minimum=0, maximum=2, name='action')\n", " observation_spec = array_spec.BoundedArraySpec(\n", " shape=(1,), dtype=np.int32, minimum=-2, maximum=2, name='observation')\n", "\n", " # Flipping the sign with probability 1/2.\n", " self._reward_sign = 2 * np.random.randint(2) - 1\n", " print(\"reward sign:\")\n", " print(self._reward_sign)\n", "\n", " super(TwoWayPyEnvironment, self).__init__(observation_spec, action_spec)\n", "\n", " def _observe(self):\n", " self._observation = np.random.randint(-2, 3, (1,), dtype='int32')\n", " return self._observation\n", "\n", " def _apply_action(self, action):\n", " return self._reward_sign * action * self._observation[0]\n", "\n", "two_way_tf_environment = tf_py_environment.TFPyEnvironment(TwoWayPyEnvironment())" ] }, { "cell_type": "markdown", "metadata": { "id": "7Zb4jWpQUA75" }, "source": [ "## A More Complicated Policy" ] }, { "cell_type": "markdown", "metadata": { "id": "Dz2rEEA1USJu" }, "source": [ "A more complicated environment calls for a more complicated policy. We need a policy that detects the behavior of the underlying environment. There are three situations that the policy needs to handle:\n", "\n", "0. The agent has not detected know yet which version of the environment is running.\n", "1. The agent detected that the original version of the environment is running.\n", "2. The agent detected that the flipped version of the environment is running.\n", "\n", "We define a `tf_variable` named `_situation` to store this information encoded as values in `[0, 2]`, then make the policy behave accordingly." 
] }, { "cell_type": "code", "execution_count": 13, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:40.471536Z", "iopub.status.busy": "2024-03-09T12:24:40.471304Z", "iopub.status.idle": "2024-03-09T12:24:40.478597Z", "shell.execute_reply": "2024-03-09T12:24:40.478005Z" }, "id": "Srm2jsGHVM8N" }, "outputs": [], "source": [ "class TwoWaySignPolicy(tf_policy.TFPolicy):\n", " def __init__(self, situation):\n", " observation_spec = tensor_spec.BoundedTensorSpec(\n", " shape=(1,), dtype=tf.int32, minimum=-2, maximum=2)\n", " action_spec = tensor_spec.BoundedTensorSpec(\n", " shape=(), dtype=tf.int32, minimum=0, maximum=2)\n", " time_step_spec = ts.time_step_spec(observation_spec)\n", " self._situation = situation\n", " super(TwoWaySignPolicy, self).__init__(time_step_spec=time_step_spec,\n", " action_spec=action_spec)\n", " def _distribution(self, time_step):\n", " pass\n", "\n", " def _variables(self):\n", " return [self._situation]\n", "\n", " def _action(self, time_step, policy_state, seed):\n", " sign = tf.cast(tf.sign(time_step.observation[0, 0]), dtype=tf.int32)\n", " def case_unknown_fn():\n", " # Choose 1 so that we get information on the sign.\n", " return tf.constant(1, shape=(1,))\n", "\n", " # Choose 0 or 2, depending on the situation and the sign of the observation.\n", " def case_normal_fn():\n", " return tf.constant(sign + 1, shape=(1,))\n", " def case_flipped_fn():\n", " return tf.constant(1 - sign, shape=(1,))\n", "\n", " cases = [(tf.equal(self._situation, 0), case_unknown_fn),\n", " (tf.equal(self._situation, 1), case_normal_fn),\n", " (tf.equal(self._situation, 2), case_flipped_fn)]\n", " action = tf.case(cases, exclusive=True)\n", " return policy_step.PolicyStep(action, policy_state)" ] }, { "cell_type": "markdown", "metadata": { "id": "r6PPdRQQbE3Q" }, "source": [ "## The Agent" ] }, { "cell_type": "markdown", "metadata": { "id": "pO8HpL0tUP32" }, "source": [ "Now it's time to define the agent that detects the sign of the environment and sets the policy appropriately." 
] }, { "cell_type": "code", "execution_count": 14, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:40.481686Z", "iopub.status.busy": "2024-03-09T12:24:40.481452Z", "iopub.status.idle": "2024-03-09T12:24:40.494419Z", "shell.execute_reply": "2024-03-09T12:24:40.493839Z" }, "id": "7f-0W0cMbS_z" }, "outputs": [], "source": [ "class SignAgent(tf_agent.TFAgent):\n", " def __init__(self):\n", " self._situation = tf.Variable(0, dtype=tf.int32)\n", " policy = TwoWaySignPolicy(self._situation)\n", " time_step_spec = policy.time_step_spec\n", " action_spec = policy.action_spec\n", " super(SignAgent, self).__init__(time_step_spec=time_step_spec,\n", " action_spec=action_spec,\n", " policy=policy,\n", " collect_policy=policy,\n", " train_sequence_length=None)\n", "\n", " def _initialize(self):\n", " return tf.compat.v1.variables_initializer(self.variables)\n", "\n", " def _train(self, experience, weights=None):\n", " observation = experience.observation\n", " action = experience.action\n", " reward = experience.reward\n", "\n", " # We only need to change the value of the situation variable if it is\n", " # unknown (0) right now, and we can infer the situation only if the\n", " # observation is not 0.\n", " needs_action = tf.logical_and(tf.equal(self._situation, 0),\n", " tf.not_equal(reward, 0))\n", "\n", "\n", " def new_situation_fn():\n", " \"\"\"This returns either 1 or 2, depending on the signs.\"\"\"\n", " return (3 - tf.sign(tf.cast(observation[0, 0, 0], dtype=tf.int32) *\n", " tf.cast(action[0, 0], dtype=tf.int32) *\n", " tf.cast(reward[0, 0], dtype=tf.int32))) / 2\n", "\n", " new_situation = tf.cond(needs_action,\n", " new_situation_fn,\n", " lambda: self._situation)\n", " new_situation = tf.cast(new_situation, tf.int32)\n", " tf.compat.v1.assign(self._situation, new_situation)\n", " return tf_agent.LossInfo((), ())\n", "\n", "sign_agent = SignAgent()\n" ] }, { "cell_type": "markdown", "metadata": { "id": "oyclF0ZZpW-f" }, "source": [ "In the above code, the agent defines the policy, and the variable `situation` is shared by the agent and the policy.\n", "\n", "Also, the parameter `experience` of the `_train` function is a trajectory:" ] }, { "cell_type": "markdown", "metadata": { "id": "3NlF228LGoiR" }, "source": [ "# Trajectories" ] }, { "cell_type": "markdown", "metadata": { "id": "2GbBDi1iGsnN" }, "source": [ "In TF-Agents, `trajectories` are named tuples that contain samples from previous steps taken. These samples are then used by the agent to train and update the policy. In RL, trajectories must contain information about the current state, the next state, and whether the current episode has ended. Since in the Bandit world we do not need these things, we set up a helper function to create a trajectory:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:40.497578Z", "iopub.status.busy": "2024-03-09T12:24:40.497359Z", "iopub.status.idle": "2024-03-09T12:24:40.501622Z", "shell.execute_reply": "2024-03-09T12:24:40.501043Z" }, "id": "gdSG1nv-HUJq" }, "outputs": [], "source": [ "# We need to add another dimension here because the agent expects the\n", "# trajectory of shape [batch_size, time, ...], but in this tutorial we assume\n", "# that both batch size and time are 1. 
Hence all the expand_dims.\n", "\n", "def trajectory_for_bandit(initial_step, action_step, final_step):\n", " return trajectory.Trajectory(observation=tf.expand_dims(initial_step.observation, 0),\n", " action=tf.expand_dims(action_step.action, 0),\n", " policy_info=action_step.info,\n", " reward=tf.expand_dims(final_step.reward, 0),\n", " discount=tf.expand_dims(final_step.discount, 0),\n", " step_type=tf.expand_dims(initial_step.step_type, 0),\n", " next_step_type=tf.expand_dims(final_step.step_type, 0))\n" ] }, { "cell_type": "markdown", "metadata": { "id": "zFEJ8kbI_e6Q" }, "source": [ "# Training an Agent" ] }, { "cell_type": "markdown", "metadata": { "id": "0Gh-41og_hDB" }, "source": [ "Now all the pieces are ready for training our bandit agent." ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:40.504513Z", "iopub.status.busy": "2024-03-09T12:24:40.504292Z", "iopub.status.idle": "2024-03-09T12:24:40.931144Z", "shell.execute_reply": "2024-03-09T12:24:40.930371Z" }, "id": "LPx43dZgoyKg" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Trajectory(\n", "{'step_type': ,\n", " 'observation': ,\n", " 'action': ,\n", " 'policy_info': (),\n", " 'next_step_type': ,\n", " 'reward': ,\n", " 'discount': })\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Trajectory(\n", "{'step_type': ,\n", " 'observation': ,\n", " 'action': ,\n", " 'policy_info': (),\n", " 'next_step_type': ,\n", " 'reward': ,\n", " 'discount': })\n", "Trajectory(\n", "{'step_type': ,\n", " 'observation': ,\n", " 'action': ,\n", " 'policy_info': (),\n", " 'next_step_type': ,\n", " 'reward': ,\n", " 'discount': })\n", "Trajectory(\n", "{'step_type': ,\n", " 'observation': ,\n", " 'action': ,\n", " 'policy_info': (),\n", " 'next_step_type': ,\n", " 'reward': ,\n", " 'discount': })\n", "Trajectory(\n", "{'step_type': ,\n", " 'observation': ,\n", " 'action': ,\n", " 'policy_info': (),\n", " 'next_step_type': ,\n", " 'reward': ,\n", " 'discount': })\n", "Trajectory(\n", "{'step_type': ,\n", " 'observation': ,\n", " 'action': ,\n", " 'policy_info': (),\n", " 'next_step_type': ,\n", " 'reward': ,\n", " 'discount': })\n", "Trajectory(\n", "{'step_type': ,\n", " 'observation': ,\n", " 'action': ,\n", " 'policy_info': (),\n", " 'next_step_type': ,\n", " 'reward': ,\n", " 'discount': })\n", "Trajectory(\n", "{'step_type': ,\n", " 'observation': ,\n", " 'action': ,\n", " 'policy_info': (),\n", " 'next_step_type': ,\n", " 'reward': ,\n", " 'discount': })\n", "Trajectory(\n", "{'step_type': ,\n", " 'observation': ,\n", " 'action': ,\n", " 'policy_info': (),\n", " 'next_step_type': ,\n", " 'reward': ,\n", " 'discount': })\n", "Trajectory(\n", "{'step_type': ,\n", " 'observation': ,\n", " 'action': ,\n", " 'policy_info': (),\n", " 'next_step_type': ,\n", " 'reward': ,\n", " 'discount': })\n" ] } ], "source": [ "step = two_way_tf_environment.reset()\n", "for _ in range(10):\n", " action_step = sign_agent.collect_policy.action(step)\n", " next_step = two_way_tf_environment.step(action_step.action)\n", " experience = trajectory_for_bandit(step, action_step, next_step)\n", " print(experience)\n", " sign_agent.train(experience)\n", " step = next_step\n" ] }, { "cell_type": "markdown", "metadata": { "id": "4iVSNiYdy4U4" }, "source": [ "From the output one can see that after the second step (unless the observation was 0 in the first step), the policy chooses the action in the right way and thus the reward collected is always non-negative." 
] }, { "cell_type": "markdown", "metadata": { "id": "RCKyKEjOlOPE" }, "source": [ "# A Real Contextual Bandit Example" ] }, { "cell_type": "markdown", "metadata": { "id": "ecnQwUpmllar" }, "source": [ "In the rest of this tutorial, we use the pre-implemented [environments](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/) and [agents](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/agents/) of the TF-Agents Bandits library." ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:40.934639Z", "iopub.status.busy": "2024-03-09T12:24:40.934387Z", "iopub.status.idle": "2024-03-09T12:24:41.336247Z", "shell.execute_reply": "2024-03-09T12:24:41.335550Z" }, "id": "oEnXUwd-nZKl" }, "outputs": [], "source": [ "# Imports for example.\n", "from tf_agents.bandits.agents import lin_ucb_agent\n", "from tf_agents.bandits.environments import stationary_stochastic_py_environment as sspe\n", "from tf_agents.bandits.metrics import tf_metrics\n", "from tf_agents.drivers import dynamic_step_driver\n", "from tf_agents.replay_buffers import tf_uniform_replay_buffer\n", "\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "markdown", "metadata": { "id": "37oy70dUmmie" }, "source": [ "## Stationary Stochastic Environment with Linear Payoff Functions" ] }, { "cell_type": "markdown", "metadata": { "id": "euPPd8x1m7iG" }, "source": [ "The environment used in this example is the [StationaryStochasticPyEnvironment](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/environments/stationary_stochastic_py_environment.py). This environment takes as parameter a (usually noisy) function for giving observations (context), and for every arm takes an (also noisy) function that computes the reward based on the given observation. In our example, we sample the context uniformly from a d-dimensional cube, and the reward functions are linear functions of the context, plus some Gaussian noise." 
] }, { "cell_type": "code", "execution_count": 18, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:41.340797Z", "iopub.status.busy": "2024-03-09T12:24:41.340075Z", "iopub.status.idle": "2024-03-09T12:24:41.347330Z", "shell.execute_reply": "2024-03-09T12:24:41.346749Z" }, "id": "gVa0hmQrpe6w" }, "outputs": [], "source": [ "batch_size = 2 # @param\n", "arm0_param = [-3, 0, 1, -2] # @param\n", "arm1_param = [1, -2, 3, 0] # @param\n", "arm2_param = [0, 0, 1, 1] # @param\n", "def context_sampling_fn(batch_size):\n", " \"\"\"Contexts from [-10, 10]^4.\"\"\"\n", " def _context_sampling_fn():\n", " return np.random.randint(-10, 10, [batch_size, 4]).astype(np.float32)\n", " return _context_sampling_fn\n", "\n", "class LinearNormalReward(object):\n", " \"\"\"A class that acts as linear reward function when called.\"\"\"\n", " def __init__(self, theta, sigma):\n", " self.theta = theta\n", " self.sigma = sigma\n", " def __call__(self, x):\n", " mu = np.dot(x, self.theta)\n", " return np.random.normal(mu, self.sigma)\n", "\n", "arm0_reward_fn = LinearNormalReward(arm0_param, 1)\n", "arm1_reward_fn = LinearNormalReward(arm1_param, 1)\n", "arm2_reward_fn = LinearNormalReward(arm2_param, 1)\n", "\n", "environment = tf_py_environment.TFPyEnvironment(\n", " sspe.StationaryStochasticPyEnvironment(\n", " context_sampling_fn(batch_size),\n", " [arm0_reward_fn, arm1_reward_fn, arm2_reward_fn],\n", " batch_size=batch_size))\n" ] }, { "cell_type": "markdown", "metadata": { "id": "haID-SPgsLyY" }, "source": [ "## The LinUCB Agent" ] }, { "cell_type": "markdown", "metadata": { "id": "298-1Q0bsQmR" }, "source": [ "The agent below implements the [LinUCB](http://rob.schapire.net/papers/www10.pdf) algorithm." ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:41.350701Z", "iopub.status.busy": "2024-03-09T12:24:41.350146Z", "iopub.status.idle": "2024-03-09T12:24:41.383040Z", "shell.execute_reply": "2024-03-09T12:24:41.382431Z" }, "id": "p4XmGgIusj-K" }, "outputs": [], "source": [ "observation_spec = tensor_spec.TensorSpec([4], tf.float32)\n", "time_step_spec = ts.time_step_spec(observation_spec)\n", "action_spec = tensor_spec.BoundedTensorSpec(\n", " dtype=tf.int32, shape=(), minimum=0, maximum=2)\n", "\n", "agent = lin_ucb_agent.LinearUCBAgent(time_step_spec=time_step_spec,\n", " action_spec=action_spec)" ] }, { "cell_type": "markdown", "metadata": { "id": "Eua_aC7Rt78G" }, "source": [ "## Regret Metric" ] }, { "cell_type": "markdown", "metadata": { "id": "FBJDiJvEt-xC" }, "source": [ "Bandits' most important metric is *regret*, calculated as the difference between the reward collected by the agent and the expected reward of an oracle policy that has access to the reward functions of the environment. The [RegretMetric](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/metrics/tf_metrics.py) thus needs a *baseline_reward_fn* function that calculates the best achievable expected reward given an observation. For our example, we need to take the maximum of the no-noise equivalents of the reward functions that we already defined for the environment." 
] }, { "cell_type": "code", "execution_count": 20, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:41.386381Z", "iopub.status.busy": "2024-03-09T12:24:41.386147Z", "iopub.status.idle": "2024-03-09T12:24:41.391245Z", "shell.execute_reply": "2024-03-09T12:24:41.390620Z" }, "id": "cX7MiFhNu3_L" }, "outputs": [], "source": [ "def compute_optimal_reward(observation):\n", " expected_reward_for_arms = [\n", " tf.linalg.matvec(observation, tf.cast(arm0_param, dtype=tf.float32)),\n", " tf.linalg.matvec(observation, tf.cast(arm1_param, dtype=tf.float32)),\n", " tf.linalg.matvec(observation, tf.cast(arm2_param, dtype=tf.float32))]\n", " optimal_action_reward = tf.reduce_max(expected_reward_for_arms, axis=0)\n", " return optimal_action_reward\n", "\n", "regret_metric = tf_metrics.RegretMetric(compute_optimal_reward)" ] }, { "cell_type": "markdown", "metadata": { "id": "YRWz-Qeb13JC" }, "source": [ "## Training" ] }, { "cell_type": "markdown", "metadata": { "id": "khdKjTs516Pg" }, "source": [ "Now we put together all the components that we introduced above: the environment, the policy, and the agent. We run the policy on the environment and output training data with the help of a *driver*, and train the agent on the data.\n", "\n", "Note that there are two parameters that together specify the number of steps taken. `num_iterations` specifies how many times we run the trainer loop, while the driver will take `steps_per_loop` steps per iteration. The main reason behind keeping both of these parameters is that some operations are done per iteration, while some are done by the driver in every step. For example, the agent's `train` function is only called once per iteration. The trade-off here is that if we train more often then our policy is \"fresher\", on the other hand, training in bigger batches might be more time efficient." 
] }, { "cell_type": "code", "execution_count": 21, "metadata": { "execution": { "iopub.execute_input": "2024-03-09T12:24:41.394662Z", "iopub.status.busy": "2024-03-09T12:24:41.394133Z", "iopub.status.idle": "2024-03-09T12:24:49.534485Z", "shell.execute_reply": "2024-03-09T12:24:49.533825Z" }, "id": "4Ggn45g62DWx" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/tmp/ipykernel_30068/3138849230.py:21: ReplayBuffer.gather_all (from tf_agents.replay_buffers.replay_buffer) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use `as_dataset(..., single_deterministic_pass=True)` instead.\n" ] }, { "data": { "text/plain": [ "Text(0.5, 0, 'Number of Iterations')" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAjIAAAGwCAYAAACzXI8XAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjguMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/H5lhTAAAACXBIWXMAAA9hAAAPYQGoP6dpAABqGUlEQVR4nO3dd3iT5foH8O+b0XS3dLfQ0rL33iBDOCCoDDlOVBzH4wAFEQd6PCoOkONWjvxciIrziIoLZAmyoey9aaGlpXTvjPf3R/K+TdqkzZukJCnfz3X10ma0T0mb3Lmf+74fQRRFEURERER+SOXtBRARERG5ioEMERER+S0GMkREROS3GMgQERGR32IgQ0RERH6LgQwRERH5LQYyRERE5Lc03l5AYzOZTMjKykJYWBgEQfD2coiIiMgJoiiipKQESUlJUKkc512afCCTlZWF5ORkby+DiIiIXJCZmYkWLVo4vL7JBzJhYWEAzP8Q4eHhXl4NEREROaO4uBjJycny67gjTT6QkbaTwsPDGcgQERH5mYbKQljsS0RERH6LgQwRERH5LQYyRERE5LcYyBAREZHfYiBDREREfouBDBEREfktBjJERETktxjIEBERkd9iIENERER+i4EMERER+S0GMkREROS3GMgQERGR32ryh0Y2lqJyPYor9QgP1CIiWOvt5RAREV2RmJFx0bzfD+OqBevw+dYz3l4KERHRFYuBjIs0avOx4nqj6OWVEBERXbkYyLhIozL/0xlMJi+vhIiI6MrFQMZFGpU5I2NgRoaIiMhrGMi4SKOWMjIMZIiIiLzFq4HMvHnz0LdvX4SFhSEuLg4TJ07E0aNHbW5TWVmJadOmITo6GqGhoZg8eTJycnK8tOIaWrWUkeHWEhERkbd4NZBZv349pk2bhq1bt2LVqlXQ6/UYPXo0ysrK5Ns8+uij+Pnnn/Hdd99h/fr1yMrKwg033ODFVZtJNTJ6ZmSIiIi8xqtzZFasWGHz+aeffoq4uDikp6dj6NChKCoqwscff4wvv/wSV199NQBg8eLF6NixI7Zu3YoBAwbU+ZpVVVWoqqqSPy8uLm6UtWuYkSEiIvI6n6qRKSoqAgBERUUBANLT06HX6zFq1Cj5Nh06dEBKSgq2bNli92vMmzcPERER8kdycnKjrLVma4kZGSIiIm/xmUDGZDJh5syZGDx4MLp06QIAuHDhAgICAhAZGWlz2/j4eFy4cMHu15kzZw6Kiorkj8zMzEZZL7eWiIiIvM9njiiYNm0aDhw4gI0bN7r1dXQ6HXQ6nYdW5Zi0tWTkHBkiIiKv8YmMzPTp0/HLL79g3bp1aNGihXx5QkICqqurUVhYaHP7nJwcJCQkXOZV2pIzMtxaIiIi8hqvBjKiKGL69On44YcfsHbtWqSlpdlc37t3b2i1WqxZs0a+7OjRo8jIyMDAgQMv93JtsNiXiIjI+7y6tTRt2jR8+eWX+OmnnxAWFibXvURERCAoKAgRERG49957MWvWLERFRSE8PBwPP/wwBg4caLdj6XKSi31ZI0NEROQ1Xg1k3n//fQDA8OHDbS5fvHgx7rrrLgDAm2++CZVKhcmTJ6OqqgpjxozBf//738u80rpqtpaYkSEiIvIWrwYyothwNiMwMBALFy7EwoULL8OKnMf2ayIiIu/ziWJff6RW8awlIiIib2Mg4yK52Jft10RERF7DQMZFWikjw60lIiIir2Eg4yIpI8NiXyIiIu9hIOMitl8TERF5HwMZF2m4tUREROR1DGRcpFZxa4mIiMjbGMi4SKs2/9MZubVERETkNQxkXMRiXyIiIu9jIOMiLQfiEREReR0DGRdpeEQBERGR1zGQcZG8tcTJvkRERF7DQMZF0taSKLLgl4iIyFsYyLhIbcnIADxviYiIyFsYyLhIysgArJMhIiLyFgYyLtJYZ2QYyBAREXkFAxkXaVQ1gQwLfomIiLyDgYyLBEGQgxlmZIiIiLyDgYwbON2XiIjIuxjIuEE6AZvt10RERN7BQMYN8nRf1sgQERF5BQMZN0gZGT1rZIiIiLyCgYwbtDxviYiIyKsYyLiB5y0RERF5FwMZN0jTfZmRISIi8g4GMm5Qy3NkmJEhIiLyBgYybtCoLRkZtl8TERF5BQMZN2jZfk1ERORVDGTcIB1RwPZrIiIi72Ag4wZ5a4mBDBERkVcwkHEDt5aIiIi8i4GMGzjZl4iIyLsYyLhBqpExMiNDRETkFQxk3CBP9mVGhoiIyCsYyLihpthXWUYmM78ck9/fjN/3ZzfGsoiIiK4YDGTcoJUm+yociLf+2EWkny3A97vON8ayiIiIrhgMZNwgZWSUbi1VGUyW+7G2hoiIyB0MZNwgt18rDEiqDEbz/VgkTERE5BYGMm5Qu7i1VKWXMjIsEiYiInIHAxk3SHNklGZWpK0lnppNRETkHgYybqjZWlJaIyNtLTEjQ0RE5A4GMm5wtdi32sCtJSIiIk9gIOOGmvZrbi0RERF5AwMZN7jbfs2tJSIiIvcwkHGD2sWzlqr05hoZzpEhIiJyDwMZN7he7CttLTEjQ0RE5A4GMm6Q2q/1SufIcCAeERGRRzCQcYPrk33ZtUREROQJDGTc4G77NbuWiIiI3MNAxg0aN9uvlW5JERERkS0GMm7QuDvZlxkZIiIitzCQcYPLZy1ZDo00iYCJWRkiIiKXMZBxg7vt1w
CgZ+cSERGRyxjIuMHd9muAs2SIiIjcwUDGDRoX2q9FUZS7lsz3ZSBDRETkKgYybtBa2q+VBCMGkwjrBA63loiIiFzHQMYNUvu1kmDEuj4GYEaGiIjIHQxk3CBtLRkV1MhIB0ZKeHAkERGR6xjIuEFuv1aQVamTkWH7NRERkcsYyLhBysgoyarU3VpiRoaIiMhVDGTcIBf7KsiqVNcKZHhwJBERkesYyLhBLvZVlJGxrZFROhWYiIiIajCQcYMr7de1t5aYkSEiInIdAxk3qFWudC2xRoaIiMhTGMi4QS72VTRHpvbWEjMyRERErvJqILNhwwZcf/31SEpKgiAI+PHHH22uv+uuuyAIgs3HNddc453F2qG1tF+LovNZmbpbS8zIEBERucqrgUxZWRm6d++OhQsXOrzNNddcg+zsbPnjq6++uowrrJ+UkQGcD0hqdy1xsi8REZHrNN785mPHjsXYsWPrvY1Op0NCQoLTX7OqqgpVVVXy58XFxS6vryFSsS/g/BYRu5aIiIg8x+drZP7880/ExcWhffv2ePDBB3Hp0qV6bz9v3jxERETIH8nJyY22Nqn9GnC+aJddS0RERJ7j04HMNddcg88++wxr1qzBq6++ivXr12Ps2LEwGo0O7zNnzhwUFRXJH5mZmY22PrXKemvJyYxM7a4lZmSIiIhc5tWtpYbccsst8v937doV3bp1Q+vWrfHnn39i5MiRdu+j0+mg0+kuy/oEQYBGJcBgEhUU+9Y6NNLAjAwREZGrfDojU1urVq0QExODEydOeHspMqXnLdXZWmJGhoiIyGV+FcicO3cOly5dQmJioreXIpNasJ0t9mXXEhERked4dWuptLTUJrty+vRp7NmzB1FRUYiKisILL7yAyZMnIyEhASdPnsQTTzyBNm3aYMyYMV5ctS0pI+N6sS8zMkRERK7yaiCzc+dOjBgxQv581qxZAICpU6fi/fffx759+7BkyRIUFhYiKSkJo0ePxosvvnjZamCcobG0YDtd7MvJvkRERB7j1UBm+PDhEEXHL+QrV668jKtxjdbSueRs9xHPWiIiIvIcv6qR8UVqaWvJ5SMKmJEhIiJyFQMZN8nFvgq3lgK1UpEwMzJERESuYiDjJleLfUN1Gsv9mJEhIiJyFQMZN2ksGRm9wq2lEEsgw60lIiIi1zGQcZPWxYxMSIAlI8OtJSIiIpcxkHGT4vZrvblGJkSnVnQ/IiIiqouBjJukgyOdPWuputbWEtuviYiIXMdAxk3y1pKzc2RqBzIciEdEROQyBjJukot9nW6/tnQtBUjFvszIEBERuYqBjJuUF/tKNTJsvyYiInIXAxk3udp+HWop9mXXEhERkesYyLhJyUA8URTlYt9gzpEhIiJyGwMZN2kUdC1Zn7NUU+zLjAwREZGrGMi4SckcGetAJpRzZIiIiNzGQMZNSop9pW0lQQCCtJwjQ0RE5C4GMm5SUuwrdSzpNCoEaKT5M8zIEBERuYqBjJuUFPtKW0sBapXi+TNERERUFwMZN2ktNTLOZFaq9OZARqdVKwqAiIiIyD4GMm6SzlpyZkKv9daSkgCIiIiI7GMg4yatC+3XOo1KbtvmEQVERESuYyDjJlfar3UadU1GhjUyRERELmMg4yYltS5S+7VOq6q5HwfiERERuYyBjJu0KgXFvpYaGXYtEREReQYDGTdJmRWnin2tupaUnppNREREdTGQcZNGQa2LTbGvWtmp2URERFSX4kAmIyMDolj3xVcURWRkZHhkUf5E6j5SsrWk06jkbidmZIiIiFynOJBJS0vDxYsX61yen5+PtLQ0jyzKn9QEMs5P9tVp1HJGxiQCJmZliIiIXKI4kBFFEYIg1Lm8tLQUgYGBHlmUP1HSRm2vawkA9OxcIiIiconG2RvOmjULACAIAp599lkEBwfL1xmNRmzbtg09evTw+AJ9naJiX6uuJanbCTAHQTqnHwkiIiKSOP3yuXv3bgDmjMz+/fsREBAgXxcQEIDu3btj9uzZnl+hj9Moab/W28/IcCgeERGRa5wOZNatWwcAuPvuu/H2228jPDy80RblT5S0UdvUyKi4tUREROQuxTUyixcvRnh4OE6cOIGVK1eioqICAOx2Ml0J1C52LQmCUFMozIwMERGRSxQHMvn5+Rg5ciTatWuHcePGITs7GwBw77334rHHHvP4An2dkmJf6zkygLL6GiIiIqpLcSAzc+ZMaLVaZGRk2BT83nzzzVixYoVHF+cP5FOsndgequlaUgNQdrwBERER1aW4V+aPP/7AypUr0aJFC5vL27Zti7Nnz3psYf7Cpcm+atuMDIfiERERuUZxRqasrMwmEyPJz8+HTqfzyKL8ibJiX0uNjFYKZHhwJBERkTsUBzJXXXUVPvvsM/lzQRBgMpmwYMECjBgxwqOL8wfyKdZK2q8tNTJaBVOBiYiIqC7FW0sLFizAyJEjsXPnTlRXV+OJJ57AwYMHkZ+fj02bNjXGGn2aku0h6/Zr832ZkSEiInKH4oxMly5dcOzYMQwZMgQTJkxAWVkZbrjhBuzevRutW7dujDX6NFcPjQRYI0NEROQuRRkZvV6Pa665BosWLcIzzzzTWGvyK66etQSwa4mIiMhdijIyWq0W+/bta6y1+CU5q6Lg9OsAtbS1xDkyRERE7lC8tXT77bfj448/boy1+CW52NcoNjjduKpWRoY1MkRERO5RXOxrMBjwySefYPXq1ejduzdCQkJsrn/jjTc8tjh/oLU6/NFoEm0Og6ytSm9bIyN3LTEjQ0RE5BLFgcyBAwfQq1cvAMCxY8dsrhMExy/iTZWUVQHMtS6WhiS76nYtSVOBmZEhIiJyheJARjoFm8ysT7Gur2jXaBLl6+WMjFwozIwMERGRKxTXyJAtm0CmnoBE6lgCrGpkePo1ERGRWxRnZCZNmmR3C0kQBAQGBqJNmza47bbb0L59e48s0NeprQKZ+op2pRkyABCgrlXsy8m+RERELlGckYmIiMDatWuxa9cuCIIAQRCwe/durF27FgaDAd988w26d+9+xUz5FQSh5rylegISqT5GrRLkAKbmnCZmZIiIiFyhOCOTkJCA2267De+99x5UltZjk8mEGTNmICwsDF9//TUeeOABPPnkk9i4caPHF+yLNCoV9EZjvQFJ7XOWpPsBnCNDRETkKsUZmY8//hgzZ86UgxgAUKlUePjhh/HBBx9AEARMnz4dBw4c8OhCfZkzg+1qH09gfT9O9iUiInKN4kDGYDDgyJEjdS4/cuQIjEbzi3VgYOAV1YotFe0a6wlIardeA1ZHFDAjQ0RE5BLFW0t33HEH7r33Xjz99NPo27cvAGDHjh145ZVXcOeddwIA1q9fj86dO3t2pT7MmQm9taf6mu8nZXKYkSEiInKF4kDmzTffRHx8PBYsWICcnBwAQHx8PB599FE8+eSTAIDRo0fjmmuu8exKfZg8obfeYl9ztirAaoCePEeGXUtEREQuURzIqNVqPPPMM3jmmWdQXFwMAAgPD7e5TUpKimdW5ydczshwjgwREZFbXBqIZzAYsHr1anz11
<remainder of base64-encoded PNG plot data elided: regret curve, 'Average Regret' vs. 'Number of Iterations'>", "text/plain": [ "<Figure size 640x480 with 1 Axes>
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "num_iterations = 90 # @param\n", "steps_per_loop = 1 # @param\n", "\n", "replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(\n", " data_spec=agent.policy.trajectory_spec,\n", " batch_size=batch_size,\n", " max_length=steps_per_loop)\n", "\n", "observers = [replay_buffer.add_batch, regret_metric]\n", "\n", "driver = dynamic_step_driver.DynamicStepDriver(\n", " env=environment,\n", " policy=agent.collect_policy,\n", " num_steps=steps_per_loop * batch_size,\n", " observers=observers)\n", "\n", "regret_values = []\n", "\n", "for _ in range(num_iterations):\n", " driver.run()\n", " loss_info = agent.train(replay_buffer.gather_all())\n", " replay_buffer.clear()\n", " regret_values.append(regret_metric.result())\n", "\n", "plt.plot(regret_values)\n", "plt.ylabel('Average Regret')\n", "plt.xlabel('Number of Iterations')" ] }, { "cell_type": "markdown", "metadata": { "id": "J2diHS5IzLuo" }, "source": [ "After running the last code snippet, the resulting plot (hopefully) shows that the average regret is going down as the agent is trained and the policy gets better in figuring out what the right action is, given the observation." ] }, { "cell_type": "markdown", "metadata": { "id": "2qLMnOL00-2V" }, "source": [ "# What's Next?" ] }, { "cell_type": "markdown", "metadata": { "id": "FOiRWZbf1Drs" }, "source": [ "To see more working examples, please see the [bandits/agents/examples](https://github.com/tensorflow/agents/tree/master/tf_agents/bandits/agents/examples/v2) that has ready-to-run examples for different agents and environments.\n", "\n", "The TF-Agents library is also capable of handling Multi-Armed Bandits with per-arm features. To that end, we refer the reader to the per-arm bandit [tutorial](https://github.com/tensorflow/agents/tree/master/docs/tutorials/per_arm_bandits_tutorial.ipynb)." ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "bandits_tutorial.ipynb", "private_outputs": true, "provenance": [], "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.18" } }, "nbformat": 4, "nbformat_minor": 0 }