{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "wJcYs_ERTnnI"
},
"source": [
"##### Copyright 2021 The TensorFlow Authors."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "HMUDt0CiUJk9"
},
"outputs": [],
"source": [
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "77z2OchJTk0l"
},
"source": [
"# Migrate metrics and optimizers"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "meUTrR4I6m1C"
},
"source": [
"In TF1, `tf.metrics` is the API namespace for all the metric functions. Each metric is a function that takes `labels` and `predictions` as input parameters and returns the corresponding metric tensor as the result. In TF2, `tf.keras.metrics` contains all the metric functions and objects. A `Metric` object can be used with `tf.keras.Model` and `tf.keras.layers.Layer` to calculate metric values."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YdZSoIXEbhg-"
},
"source": [
"## Setup\n",
"\n",
"Let's start with a couple of necessary TensorFlow imports,"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "iE0vSfMXumKI"
},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"import tensorflow.compat.v1 as tf1"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Jsm9Rxx7s1OZ"
},
"source": [
"and prepare some simple data for demonstration:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "m7rnGxsXtDkV"
},
"outputs": [],
"source": [
"features = [[1., 1.5], [2., 2.5], [3., 3.5]]\n",
"labels = [0, 0, 1]\n",
"eval_features = [[4., 4.5], [5., 5.5], [6., 6.5]]\n",
"eval_labels = [0, 1, 1]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xswk0d4xrFaQ"
},
"source": [
"## TF1: tf.compat.v1.metrics with Estimator\n",
"\n",
"In TF1, metrics can be added to `EstimatorSpec` as `eval_metric_ops`, and the op is generated via the metric functions defined in `tf.metrics`. The example below shows how to use `tf.metrics.accuracy`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "lqe9obf7suIj"
},
"outputs": [],
"source": [
"def _input_fn():\n",
" return tf1.data.Dataset.from_tensor_slices((features, labels)).batch(1)\n",
"\n",
"def _eval_input_fn():\n",
" return tf1.data.Dataset.from_tensor_slices(\n",
" (eval_features, eval_labels)).batch(1)\n",
"\n",
"def _model_fn(features, labels, mode):\n",
" logits = tf1.layers.Dense(2)(features)\n",
" predictions = tf.math.argmax(input=logits, axis=1)\n",
"  loss = tf1.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)\n",
" optimizer = tf1.train.AdagradOptimizer(0.05)\n",
" train_op = optimizer.minimize(loss, global_step=tf1.train.get_global_step())\n",
" accuracy = tf1.metrics.accuracy(labels=labels, predictions=predictions)\n",
" return tf1.estimator.EstimatorSpec(mode, \n",
" predictions=predictions,\n",
" loss=loss, \n",
" train_op=train_op,\n",
" eval_metric_ops={'accuracy': accuracy})\n",
"\n",
"estimator = tf1.estimator.Estimator(model_fn=_model_fn)\n",
"estimator.train(_input_fn)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "HsOpjW5plH9Q"
},
"outputs": [],
"source": [
"estimator.evaluate(_eval_input_fn)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Wk4C6qA_OaQx"
},
"source": [
"Metrics can also be added to the estimator directly via `tf1.estimator.add_metrics()`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "B2lpLOh9Owma"
},
"outputs": [],
"source": [
"def mean_squared_error(labels, predictions):\n",
" labels = tf.cast(labels, predictions.dtype)\n",
" return {\"mean_squared_error\": \n",
" tf1.metrics.mean_squared_error(labels=labels, predictions=predictions)}\n",
"\n",
"estimator = tf1.estimator.add_metrics(estimator, mean_squared_error)\n",
"estimator.evaluate(_eval_input_fn)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KEmzBjfnsxwT"
},
"source": [
"## TF2: Keras Metrics API with tf.keras.Model\n",
"\n",
"In TF2, `tf.keras.metrics` contains all the metric classes and functions. They are designed in an OOP style and integrate closely with the rest of the `tf.keras` API. All the metrics can be found in the `tf.keras.metrics` namespace, and there is usually a direct mapping between `tf.compat.v1.metrics` and `tf.keras.metrics`.\n",
"\n",
"In the following example, the metrics are added in the `model.compile()` method. Users only need to create the metric instance, without specifying the label and prediction tensors. The Keras model will route the model output and label to the metric object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "atVciNgPs0fw"
},
"outputs": [],
"source": [
"dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(1)\n",
"eval_dataset = tf.data.Dataset.from_tensor_slices(\n",
" (eval_features, eval_labels)).batch(1)\n",
"\n",
"inputs = tf.keras.Input((2,))\n",
"logits = tf.keras.layers.Dense(2)(inputs)\n",
"predictions = tf.math.argmax(input=logits, axis=1)\n",
"model = tf.keras.models.Model(inputs, predictions)\n",
"optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05)\n",
"\n",
"model.compile(optimizer, loss='mse', metrics=[tf.keras.metrics.Accuracy()])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Kip65sYBlKiu"
},
"outputs": [],
"source": [
"model.evaluate(eval_dataset, return_dict=True)"
]
},
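{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you need extra metrics beyond what the model was compiled with (similar to `tf1.estimator.add_metrics` in the TF1 example above), you can pass several metric objects in the `metrics` list of `model.compile`. The following is a minimal sketch that recompiles the model defined above with an additional mean squared error metric:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Recompile with an extra metric; each entry in `metrics` is tracked\n",
"# and reported separately during `evaluate` and `fit`.\n",
"model.compile(optimizer,\n",
"              loss='mse',\n",
"              metrics=[tf.keras.metrics.Accuracy(),\n",
"                       tf.keras.metrics.MeanSquaredError()])\n",
"model.evaluate(eval_dataset, return_dict=True)"
]
},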
{
"cell_type": "markdown",
"metadata": {
"id": "_mcGoCm_X1V0"
},
"source": [
"With eager execution enabled, `tf.keras.metrics.Metric` instances can be used directly to evaluate NumPy data or eager tensors. `tf.keras.metrics.Metric` objects are stateful containers. The metric value can be updated via `metric.update_state(y_true, y_pred)`, and the result can be retrieved with `metric.result()`.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "TVGn5_IhYhtG"
},
"outputs": [],
"source": [
"accuracy = tf.keras.metrics.Accuracy()\n",
"\n",
"accuracy.update_state(y_true=[0, 0, 1, 1], y_pred=[0, 0, 0, 1])\n",
"accuracy.result().numpy()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "wQEV2hHtY_su"
},
"outputs": [],
"source": [
"accuracy.update_state(y_true=[0, 0, 1, 1], y_pred=[0, 0, 0, 0])\n",
"accuracy.update_state(y_true=[0, 0, 1, 1], y_pred=[1, 1, 0, 0])\n",
"accuracy.result().numpy()"
]
},
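{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because the metric object accumulates state across `update_state` calls, clear it with `metric.reset_state()` (named `reset_states` in TensorFlow versions before 2.5) when you want to start a fresh evaluation:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Clear the accumulated state, then compute accuracy on a new batch only.\n",
"accuracy.reset_state()\n",
"accuracy.update_state(y_true=[0, 0, 1, 1], y_pred=[0, 0, 1, 1])\n",
"accuracy.result().numpy()"
]
},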
{
"cell_type": "markdown",
"metadata": {
"id": "E3F3ElcyadW-"
},
"source": [
"For more details about `tf.keras.metrics.Metric`, check out the API documentation for `tf.keras.metrics.Metric`, as well as the [migration guide](https://www.tensorflow.org/guide/effective_tf2#new-style_metrics_and_losses)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "eXKY9HEulxQC"
},
"source": [
"## Migrate TF1.x optimizers to Keras optimizers\n",
"\n",
"The optimizers in `tf.compat.v1.train`, such as the\n",
"[Adam optimizer](https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/AdamOptimizer)\n",
"and the\n",
"[gradient descent optimizer](https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/GradientDescentOptimizer),\n",
"have equivalents in `tf.keras.optimizers`.\n",
"\n",
"The table below summarizes how you can convert these legacy optimizers to their\n",
"Keras equivalents. You can directly replace the TF1.x version with the TF2\n",
"version unless additional steps (such as\n",
"[updating the default learning rate](../../guide/effective_tf2.ipynb#optimizer_defaults))\n",
"are required.\n",
"\n",
"Note that converting your optimizers\n",
"[may make old checkpoints incompatible](./migrating_checkpoints.ipynb).\n",
"\n",
"TF1.x | TF2 | Additional steps\n",
":--- | :--- | :---\n",
"`tf.v1.train.GradientDescentOptimizer` | `tf.keras.optimizers.SGD` | None\n",
"`tf.v1.train.MomentumOptimizer` | `tf.keras.optimizers.SGD` | Include the `momentum` argument\n",
"`tf.v1.train.AdamOptimizer` | `tf.keras.optimizers.Adam` | Rename the `beta1` and `beta2` arguments to `beta_1` and `beta_2`\n",
"`tf.v1.train.RMSPropOptimizer` | `tf.keras.optimizers.RMSprop` | Rename the `decay` argument to `rho`\n",
"`tf.v1.train.AdadeltaOptimizer` | `tf.keras.optimizers.Adadelta` | None\n",
"`tf.v1.train.AdagradOptimizer` | `tf.keras.optimizers.Adagrad` | None\n",
"`tf.v1.train.FtrlOptimizer` | `tf.keras.optimizers.Ftrl` | Remove the `accum_name` and `linear_name` arguments\n",
"`tf.contrib.AdamaxOptimizer` | `tf.keras.optimizers.Adamax` | Rename the `beta1` and `beta2` arguments to `beta_1` and `beta_2`\n",
"`tf.contrib.Nadam` | `tf.keras.optimizers.Nadam` | Rename the `beta1` and `beta2` arguments to `beta_1` and `beta_2`\n",
"\n",
"Note: In TF2, all epsilons (numerical stability constants) now default to `1e-7`\n",
"instead of `1e-8`. This difference is negligible in most use cases."
]
}
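,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of the conversions in the table above, the following replaces a TF1.x momentum optimizer with its Keras equivalent, and explicitly passes `epsilon=1e-8` to Adam for cases where exact parity with the TF1.x default is required:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# TF1.x:\n",
"# optimizer = tf1.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)\n",
"\n",
"# TF2 equivalent: SGD with the `momentum` argument.\n",
"optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)\n",
"\n",
"# To reproduce the TF1.x epsilon default exactly, pass it explicitly.\n",
"adam = tf.keras.optimizers.Adam(learning_rate=0.001, epsilon=1e-8)"
]
}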
],
"metadata": {
"colab": {
"collapsed_sections": [],
"name": "metrics_optimizers.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}