{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "1Z6Wtb_jisbA" }, "source": [ "##### Copyright 2019 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "QUyRGn9riopB" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "H1yCdGFW4j_F" }, "source": [ "# Premade Estimators" ] }, { "cell_type": "markdown", "metadata": { "id": "PS6_yKSoyLAl" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
" ] }, { "cell_type": "markdown", "metadata": { "id": "stQiPWL6ni6_" }, "source": [ "> Warning: TensorFlow 2.15 included the final release of the `tf-estimator` package. Estimators will not be available in TensorFlow 2.16 or after. See the [migration guide](https://tensorflow.org/guide/migrate/migrating_estimator) for more information about how to convert off of Estimators." ] }, { "cell_type": "markdown", "metadata": { "id": "R4YZ_ievcY7p" }, "source": [ "This tutorial shows you\n", "how to solve the Iris classification problem in TensorFlow using Estimators. An Estimator is a legacy TensorFlow high-level representation of a complete model. For more details see\n", "[Estimators](https://www.tensorflow.org/guide/estimator).\n", "\n", "Note: In TensorFlow 2.0, the [Keras API](https://www.tensorflow.org/guide/keras) can accomplish these same tasks, and is believed to be an easier API to learn. If you are starting fresh, it is recommended you start with Keras.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "8IFct0yedsTy" }, "source": [ "## First things first\n", "\n", "In order to get started, you will first import TensorFlow and a number of libraries you will need." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "jPo5bQwndr9P" }, "outputs": [], "source": [ "import tensorflow as tf\n", "\n", "import pandas as pd" ] }, { "cell_type": "markdown", "metadata": { "id": "c5w4m5gncnGh" }, "source": [ "## The data set\n", "\n", "The sample program in this document builds and tests a model that\n", "classifies Iris flowers into three different species based on the size of their\n", "[sepals](https://en.wikipedia.org/wiki/Sepal) and\n", "[petals](https://en.wikipedia.org/wiki/Petal).\n", "\n", "\n", "You will train a model using the Iris data set. The Iris data set contains four features and one\n", "[label](https://developers.google.com/machine-learning/glossary/#label).\n", "The four features identify the following botanical characteristics of\n", "individual Iris flowers:\n", "\n", "* sepal length\n", "* sepal width\n", "* petal length\n", "* petal width\n", "\n", "Based on this information, you can define a few helpful constants for parsing the data:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "lSyrXp_He_UE" }, "outputs": [], "source": [ "CSV_COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']\n", "SPECIES = ['Setosa', 'Versicolor', 'Virginica']" ] }, { "cell_type": "markdown", "metadata": { "id": "j6mTfIQzfC9w" }, "source": [ "Next, download and parse the Iris data set using Keras and Pandas. Note that you keep distinct datasets for training and testing." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "PumyCN8VdGGc" }, "outputs": [], "source": [ "train_path = tf.keras.utils.get_file(\n", " \"iris_training.csv\", \"https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv\")\n", "test_path = tf.keras.utils.get_file(\n", " \"iris_test.csv\", \"https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv\")\n", "\n", "train = pd.read_csv(train_path, names=CSV_COLUMN_NAMES, header=0)\n", "test = pd.read_csv(test_path, names=CSV_COLUMN_NAMES, header=0)" ] }, { "cell_type": "markdown", "metadata": { "id": "wHFxNLszhQjz" }, "source": [ "You can inspect your data to see that you have four float feature columns and one int32 label." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "WOJt-ML4hAwI" }, "outputs": [], "source": [ "train.head()" ] }, { "cell_type": "markdown", "metadata": { "id": "jQJEYfVvfznP" }, "source": [ "For each of the datasets, split out the labels, which the model will be trained to predict." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zM0wz2TueuA6" }, "outputs": [], "source": [ "train_y = train.pop('Species')\n", "test_y = test.pop('Species')\n", "\n", "# The label column has now been removed from the features.\n", "train.head()" ] }, { "cell_type": "markdown", "metadata": { "id": "jZx1L_1Vcmxv" }, "source": [ "## Overview of programming with Estimators\n", "\n", "Now that you have the data set up, you can define a model using a TensorFlow Estimator. An Estimator is any class derived from `tf.estimator.Estimator`. TensorFlow\n", "provides a collection of pre-made Estimators in the `tf.estimator` module\n", "(for example, `tf.estimator.LinearRegressor`) to implement common ML algorithms. Beyond\n", "those, you may write your own\n", "[custom Estimators](https://www.tensorflow.org/guide/estimator#custom_estimators).\n", "It is recommended to use pre-made Estimators when just getting started.\n", "\n", "To write a TensorFlow program based on pre-made Estimators, you must perform the\n", "following tasks:\n", "\n", "* Create one or more input functions.\n", "* Define the model's feature columns.\n", "* Instantiate an Estimator, specifying the feature columns and various\n", " hyperparameters.\n", "* Call one or more methods on the Estimator object, passing the appropriate\n", " input function as the source of the data.\n", "\n", "Let's see how those tasks are implemented for Iris classification." ] }, { "cell_type": "markdown", "metadata": { "id": "2OcguDfBcmmg" }, "source": [ "## Create input functions\n", "\n", "You must create input functions to supply data for training,\n", "evaluating, and prediction.\n", "\n", "An **input function** is a function that returns a `tf.data.Dataset` object\n", "which outputs the following two-element tuple:\n", "\n", "* [`features`](https://developers.google.com/machine-learning/glossary/#feature) - A Python dictionary in which:\n", " * Each key is the name of a feature.\n", " * Each value is an array containing all of that feature's values.\n", "* `label` - An array containing the values of the\n", " [label](https://developers.google.com/machine-learning/glossary/#label) for\n", " every example.\n", "\n", "Just to demonstrate the format of the input function, here's a simple\n", "implementation:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "nzr5vRr5caGF" }, "outputs": [], "source": [ "# NumPy is used for the arrays below; it was not imported earlier.\n", "import numpy as np\n", "\n", "def input_evaluation_set():\n", " features = {'SepalLength': np.array([6.4, 5.0]),\n", " 'SepalWidth': np.array([2.8, 2.3]),\n", " 'PetalLength': np.array([5.6, 3.3]),\n", " 'PetalWidth': np.array([2.2, 1.0])}\n", " labels = np.array([2, 1])\n", " return features, labels" ] },
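{ "cell_type": "markdown", "metadata": {}, "source": [ "To check that this pair really has the format described above, here is a minimal sketch (the `demo_ds` name is illustrative, not part of the original tutorial) that wraps the in-memory arrays in a `tf.data.Dataset` and pulls out a single batch:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Build a Dataset from the in-memory (features, labels) pair above.\n", "features, labels = input_evaluation_set()\n", "demo_ds = tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)\n", "\n", "# Take one batch; each feature becomes a batched tensor keyed by name.\n", "for feature_batch, label_batch in demo_ds.take(1):\n", " print(feature_batch['SepalLength'])\n", " print(label_batch)" ] },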
{ "cell_type": "markdown", "metadata": { "id": "NpXvGjfnjHgY" }, "source": [ "Your input function may generate the `features` dictionary and `label` list any\n", "way you like. However, it is recommended that you use TensorFlow's [Dataset API](https://www.tensorflow.org/guide/datasets), which can\n", "parse all sorts of data.\n", "\n", "The Dataset API can handle a lot of common cases for you. For example,\n", "using the Dataset API, you can easily read in records from a large collection\n", "of files in parallel and join them into a single stream.\n", "\n", "To keep things simple in this example, you are going to load the data with\n", "[pandas](https://pandas.pydata.org/) and build an input pipeline from this\n", "in-memory data:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "T20u1anCi8NP" }, "outputs": [], "source": [ "def input_fn(features, labels, training=True, batch_size=256):\n", " \"\"\"An input function for training or evaluating.\"\"\"\n", " # Convert the inputs to a Dataset.\n", " dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))\n", "\n", " # Shuffle and repeat if you are in training mode.\n", " if training:\n", " dataset = dataset.shuffle(1000).repeat()\n", "\n", " return dataset.batch(batch_size)\n" ] }, { "cell_type": "markdown", "metadata": { "id": "xIwcFT4MlZEi" }, "source": [ "## Define the feature columns\n", "\n", "A [**feature column**](https://developers.google.com/machine-learning/glossary/#feature_columns)\n", "is an object describing how the model should use raw input data from the\n", "features dictionary. When you build an Estimator model, you pass it a list of\n", "feature columns that describes each of the features you want the model to use.\n", "The `tf.feature_column` module provides many options for representing data\n", "to the model.\n", "\n", "For Iris, the four raw features are numeric values, so you'll build a list of\n", "feature columns to tell the Estimator model to represent each of the four\n", "features as 32-bit floating-point values. Therefore, the code to create the\n", "feature columns is:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ZTTriO8FlSML" }, "outputs": [], "source": [ "# Feature columns describe how to use the input.\n", "my_feature_columns = []\n", "for key in train.keys():\n", " my_feature_columns.append(tf.feature_column.numeric_column(key=key))" ] }, { "cell_type": "markdown", "metadata": { "id": "jpKkhMoZljco" }, "source": [ "Feature columns can be far more sophisticated than those shown here. You can read more about feature columns in [this guide](https://www.tensorflow.org/guide/feature_columns).\n", "\n", "Now that you have the description of how you want the model to represent the raw\n", "features, you can build the estimator." ] },
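{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick illustration of a more sophisticated option, the sketch below builds a hypothetical bucketized version of `SepalLength`. The boundary values are arbitrary, and this column is not used by the model in this tutorial:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# A bucketized column turns a numeric feature into a categorical one\n", "# by splitting its range at the given boundaries (illustration only).\n", "sepal_length_numeric = tf.feature_column.numeric_column('SepalLength')\n", "sepal_length_buckets = tf.feature_column.bucketized_column(\n", " sepal_length_numeric, boundaries=[5.0, 6.0, 7.0])\n", "print(sepal_length_buckets)" ] },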
{ "cell_type": "markdown", "metadata": { "id": "kuE59XHEl22K" }, "source": [ "## Instantiate an estimator\n", "\n", "The Iris problem is a classic classification problem. Fortunately, TensorFlow\n", "provides several pre-made classifier Estimators, including:\n", "\n", "* `tf.estimator.DNNClassifier` for deep models that perform multi-class\n", " classification.\n", "* `tf.estimator.DNNLinearCombinedClassifier` for wide & deep models.\n", "* `tf.estimator.LinearClassifier` for classifiers based on linear models.\n", "\n", "For the Iris problem, `tf.estimator.DNNClassifier` seems like the best choice.\n", "Here's how you instantiate this Estimator:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "qnf4o2V5lcPn" }, "outputs": [], "source": [ "# Build a DNN with 2 hidden layers with 30 and 10 hidden nodes each.\n", "classifier = tf.estimator.DNNClassifier(\n", " feature_columns=my_feature_columns,\n", " # Two hidden layers of 30 and 10 nodes respectively.\n", " hidden_units=[30, 10],\n", " # The model must choose between 3 classes.\n", " n_classes=3)" ] }, { "cell_type": "markdown", "metadata": { "id": "tzzt5nUpmEe3" }, "source": [ "## Train, Evaluate, and Predict\n", "\n", "Now that you have an Estimator object, you can call methods to do the following:\n", "\n", "* Train the model.\n", "* Evaluate the trained model.\n", "* Use the trained model to make predictions." ] }, { "cell_type": "markdown", "metadata": { "id": "rnihuLdWmE75" }, "source": [ "### Train the model\n", "\n", "Train the model by calling the Estimator's `train` method as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "4jW08YtPl1iS" }, "outputs": [], "source": [ "# Train the Model.\n", "classifier.train(\n", " input_fn=lambda: input_fn(train, train_y, training=True),\n", " steps=5000)" ] }, { "cell_type": "markdown", "metadata": { "id": "ybiTFDmlmes8" }, "source": [ "Note that you wrap up your `input_fn` call in a\n", "[`lambda`](https://docs.python.org/3/tutorial/controlflow.html)\n", "to capture the arguments while providing an input function that takes no\n", "arguments, as expected by the Estimator. The `steps` argument tells the method\n", "to stop training after the specified number of training steps.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "HNvJLH8hmsdf" }, "source": [ "### Evaluate the trained model\n", "\n", "Now that the model has been trained, you can get some statistics on its\n", "performance. The following code block evaluates the accuracy of the trained\n", "model on the test data:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "A169XuO4mKxF" }, "outputs": [], "source": [ "eval_result = classifier.evaluate(\n", " input_fn=lambda: input_fn(test, test_y, training=False))\n", "\n", "print('\\nTest set accuracy: {accuracy:0.3f}\\n'.format(**eval_result))" ] }, { "cell_type": "markdown", "metadata": { "id": "VnPMP5EHph17" }, "source": [ "Unlike the call to the `train` method, you did not pass the `steps`\n", "argument to `evaluate`. The `input_fn` for evaluation only yields a single\n", "[epoch](https://developers.google.com/machine-learning/glossary/#epoch) of data.\n", "\n", "The `eval_result` dictionary also contains the `average_loss` (mean loss per sample), the `loss` (mean loss per mini-batch), and the value of the estimator's `global_step` (the number of training iterations it underwent).\n" ] },
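{ "cell_type": "markdown", "metadata": {}, "source": [ "To see everything the `evaluate` call returned, you can print the whole dictionary. This is a minimal sketch; the exact set of keys can vary across TensorFlow versions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Print every metric returned by the evaluate call.\n", "for key, value in eval_result.items():\n", " print('{}: {}'.format(key, value))" ] },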
{ "cell_type": "markdown", "metadata": { "id": "ur624ibpp52X" }, "source": [ "### Making predictions (inferring) from the trained model\n", "\n", "You now have a trained model that produces good evaluation results.\n", "You can use this model to predict the species of an Iris flower\n", "based on some unlabeled measurements. As with training and evaluation, you make\n", "predictions using a single function call:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "wltc0jpgng38" }, "outputs": [], "source": [ "# Generate predictions from the model.\n", "expected = ['Setosa', 'Versicolor', 'Virginica']\n", "predict_x = {\n", " 'SepalLength': [5.1, 5.9, 6.9],\n", " 'SepalWidth': [3.3, 3.0, 3.1],\n", " 'PetalLength': [1.7, 4.2, 5.4],\n", " 'PetalWidth': [0.5, 1.5, 2.1],\n", "}\n", "\n", "def input_fn(features, batch_size=256):\n", " \"\"\"An input function for prediction.\"\"\"\n", " # Convert the inputs to a Dataset without labels.\n", " return tf.data.Dataset.from_tensor_slices(dict(features)).batch(batch_size)\n", "\n", "predictions = classifier.predict(\n", " input_fn=lambda: input_fn(predict_x))" ] }, { "cell_type": "markdown", "metadata": { "id": "JsETKQo0rHvi" }, "source": [ "The `predict` method returns a Python iterable, yielding a dictionary of\n", "prediction results for each example. The following code prints a few\n", "predictions and their probabilities:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Efm4mLzkrCxp" }, "outputs": [], "source": [ "for pred_dict, expec in zip(predictions, expected):\n", " class_id = pred_dict['class_ids'][0]\n", " probability = pred_dict['probabilities'][class_id]\n", "\n", " print('Prediction is \"{}\" ({:.1f}%), expected \"{}\"'.format(\n", " SPECIES[class_id], 100 * probability, expec))" ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "premade.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }