{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "zwBCE43Cv3PH" }, "source": [ "##### Copyright 2019 The TensorFlow Authors.\n", "\n", "Licensed under the Apache License, Version 2.0 (the \"License\");" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "fOad0I2cv569" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "YQB7yiF6v9GR" }, "source": [ "# Load a pandas DataFrame" ] }, { "cell_type": "markdown", "metadata": { "id": "Oqa952X4wQKK" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " View on TensorFlow.org\n", " \n", " Run in Google Colab\n", " \n", " View source on GitHub\n", " \n", " Download notebook\n", "
" ] }, { "cell_type": "markdown", "metadata": { "id": "UmyEaf4Awl2v" }, "source": [ "This tutorial provides examples of how to load pandas DataFrames into TensorFlow.\n", "\n", "You will use a small heart disease dataset provided by the UCI Machine Learning Repository. There are several hundred rows in the CSV. Each row describes a patient, and each column describes an attribute. You will use this information to predict whether a patient has heart disease, which is a binary classification task." ] }, { "cell_type": "markdown", "metadata": { "id": "iiyC7HkqxlUD" }, "source": [ "## Read data using pandas" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "5IoRbCA2n0_V" }, "outputs": [], "source": [ "import pandas as pd\n", "import tensorflow as tf\n", "\n", "SHUFFLE_BUFFER = 500\n", "BATCH_SIZE = 2" ] }, { "cell_type": "markdown", "metadata": { "id": "-2kBGy_pxn47" }, "source": [ "Download the CSV file containing the heart disease dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "VS4w2LePn9g3" }, "outputs": [], "source": [ "csv_file = tf.keras.utils.get_file('heart.csv', 'https://storage.googleapis.com/download.tensorflow.org/data/heart.csv')" ] }, { "cell_type": "markdown", "metadata": { "id": "6BXRPD2-xtQ1" }, "source": [ "Read the CSV file using pandas:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "UEfJ8TcMpe-2" }, "outputs": [], "source": [ "df = pd.read_csv(csv_file)" ] }, { "cell_type": "markdown", "metadata": { "id": "4K873P-Pp8c7" }, "source": [ "This is what the data looks like:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "8FkK6QIRpjd4" }, "outputs": [], "source": [ "df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "_MOAKz654CT5" }, "outputs": [], "source": [ "df.dtypes" ] }, { "cell_type": "markdown", "metadata": { "id": "jVyGjKvnqGlb" }, "source": [ "You will build models to predict the label contained in the `target` column." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2wwhILm1ycSp" }, "outputs": [], "source": [ "target = df.pop('target')" ] }, { "cell_type": "markdown", "metadata": { "id": "vFGv9fgjDeao" }, "source": [ "## A DataFrame as an array" ] }, { "cell_type": "markdown", "metadata": { "id": "xNxJ41MafiB-" }, "source": [ "If your data has a uniform datatype, or `dtype`, it's possible to use a pandas DataFrame anywhere you could use a NumPy array. This works because the `pandas.DataFrame` class supports the `__array__` protocol, and TensorFlow's `tf.convert_to_tensor` function accepts objects that support the protocol.\n", "\n", "Take the numeric features from the dataset (skip the categorical features for now):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "b9VlFGAie3K0" }, "outputs": [], "source": [ "numeric_feature_names = ['age', 'thalach', 'trestbps', 'chol', 'oldpeak']\n", "numeric_features = df[numeric_feature_names]\n", "numeric_features.head()" ] }, { "cell_type": "markdown", "metadata": { "id": "Xe1CMRvSpR_R" }, "source": [ "The DataFrame can be converted to a NumPy array using the `DataFrame.values` property or `numpy.array(df)`. 
To convert it to a tensor, use `tf.convert_to_tensor`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "OVv6Nwc9oDBU" }, "outputs": [], "source": [ "tf.convert_to_tensor(numeric_features)" ] }, { "cell_type": "markdown", "metadata": { "id": "7iRYvoTrr1_G" }, "source": [ "In general, if an object can be converted to a tensor with `tf.convert_to_tensor`, it can be passed anywhere you can pass a `tf.Tensor`." ] }, { "cell_type": "markdown", "metadata": { "id": "RVF7_Z-Mp-qD" }, "source": [ "### With Model.fit" ] }, { "cell_type": "markdown", "metadata": { "id": "Vqkc9gIapQNu" }, "source": [ "A DataFrame, interpreted as a single tensor, can be used directly as an argument to the `Model.fit` method.\n", "\n", "Below is an example of training a model on the numeric features of the dataset." ] }, { "cell_type": "markdown", "metadata": { "id": "u8M3oYHZgH_t" }, "source": [ "The first step is to normalize the input ranges. Use a `tf.keras.layers.Normalization` layer for that.\n", "\n", "To set the layer's mean and standard deviation before running it, be sure to call the `Normalization.adapt` method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "88XTmyEdgkJn" }, "outputs": [], "source": [ "normalizer = tf.keras.layers.Normalization(axis=-1)\n", "normalizer.adapt(numeric_features)" ] }, { "cell_type": "markdown", "metadata": { "id": "_D7JqUtnYCnb" }, "source": [ "Call the layer on the first three rows of the DataFrame to visualize an example of the output from this layer:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "jOwzIG-DhB0y" }, "outputs": [], "source": [ "normalizer(numeric_features.iloc[:3])" ] }, { "cell_type": "markdown", "metadata": { "id": "KWKcuVZJh-HY" }, "source": [ "Use the normalization layer as the first layer of a simple model:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "lu-bni-nh6mX" }, "outputs": [], "source": [ "def get_basic_model():\n", "  model = tf.keras.Sequential([\n", "    normalizer,\n", "    tf.keras.layers.Dense(10, activation='relu'),\n", "    tf.keras.layers.Dense(10, activation='relu'),\n", "    tf.keras.layers.Dense(1)\n", "  ])\n", "\n", "  model.compile(optimizer='adam',\n", "                loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n", "                metrics=['accuracy'])\n", "  return model" ] }, { "cell_type": "markdown", "metadata": { "id": "ntGi6ngYitob" }, "source": [ "When you pass the DataFrame as the `x` argument to `Model.fit`, Keras treats the DataFrame as it would a NumPy array:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "XMjM-eddiNNT" }, "outputs": [], "source": [ "model = get_basic_model()\n", "model.fit(numeric_features, target, epochs=15, batch_size=BATCH_SIZE)" ] }, { "cell_type": "markdown", "metadata": { "id": "EjtQbsRPEoJT" }, "source": [ "### With tf.data" ] }, { "cell_type": "markdown", "metadata": { "id": "nSjV5gy3EsVv" }, "source": [ "If you want to apply `tf.data` transformations to a DataFrame of a uniform `dtype`, the `Dataset.from_tensor_slices` method will create a dataset that iterates over the rows of the DataFrame. Each row is initially a vector of values.
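\n", "\n", "For instance, a features-only dataset (a minimal sketch built from the `numeric_features` DataFrame above) yields one row-vector per element:\n", "\n", "```python\n", "features_ds = tf.data.Dataset.from_tensor_slices(numeric_features)\n", "\n", "for row in features_ds.take(2):\n", "  print(row)  # a length-5 vector of one patient's numeric features\n", "```\n", "\n", "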
To train a model, you need `(inputs, labels)` pairs, so pass `(features, labels)` and `Dataset.from_tensor_slices` will return the needed pairs of slices:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "FCphpgdRGikx" }, "outputs": [], "source": [ "numeric_dataset = tf.data.Dataset.from_tensor_slices((numeric_features, target))\n", "\n", "for row in numeric_dataset.take(3):\n", "  print(row)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "lStkN86gEkCe" }, "outputs": [], "source": [ "numeric_batches = numeric_dataset.shuffle(SHUFFLE_BUFFER).batch(BATCH_SIZE)\n", "\n", "model = get_basic_model()\n", "model.fit(numeric_batches, epochs=15)" ] }, { "cell_type": "markdown", "metadata": { "id": "NRASs9IIESWQ" }, "source": [ "## A DataFrame as a dictionary" ] }, { "cell_type": "markdown", "metadata": { "id": "NQcp7kiPF8TP" }, "source": [ "When you start dealing with heterogeneous data, it is no longer possible to treat the DataFrame as if it were a single array. TensorFlow tensors require that all elements have the same `dtype`.\n", "\n", "So, in this case, you need to start treating it as a dictionary of columns, where each column has a uniform `dtype`. A DataFrame is a lot like a dictionary of arrays, so typically all you need to do is cast the DataFrame to a Python dict. Many important TensorFlow APIs support (nested-)dictionaries of arrays as inputs." ] }, { "cell_type": "markdown", "metadata": { "id": "9y5UMKL8bury" }, "source": [ "`tf.data` input pipelines handle this quite well. All `tf.data` operations handle dictionaries and tuples automatically. So, to make a dataset of dictionary-examples from a DataFrame, just cast it to a dict before slicing it with `Dataset.from_tensor_slices`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "U3QDo-jwHYXc" }, "outputs": [], "source": [ "numeric_dict_ds = tf.data.Dataset.from_tensor_slices((dict(numeric_features), target))" ] }, { "cell_type": "markdown", "metadata": { "id": "yyEERK9ldIi_" }, "source": [ "Here are the first three examples from that dataset:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "q0tDwk0VdH6D" }, "outputs": [], "source": [ "for row in numeric_dict_ds.take(3):\n", "  print(row)" ] }, { "cell_type": "markdown", "metadata": { "id": "DEAM6HAFxlMy" }, "source": [ "### Dictionaries with Keras" ] }, { "cell_type": "markdown", "metadata": { "id": "dnoyoWLWx07i" }, "source": [ "Typically, Keras models and layers expect a single input tensor, but these classes can accept and return nested structures of dictionaries, tuples and tensors. These structures are known as \"nests\" (refer to the `tf.nest` module for details).\n", "\n", "There are two equivalent ways you can write a Keras model that accepts a dictionary as input." ] }, { "cell_type": "markdown", "metadata": { "id": "5xUTrm0apDTr" }, "source": [ "#### 1. The Model-subclass style\n", "\n", "You write a subclass of `tf.keras.Model` (or `tf.keras.layers.Layer`).
\n", "You handle the inputs directly and create the outputs:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Zc3HV99CFRWL" }, "outputs": [], "source": [ "def stack_dict(inputs, fun=tf.stack):\n", "  # Stack the dict's values into a single tensor, sorting by key so the\n", "  # column order is deterministic.\n", "  values = []\n", "  for key in sorted(inputs.keys()):\n", "    values.append(tf.cast(inputs[key], tf.float32))\n", "\n", "  return fun(values, axis=-1)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Rz4Cg6WpzNzi" }, "outputs": [], "source": [ "#@title\n", "class MyModel(tf.keras.Model):\n", "  def __init__(self):\n", "    # Create all the internal layers in init.\n", "    super().__init__()\n", "\n", "    self.normalizer = tf.keras.layers.Normalization(axis=-1)\n", "\n", "    self.seq = tf.keras.Sequential([\n", "      self.normalizer,\n", "      tf.keras.layers.Dense(10, activation='relu'),\n", "      tf.keras.layers.Dense(10, activation='relu'),\n", "      tf.keras.layers.Dense(1)\n", "    ])\n", "\n", "  def adapt(self, inputs):\n", "    # Stack the inputs and `adapt` the normalization layer.\n", "    inputs = stack_dict(inputs)\n", "    self.normalizer.adapt(inputs)\n", "\n", "  def call(self, inputs):\n", "    # Stack the inputs\n", "    inputs = stack_dict(inputs)\n", "    # Run them through all the layers.\n", "    result = self.seq(inputs)\n", "\n", "    return result\n", "\n", "model = MyModel()\n", "\n", "model.adapt(dict(numeric_features))\n", "\n", "model.compile(optimizer='adam',\n", "              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n", "              metrics=['accuracy'],\n", "              run_eagerly=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "hMLXNEDF_tu2" }, "source": [ "This model can accept either a dictionary of columns or a dataset of dictionary-elements for training:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "v3xEjtHY8gZG" }, "outputs": [], "source": [ "model.fit(dict(numeric_features), target, epochs=5, batch_size=BATCH_SIZE)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "73wgiTaVAA2F" }, "outputs": [], "source": [ "numeric_dict_batches = numeric_dict_ds.shuffle(SHUFFLE_BUFFER).batch(BATCH_SIZE)\n", "model.fit(numeric_dict_batches, epochs=5)" ] }, { "cell_type": "markdown", "metadata": { "id": "-xDB3HLZGzAb" }, "source": [ "Here are the predictions for the first three examples:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "xtolTQA-GpBW" }, "outputs": [], "source": [ "model.predict(dict(numeric_features.iloc[:3]))" ] }, { "cell_type": "markdown", "metadata": { "id": "QIIdxIYm13Ik" }, "source": [ "#### 2. The Keras functional style
\n", "\n", "You create one `tf.keras.Input` for each feature, apply the layers to those symbolic inputs, and then collect the inputs and outputs into a `tf.keras.Model`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "DG_bmO0sS_G5" }, "outputs": [], "source": [ "inputs = {}\n", "for name, column in numeric_features.items():\n", "  inputs[name] = tf.keras.Input(\n", "      shape=(1,), name=name, dtype=tf.float32)\n", "\n", "inputs" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "9iXU9oem12dL" }, "outputs": [], "source": [ "x = stack_dict(inputs, fun=tf.concat)\n", "\n", "normalizer = tf.keras.layers.Normalization(axis=-1)\n", "normalizer.adapt(stack_dict(dict(numeric_features)))\n", "\n", "x = normalizer(x)\n", "x = tf.keras.layers.Dense(10, activation='relu')(x)\n", "x = tf.keras.layers.Dense(10, activation='relu')(x)\n", "x = tf.keras.layers.Dense(1)(x)\n", "\n", "model = tf.keras.Model(inputs, x)\n", "\n", "model.compile(optimizer='adam',\n", "              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n", "              metrics=['accuracy'],\n", "              run_eagerly=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "xrAxmuJrEwnf" }, "outputs": [], "source": [ "tf.keras.utils.plot_model(model, rankdir=\"LR\", show_shapes=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "UYtoAOIzCFY1" }, "source": [ "You can train the functional model the same way as the model subclass:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "yAwjPq7I_ehX" }, "outputs": [], "source": [ "model.fit(dict(numeric_features), target, epochs=5, batch_size=BATCH_SIZE)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "brwodxxVApO_" }, "outputs": [], "source": [ "numeric_dict_batches = numeric_dict_ds.shuffle(SHUFFLE_BUFFER).batch(BATCH_SIZE)\n", "model.fit(numeric_dict_batches, epochs=5)" ] }, { "cell_type": "markdown", "metadata": { "id": "xhn0Bt_Xw4nO" }, "source": [ "## Full example" ] }, { "cell_type": "markdown", "metadata": { "id": "zYQ5fDaRxRWQ" }, "source": [ "If you're passing a heterogeneous DataFrame to Keras, each column may need unique preprocessing. You could do this preprocessing directly in the DataFrame, but for a model to work correctly, inputs always need to be preprocessed the same way. So, the best approach is to build the preprocessing into the model. [Keras preprocessing layers](https://www.tensorflow.org/guide/keras/preprocessing_layers) cover many common tasks." ] }, { "cell_type": "markdown", "metadata": { "id": "BFsDZeu-BQ-h" }, "source": [ "### Build the preprocessing head" ] }, { "cell_type": "markdown", "metadata": { "id": "C6aVQN4Gw-Va" }, "source": [ "In this dataset, some of the \"integer\" features in the raw data are actually categorical indices. These indices are not really ordered numeric values (refer to the dataset description for details). Because these are unordered, they are inappropriate to feed directly to the model; the model would interpret them as being ordered. To use these inputs, you'll need to encode them, either as one-hot vectors or embedding vectors.
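\n", "\n", "This tutorial uses one-hot encodings below; for a feature with many categories, an `Embedding` layer is often a better fit. Here is a minimal sketch of that alternative (the vocabulary is illustrative, not taken from this dataset):\n", "\n", "```python\n", "vocab = [10, 20, 30]  # illustrative integer categories\n", "lookup = tf.keras.layers.IntegerLookup(vocabulary=vocab)  # values -> indices; index 0 is reserved for out-of-vocabulary values\n", "embedding = tf.keras.layers.Embedding(input_dim=len(vocab) + 1, output_dim=4)\n", "\n", "embedding(lookup([20, 30, 5]))  # shape (3, 4): one dense vector per value; 5 falls into the OOV slot\n", "```\n", "\n", "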
String-categorical features need the same treatment.\n", "\n", "Note: If you have many features that need identical preprocessing, it's more efficient to concatenate them together before applying the preprocessing.\n", "\n", "Binary features, on the other hand, do not generally need to be encoded or normalized.\n", "\n", "Start by creating a list of the features that fall into each group:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "IH2VCyLBPYX8" }, "outputs": [], "source": [ "binary_feature_names = ['sex', 'fbs', 'exang']" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Pxh4FPucOpDz" }, "outputs": [], "source": [ "categorical_feature_names = ['cp', 'restecg', 'slope', 'thal', 'ca']" ] }, { "cell_type": "markdown", "metadata": { "id": "HRcC8WkyamJb" }, "source": [ "The next step is to build a preprocessing model that will apply appropriate preprocessing to each input and concatenate the results.\n", "\n", "This section uses the [Keras Functional API](https://www.tensorflow.org/guide/keras/functional) to implement the preprocessing. You start by creating one `tf.keras.Input` for each column of the DataFrame:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "D3OeiteJbWvI" }, "outputs": [], "source": [ "inputs = {}\n", "for name, column in df.items():\n", "  if type(column[0]) == str:\n", "    dtype = tf.string\n", "  elif (name in categorical_feature_names or\n", "        name in binary_feature_names):\n", "    dtype = tf.int64\n", "  else:\n", "    dtype = tf.float32\n", "\n", "  inputs[name] = tf.keras.Input(shape=(), name=name, dtype=dtype)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "5N3vBMjidpx6" }, "outputs": [], "source": [ "inputs" ] }, { "cell_type": "markdown", "metadata": { "id": "_EEmzxinyhI4" }, "source": [ "For each input, you'll apply some transformations using Keras layers and TensorFlow ops. Each feature starts as a batch of scalars (`shape=(batch,)`). The output for each should be a batch of `tf.float32` vectors (`shape=(batch, n)`). The last step will concatenate all those vectors together.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "ubBDazjNFWiF" }, "source": [ "#### Binary inputs\n", "\n", "Since the binary inputs don't need any preprocessing, just add the vector axis, cast them to `float32`, and add them to the list of preprocessed inputs:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "tmAIkOIid-Mp" }, "outputs": [], "source": [ "preprocessed = []\n", "\n", "for name in binary_feature_names:\n", "  inp = inputs[name]\n", "  inp = inp[:, tf.newaxis]\n", "  float_value = tf.cast(inp, tf.float32)\n", "  preprocessed.append(float_value)\n", "\n", "preprocessed" ] }, { "cell_type": "markdown", "metadata": { "id": "ZHQcdtG1GN7E" }, "source": [ "#### Numeric inputs\n", "\n", "As in the earlier section, you'll want to run these numeric inputs through a `tf.keras.layers.Normalization` layer before using them. The difference is that this time they're input as a dict. The code below collects the numeric features from the DataFrame, stacks them together, and passes those to the `Normalization.adapt` method."
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "UC9LaIBNIK5V" }, "outputs": [], "source": [ "normalizer = tf.keras.layers.Normalization(axis=-1)\n", "normalizer.adapt(stack_dict(dict(numeric_features)))" ] }, { "cell_type": "markdown", "metadata": { "id": "S537tideIpeh" }, "source": [ "The code below stacks the numeric features and runs them through the normalization layer." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "U8MJiFpPK5uD" }, "outputs": [], "source": [ "numeric_inputs = {}\n", "for name in numeric_feature_names:\n", "  numeric_inputs[name] = inputs[name]\n", "\n", "numeric_inputs = stack_dict(numeric_inputs)\n", "numeric_normalized = normalizer(numeric_inputs)\n", "\n", "preprocessed.append(numeric_normalized)\n", "\n", "preprocessed" ] }, { "cell_type": "markdown", "metadata": { "id": "G5f-VzASKPF7" }, "source": [ "#### Categorical features" ] }, { "cell_type": "markdown", "metadata": { "id": "Z3wcFs1oKVao" }, "source": [ "To use categorical features, you'll first need to encode them into either binary vectors or embeddings. Since these features only contain a small number of categories, convert the inputs directly to one-hot vectors using the `output_mode='one_hot'` option, supported by both the `tf.keras.layers.StringLookup` and `tf.keras.layers.IntegerLookup` layers.\n", "\n", "Here is an example of how these layers work:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "vXleJfBRS9xr" }, "outputs": [], "source": [ "vocab = ['a','b','c']\n", "lookup = tf.keras.layers.StringLookup(vocabulary=vocab, output_mode='one_hot')\n", "lookup(['c','a','a','b','zzz'])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "kRnsFYJiSVmH" }, "outputs": [], "source": [ "vocab = [1,4,7,99]\n", "lookup = tf.keras.layers.IntegerLookup(vocabulary=vocab, output_mode='one_hot')\n", "\n", "lookup([-1,4,1])" ] }, { "cell_type": "markdown", "metadata": { "id": "est6aCFBZDVs" }, "source": [ "For each input, determine its vocabulary from the DataFrame, then create the matching lookup layer to convert that input to a one-hot vector:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "HELhoFlo0H9Q" }, "outputs": [], "source": [ "for name in categorical_feature_names:\n", "  vocab = sorted(set(df[name]))\n", "  print(f'name: {name}')\n", "  print(f'vocab: {vocab}\\n')\n", "\n", "  if type(vocab[0]) is str:\n", "    lookup = tf.keras.layers.StringLookup(vocabulary=vocab, output_mode='one_hot')\n", "  else:\n", "    lookup = tf.keras.layers.IntegerLookup(vocabulary=vocab, output_mode='one_hot')\n", "\n", "  x = inputs[name][:, tf.newaxis]\n", "  x = lookup(x)\n", "  preprocessed.append(x)" ] }, { "cell_type": "markdown", "metadata": { "id": "PzMMkwNBa2pK" }, "source": [ "#### Assemble the preprocessing head" ] }, { "cell_type": "markdown", "metadata": { "id": "GaQ-_pEQbCE8" }, "source": [ "At this point, `preprocessed` is just a Python list of all the preprocessing results, each with a shape of `(batch_size, depth)`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "LlLaq_BVRlnO" }, "outputs": [], "source": [ "preprocessed" ] }, { "cell_type": "markdown", "metadata": { "id": "U9lYYHIXbYv-" }, "source": [ "Concatenate all the preprocessed features along the `depth` axis, so each dictionary-example is converted into a single vector.
\n", "The vector contains binary features, numeric features, and categorical one-hot features:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "j2I8vpQh313w" }, "outputs": [], "source": [ "preprocessed_result = tf.concat(preprocessed, axis=-1)\n", "preprocessed_result" ] }, { "cell_type": "markdown", "metadata": { "id": "OBFowyJtb0WB" }, "source": [ "Now create a model out of that calculation so it can be reused:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "rHQBFHwE37TO" }, "outputs": [], "source": [ "preprocessor = tf.keras.Model(inputs, preprocessed_result)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ViMARQ-f6zfx" }, "outputs": [], "source": [ "tf.keras.utils.plot_model(preprocessor, rankdir=\"LR\", show_shapes=True)" ] }, { "cell_type": "markdown", "metadata": { "id": "IURRtL_WZbht" }, "source": [ "To test the preprocessor, use the `DataFrame.iloc` accessor to slice the first example from the DataFrame. Then convert it to a dictionary and pass the dictionary to the preprocessor. The result is a single vector containing the binary features, normalized numeric features, and the one-hot categorical features, in that order:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "QjBzCKsZUj0y" }, "outputs": [], "source": [ "preprocessor(dict(df.iloc[:1]))" ] }, { "cell_type": "markdown", "metadata": { "id": "bB9C0XJkyQEk" }, "source": [ "### Create and train a model" ] }, { "cell_type": "markdown", "metadata": { "id": "WfU_FFXMbKGM" }, "source": [ "Now build the main body of the model. Use the same configuration as in the previous example: a couple of `Dense` rectified-linear layers and a `Dense(1)` output layer for the classification." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "75OxXTnfboKN" }, "outputs": [], "source": [ "body = tf.keras.Sequential([\n", "  tf.keras.layers.Dense(10, activation='relu'),\n", "  tf.keras.layers.Dense(10, activation='relu'),\n", "  tf.keras.layers.Dense(1)\n", "])" ] }, { "cell_type": "markdown", "metadata": { "id": "MpD6WNX5_zh5" }, "source": [ "Now put the two pieces together using the Keras functional API." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "_TY_BuVMbNcB" }, "outputs": [], "source": [ "inputs" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "iin2kvA9bDpz" }, "outputs": [], "source": [ "x = preprocessor(inputs)\n", "x" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "FQd9PcPRpkP4" }, "outputs": [], "source": [ "result = body(x)\n", "result" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "v_KerrXabhgP" }, "outputs": [], "source": [ "model = tf.keras.Model(inputs, result)\n", "\n", "model.compile(optimizer='adam',\n", "              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n", "              metrics=['accuracy'])" ] }, { "cell_type": "markdown", "metadata": { "id": "S1MR-XD9kC6C" }, "source": [ "This model expects a dictionary of inputs.
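\n", "\n", "For example, as a minimal sketch, you can call the model on the first row of the DataFrame, converted to a dictionary the same way it was for the preprocessor above; the result is a single logit:\n", "\n", "```python\n", "model(dict(df.iloc[:1]))  # shape (1, 1)\n", "```\n", "\n", "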
The simplest way to pass it the data is to convert the DataFrame to a dict and pass that dict as the `x` argument to `Model.fit`:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ybDzNUheqxJw" }, "outputs": [], "source": [ "history = model.fit(dict(df), target, epochs=5, batch_size=BATCH_SIZE)" ] }, { "cell_type": "markdown", "metadata": { "id": "dacoEIB_BSsL" }, "source": [ "Using `tf.data` works as well:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "rYadV3wwE4G3" }, "outputs": [], "source": [ "ds = tf.data.Dataset.from_tensor_slices((\n", "  dict(df),\n", "  target\n", "))\n", "\n", "ds = ds.batch(BATCH_SIZE)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2YIpp2r0bv-6" }, "outputs": [], "source": [ "import pprint\n", "\n", "for x, y in ds.take(1):\n", "  pprint.pprint(x)\n", "  print()\n", "  print(y)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "NMT-AevGFmdu" }, "outputs": [], "source": [ "history = model.fit(ds, epochs=5)" ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "pandas_dataframe.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }