{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "wdeKOEkv1Fe8" }, "source": [ "##### Copyright 2021 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "cellView": "form", "execution": { "iopub.execute_input": "2024-08-02T09:18:42.185879Z", "iopub.status.busy": "2024-08-02T09:18:42.185308Z", "iopub.status.idle": "2024-08-02T09:18:42.189092Z", "shell.execute_reply": "2024-08-02T09:18:42.188447Z" }, "id": "c2jyGuiG1gHr" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "23R0Z9RojXYW" }, "source": [ "# TFX Estimator Component Tutorial\n", "\n", "***A Component-by-Component Introduction to TensorFlow Extended (TFX)***" ] }, { "cell_type": "markdown", "metadata": { "id": "LidV2qsXm4XC" }, "source": [ "Note: We recommend running this tutorial in a Colab notebook, with no setup required! Just click \"Run in Google Colab\".\n", "\n", "
\n", "\n", "\n", "\n", "\n", "
\n", "View on TensorFlow.org\n", "Run in Google Colab\n", "View source on GitHub\n", "Download notebook
" ] }, { "cell_type": "markdown", "metadata": { "id": "RBbTLeWmWs8q" }, "source": [ "> Warning: Estimators are not recommended for new code. Estimators run `v1.Session`-style code which is more difficult to write correctly, and can behave unexpectedly, especially when combined with TF 2 code. Estimators do fall under our [compatibility guarantees](https://tensorflow.org/guide/versions), but will receive no fixes other than security vulnerabilities. See the [migration guide](https://tensorflow.org/guide/migrate) for details." ] }, { "cell_type": "markdown", "metadata": { "id": "KAD1tLoTm_QS" }, "source": [ "\n", "This Colab-based tutorial will interactively walk through each built-in component of TensorFlow Extended (TFX).\n", "\n", "It covers every step in an end-to-end machine learning pipeline, from data ingestion to pushing a model to serving.\n", "\n", "When you're done, the contents of this notebook can be automatically exported as TFX pipeline source code, which you can orchestrate with Apache Airflow and Apache Beam.\n", "\n", "Note: This notebook and its associated APIs are **experimental** and are\n", "in active development. Major changes in functionality, behavior, and\n", "presentation are expected." ] }, { "cell_type": "markdown", "metadata": { "id": "sfSQ-kX-MLEr" }, "source": [ "## Background\n", "This notebook demonstrates how to use TFX in a Jupyter/Colab environment. Here, we walk through the Chicago Taxi example in an interactive notebook.\n", "\n", "Working in an interactive notebook is a useful way to become familiar with the structure of a TFX pipeline. It's also useful when doing development of your own pipelines as a lightweight development environment, but you should be aware that there are differences in the way interactive notebooks are orchestrated, and how they access metadata artifacts.\n", "\n", "### Orchestration\n", "\n", "In a production deployment of TFX, you will use an orchestrator such as Apache Airflow, Kubeflow Pipelines, or Apache Beam to orchestrate a pre-defined pipeline graph of TFX components. In an interactive notebook, the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells.\n", "\n", "### Metadata\n", "\n", "In a production deployment of TFX, you will access metadata through the ML Metadata (MLMD) API. MLMD stores metadata properties in a database such as MySQL or SQLite, and stores the metadata payloads in a persistent store such as on your filesystem. In an interactive notebook, both properties and payloads are stored in an ephemeral SQLite database in the `/tmp` directory on the Jupyter notebook or Colab server." ] }, { "cell_type": "markdown", "metadata": { "id": "2GivNBNYjb3b" }, "source": [ "## Setup\n", "First, we install and import the necessary packages, set up paths, and download data." ] }, { "cell_type": "markdown", "metadata": { "id": "cDl_6DkqJ-pG" }, "source": [ "### Upgrade Pip\n", "\n", "To avoid upgrading Pip in a system when running locally, check to make sure that we're running in Colab. Local systems can of course be upgraded separately." 
] }, { "cell_type": "code", "execution_count": 2, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:18:42.193209Z", "iopub.status.busy": "2024-08-02T09:18:42.192696Z", "iopub.status.idle": "2024-08-02T09:18:42.200590Z", "shell.execute_reply": "2024-08-02T09:18:42.200030Z" }, "id": "tFhBChv4J_PD" }, "outputs": [], "source": [ "try:\n", " import colab\n", " !pip install --upgrade pip\n", "except:\n", " pass" ] }, { "cell_type": "markdown", "metadata": { "id": "MZOYTt1RW4TK" }, "source": [ "### Install TFX\n", "\n", "**Note: In Google Colab, because of package updates, the first time you run this cell you must restart the runtime (Runtime > Restart runtime ...).**" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:18:42.203849Z", "iopub.status.busy": "2024-08-02T09:18:42.203275Z", "iopub.status.idle": "2024-08-02T09:18:44.600380Z", "shell.execute_reply": "2024-08-02T09:18:44.599547Z" }, "id": "S4SQA7Q5nej3" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: tfx<1.16 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (1.15.1)\r\n", "Requirement already satisfied: ml-pipelines-sdk==1.15.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (1.15.1)\r\n", "Requirement already satisfied: absl-py<2.0.0,>=0.9 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (1.4.0)\r\n", "Requirement already satisfied: ml-metadata<1.16.0,>=1.15.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (1.15.0)\r\n", "Requirement already satisfied: packaging>=22 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (24.1)\r\n", "Requirement already satisfied: portpicker<2,>=1.3.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (1.6.0)\r\n", "Requirement already satisfied: protobuf<5,>=3.20.3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (3.20.3)\r\n", "Requirement already satisfied: docker<5,>=4.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (4.4.4)\r\n", "Requirement already satisfied: google-apitools<1,>=0.5 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (0.5.31)\r\n", "Requirement already satisfied: google-api-python-client<2,>=1.8 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (1.12.11)\r\n", "Requirement already satisfied: jinja2<4,>=2.7.3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (3.1.4)\r\n", "Requirement already satisfied: typing-extensions<5,>=3.10.0.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (4.12.2)\r\n", "Requirement already satisfied: apache-beam<3,>=2.47 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (2.57.0)\r\n", "Requirement already satisfied: attrs<24,>=19.3.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (23.2.0)\r\n", "Requirement already satisfied: click<9,>=7 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (8.1.7)\r\n", "Requirement already satisfied: google-api-core<3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (2.19.1)\r\n", "Requirement already satisfied: google-cloud-aiplatform<2,>=1.6.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (1.60.0)\r\n", "Requirement already satisfied: google-cloud-bigquery<4,>=3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from 
tfx<1.16) (3.25.0)\r\n", "Requirement already satisfied: grpcio<2,>=1.28.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (1.65.2)\r\n", "Requirement already satisfied: keras-tuner!=1.4.0,!=1.4.1,<2,>=1.0.4 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (1.4.7)\r\n", "Requirement already satisfied: kubernetes<13,>=10.0.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (12.0.1)\r\n", "Requirement already satisfied: numpy<2,>=1.16 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (1.26.4)\r\n", "Requirement already satisfied: pyarrow<11,>=10 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (10.0.1)\r\n", "Requirement already satisfied: scipy<1.13 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (1.12.0)\r\n", "Requirement already satisfied: pyyaml<7,>=6 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (6.0.1)\r\n", "Requirement already satisfied: tensorflow<2.16,>=2.15.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (2.15.1)\r\n", "Requirement already satisfied: tensorflow-hub<0.16,>=0.15.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (0.15.0)\r\n", "Requirement already satisfied: tensorflow-data-validation<1.16.0,>=1.15.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (1.15.1)\r\n", "Requirement already satisfied: tensorflow-model-analysis<0.47.0,>=0.46.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (0.46.0)\r\n", "Requirement already satisfied: tensorflow-serving-api<2.16,>=2.15 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (2.15.1)\r\n", "Requirement already satisfied: tensorflow-transform<1.16.0,>=1.15.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (1.15.0)\r\n", "Requirement already satisfied: tfx-bsl<1.16.0,>=1.15.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tfx<1.16) (1.15.1)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: crcmod<2.0,>=1.7 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (1.7)\r\n", "Requirement already satisfied: orjson<4,>=3.9.7 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (3.10.6)\r\n", "Requirement already satisfied: dill<0.3.2,>=0.3.1.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (0.3.1.1)\r\n", "Requirement already satisfied: cloudpickle~=2.2.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (2.2.1)\r\n", "Requirement already satisfied: fastavro<2,>=0.23.6 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (1.9.5)\r\n", "Requirement already satisfied: fasteners<1.0,>=0.3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (0.19)\r\n", "Requirement already satisfied: hdfs<3.0.0,>=2.1.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (2.7.3)\r\n", "Requirement already satisfied: httplib2<0.23.0,>=0.8 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) 
(0.22.0)\r\n", "Requirement already satisfied: jsonschema<5.0.0,>=4.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (4.23.0)\r\n", "Requirement already satisfied: jsonpickle<4.0.0,>=3.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (3.2.2)\r\n", "Requirement already satisfied: objsize<0.8.0,>=0.6.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (0.7.0)\r\n", "Requirement already satisfied: pymongo<5.0.0,>=3.8.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (4.8.0)\r\n", "Requirement already satisfied: proto-plus<2,>=1.7.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (1.24.0)\r\n", "Requirement already satisfied: pydot<2,>=1.2.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (1.4.2)\r\n", "Requirement already satisfied: python-dateutil<3,>=2.8.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (2.9.0.post0)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: pytz>=2018.3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (2024.1)\r\n", "Requirement already satisfied: redis<6,>=5.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (5.0.8)\r\n", "Requirement already satisfied: regex>=2020.6.8 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (2024.7.24)\r\n", "Requirement already satisfied: requests!=2.32.*,<3.0.0,>=2.24.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (2.31.0)\r\n", "Requirement already satisfied: zstandard<1,>=0.18.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (0.23.0)\r\n", "Requirement already satisfied: pyarrow-hotfix<1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (0.6)\r\n", "Requirement already satisfied: js2py<1,>=0.74 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (0.74)\r\n", "Requirement already satisfied: cachetools<6,>=3.1.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (5.4.0)\r\n", "Requirement already satisfied: google-auth<3,>=1.18.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (2.32.0)\r\n", "Requirement already satisfied: google-auth-httplib2<0.3.0,>=0.1.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (0.2.0)\r\n", "Requirement already satisfied: google-cloud-datastore<3,>=2.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (2.19.0)\r\n", "Requirement already satisfied: google-cloud-pubsub<3,>=2.1.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (2.23.0)\r\n", 
"Requirement already satisfied: google-cloud-pubsublite<2,>=1.2.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (1.11.1)\r\n", "Requirement already satisfied: google-cloud-storage<3,>=2.16.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (2.18.0)\r\n", "Requirement already satisfied: google-cloud-bigquery-storage<3,>=2.6.3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (2.25.0)\r\n", "Requirement already satisfied: google-cloud-core<3,>=2.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (2.4.1)\r\n", "Requirement already satisfied: google-cloud-bigtable<3,>=2.19.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (2.25.0)\r\n", "Requirement already satisfied: google-cloud-spanner<4,>=3.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (3.48.0)\r\n", "Requirement already satisfied: google-cloud-dlp<4,>=3.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (3.20.0)\r\n", "Requirement already satisfied: google-cloud-language<3,>=2.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (2.14.0)\r\n", "Requirement already satisfied: google-cloud-videointelligence<3,>=2.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (2.13.5)\r\n", "Requirement already satisfied: google-cloud-vision<4,>=2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (3.7.4)\r\n", "Requirement already satisfied: google-cloud-recommendations-ai<0.11.0,>=0.1.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from apache-beam[gcp]<3,>=2.47->tfx<1.16) (0.10.12)\r\n", "Requirement already satisfied: six>=1.4.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from docker<5,>=4.1->tfx<1.16) (1.16.0)\r\n", "Requirement already satisfied: websocket-client>=0.32.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from docker<5,>=4.1->tfx<1.16) (1.8.0)\r\n", "Requirement already satisfied: googleapis-common-protos<2.0.dev0,>=1.56.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from google-api-core<3->tfx<1.16) (1.63.2)\r\n", "Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from google-api-python-client<2,>=1.8->tfx<1.16) (3.0.1)\r\n", "Requirement already satisfied: oauth2client>=1.4.12 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from google-apitools<1,>=0.5->tfx<1.16) (4.1.3)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: google-cloud-resource-manager<3.0.0dev,>=1.3.3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from google-cloud-aiplatform<2,>=1.6.2->tfx<1.16) (1.12.5)\r\n", "Requirement already satisfied: shapely<3.0.0dev in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from google-cloud-aiplatform<2,>=1.6.2->tfx<1.16) (2.0.5)\r\n", "Requirement already satisfied: pydantic<3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from google-cloud-aiplatform<2,>=1.6.2->tfx<1.16) (1.10.17)\r\n", "Requirement already satisfied: docstring-parser<1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from google-cloud-aiplatform<2,>=1.6.2->tfx<1.16) (0.16)\r\n" ] }, { "name": "stdout", 
"output_type": "stream", "text": [ "Requirement already satisfied: google-resumable-media<3.0dev,>=0.6.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from google-cloud-bigquery<4,>=3->tfx<1.16) (2.7.1)\r\n", "Requirement already satisfied: MarkupSafe>=2.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jinja2<4,>=2.7.3->tfx<1.16) (2.1.5)\r\n", "Requirement already satisfied: keras in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from keras-tuner!=1.4.0,!=1.4.1,<2,>=1.0.4->tfx<1.16) (2.15.0)\r\n", "Requirement already satisfied: kt-legacy in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from keras-tuner!=1.4.0,!=1.4.1,<2,>=1.0.4->tfx<1.16) (1.0.5)\r\n", "Requirement already satisfied: certifi>=14.05.14 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from kubernetes<13,>=10.0.1->tfx<1.16) (2024.7.4)\r\n", "Requirement already satisfied: setuptools>=21.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from kubernetes<13,>=10.0.1->tfx<1.16) (72.1.0)\r\n", "Requirement already satisfied: requests-oauthlib in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from kubernetes<13,>=10.0.1->tfx<1.16) (2.0.0)\r\n", "Requirement already satisfied: urllib3>=1.24.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from kubernetes<13,>=10.0.1->tfx<1.16) (1.26.19)\r\n", "Requirement already satisfied: psutil in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from portpicker<2,>=1.3.1->tfx<1.16) (6.0.0)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: astunparse>=1.6.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.16,>=2.15.0->tfx<1.16) (1.6.3)\r\n", "Requirement already satisfied: flatbuffers>=23.5.26 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.16,>=2.15.0->tfx<1.16) (24.3.25)\r\n", "Requirement already satisfied: gast!=0.5.0,!=0.5.1,!=0.5.2,>=0.2.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.16,>=2.15.0->tfx<1.16) (0.6.0)\r\n", "Requirement already satisfied: google-pasta>=0.1.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.16,>=2.15.0->tfx<1.16) (0.2.0)\r\n", "Requirement already satisfied: h5py>=2.9.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.16,>=2.15.0->tfx<1.16) (3.11.0)\r\n", "Requirement already satisfied: libclang>=13.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.16,>=2.15.0->tfx<1.16) (18.1.1)\r\n", "Requirement already satisfied: ml-dtypes~=0.3.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.16,>=2.15.0->tfx<1.16) (0.3.2)\r\n", "Requirement already satisfied: opt-einsum>=2.3.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.16,>=2.15.0->tfx<1.16) (3.3.0)\r\n", "Requirement already satisfied: termcolor>=1.1.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.16,>=2.15.0->tfx<1.16) (2.4.0)\r\n", "Requirement already satisfied: wrapt<1.15,>=1.11.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.16,>=2.15.0->tfx<1.16) (1.14.1)\r\n", "Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.16,>=2.15.0->tfx<1.16) (0.37.1)\r\n", "Requirement already satisfied: tensorboard<2.16,>=2.15 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.16,>=2.15.0->tfx<1.16) (2.15.2)\r\n", "Requirement already satisfied: 
tensorflow-estimator<2.16,>=2.15.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow<2.16,>=2.15.0->tfx<1.16) (2.15.0)\r\n", "Requirement already satisfied: joblib>=1.2.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow-data-validation<1.16.0,>=1.15.1->tfx<1.16) (1.4.2)\r\n", "Requirement already satisfied: pandas<2,>=1.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow-data-validation<1.16.0,>=1.15.1->tfx<1.16) (1.5.3)\r\n", "Requirement already satisfied: pyfarmhash<0.4,>=0.2.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow-data-validation<1.16.0,>=1.15.1->tfx<1.16) (0.3.2)\r\n", "Requirement already satisfied: tensorflow-metadata<1.16,>=1.15.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow-data-validation<1.16.0,>=1.15.1->tfx<1.16) (1.15.0)\r\n", "Requirement already satisfied: ipython<8,>=7 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (7.34.0)\r\n", "Requirement already satisfied: ipywidgets<8,>=7 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (7.8.3)\r\n", "Requirement already satisfied: pillow>=9.4.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (10.4.0)\r\n", "Requirement already satisfied: rouge-score<2,>=0.1.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.1.2)\r\n", "Requirement already satisfied: sacrebleu<4,>=2.3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (2.4.2)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: wheel<1.0,>=0.23.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from astunparse>=1.6.0->tensorflow<2.16,>=2.15.0->tfx<1.16) (0.43.0)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: grpcio-status<2.0.dev0,>=1.33.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from google-api-core[grpc]!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,!=2.7.*,<3.0.0dev,>=1.34.1->google-cloud-aiplatform<2,>=1.6.2->tfx<1.16) (1.48.2)\r\n", "Requirement already satisfied: pyasn1-modules>=0.2.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from google-auth<3,>=1.18.0->apache-beam[gcp]<3,>=2.47->tfx<1.16) (0.4.0)\r\n", "Requirement already satisfied: rsa<5,>=3.1.4 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from google-auth<3,>=1.18.0->apache-beam[gcp]<3,>=2.47->tfx<1.16) (4.9)\r\n", "Requirement already satisfied: grpc-google-iam-v1<1.0.0dev,>=0.12.4 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from google-cloud-bigtable<3,>=2.19.0->apache-beam[gcp]<3,>=2.47->tfx<1.16) (0.13.1)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: overrides<8.0.0,>=6.0.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from google-cloud-pubsublite<2,>=1.2.0->apache-beam[gcp]<3,>=2.47->tfx<1.16) (7.7.0)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: sqlparse>=0.4.4 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from google-cloud-spanner<4,>=3.0.0->apache-beam[gcp]<3,>=2.47->tfx<1.16) (0.5.1)\r\n", "Requirement already satisfied: grpc-interceptor>=0.15.4 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from 
google-cloud-spanner<4,>=3.0.0->apache-beam[gcp]<3,>=2.47->tfx<1.16) (0.15.4)\r\n", "Requirement already satisfied: google-crc32c<2.0dev,>=1.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from google-cloud-storage<3,>=2.16.0->apache-beam[gcp]<3,>=2.47->tfx<1.16) (1.5.0)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: docopt in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from hdfs<3.0.0,>=2.1.0->apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (0.6.2)\r\n", "Requirement already satisfied: pyparsing!=3.0.0,!=3.0.1,!=3.0.2,!=3.0.3,<4,>=2.4.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from httplib2<0.23.0,>=0.8->apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (3.1.2)\r\n", "Requirement already satisfied: jedi>=0.16 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from ipython<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.19.1)\r\n", "Requirement already satisfied: decorator in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from ipython<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (5.1.1)\r\n", "Requirement already satisfied: pickleshare in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from ipython<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.7.5)\r\n", "Requirement already satisfied: traitlets>=4.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from ipython<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (5.14.3)\r\n", "Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from ipython<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (3.0.47)\r\n", "Requirement already satisfied: pygments in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from ipython<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (2.18.0)\r\n", "Requirement already satisfied: backcall in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from ipython<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.2.0)\r\n", "Requirement already satisfied: matplotlib-inline in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from ipython<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.1.7)\r\n", "Requirement already satisfied: pexpect>4.3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from ipython<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (4.9.0)\r\n", "Requirement already satisfied: comm>=0.1.3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.2.2)\r\n", "Requirement already satisfied: ipython-genutils~=0.2.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.2.0)\r\n", "Requirement already satisfied: widgetsnbextension~=3.6.8 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (3.6.8)\r\n", "Requirement already satisfied: jupyterlab-widgets<3,>=1.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (1.1.9)\r\n", "Requirement already satisfied: tzlocal>=1.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from js2py<1,>=0.74->apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (5.2)\r\n", "Requirement already satisfied: pyjsparser>=2.5.1 in 
/tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from js2py<1,>=0.74->apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (2.7.1)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: jsonschema-specifications>=2023.03.6 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jsonschema<5.0.0,>=4.0.0->apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (2023.12.1)\r\n", "Requirement already satisfied: referencing>=0.28.4 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jsonschema<5.0.0,>=4.0.0->apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (0.35.1)\r\n", "Requirement already satisfied: rpds-py>=0.7.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jsonschema<5.0.0,>=4.0.0->apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (0.19.1)\r\n", "Requirement already satisfied: pyasn1>=0.1.7 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from oauth2client>=1.4.12->google-apitools<1,>=0.5->tfx<1.16) (0.6.0)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: dnspython<3.0.0,>=1.16.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from pymongo<5.0.0,>=3.8.0->apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (2.6.1)\r\n", "Requirement already satisfied: async-timeout>=4.0.3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from redis<6,>=5.0.0->apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (4.0.3)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: charset-normalizer<4,>=2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from requests!=2.32.*,<3.0.0,>=2.24.0->apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (3.3.2)\r\n", "Requirement already satisfied: idna<4,>=2.5 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from requests!=2.32.*,<3.0.0,>=2.24.0->apache-beam<3,>=2.47->apache-beam[gcp]<3,>=2.47->tfx<1.16) (3.7)\r\n", "Requirement already satisfied: nltk in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from rouge-score<2,>=0.1.2->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (3.8.1)\r\n", "Requirement already satisfied: portalocker in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from sacrebleu<4,>=2.3->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (2.10.1)\r\n", "Requirement already satisfied: tabulate>=0.8.9 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from sacrebleu<4,>=2.3->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.9.0)\r\n", "Requirement already satisfied: colorama in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from sacrebleu<4,>=2.3->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.4.6)\r\n", "Requirement already satisfied: lxml in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from sacrebleu<4,>=2.3->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (5.2.2)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: google-auth-oauthlib<2,>=0.5 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorboard<2.16,>=2.15->tensorflow<2.16,>=2.15.0->tfx<1.16) (1.2.1)\r\n", "Requirement already satisfied: markdown>=2.6.8 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorboard<2.16,>=2.15->tensorflow<2.16,>=2.15.0->tfx<1.16) (3.6)\r\n", "Requirement already satisfied: tensorboard-data-server<0.8.0,>=0.7.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from 
tensorboard<2.16,>=2.15->tensorflow<2.16,>=2.15.0->tfx<1.16) (0.7.2)\r\n", "Requirement already satisfied: werkzeug>=1.0.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from tensorboard<2.16,>=2.15->tensorflow<2.16,>=2.15.0->tfx<1.16) (3.0.3)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: oauthlib>=3.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from requests-oauthlib->kubernetes<13,>=10.0.1->tfx<1.16) (3.2.2)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: parso<0.9.0,>=0.8.3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jedi>=0.16->ipython<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.8.4)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: importlib-metadata>=4.4 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from markdown>=2.6.8->tensorboard<2.16,>=2.15->tensorflow<2.16,>=2.15.0->tfx<1.16) (8.2.0)\r\n", "Requirement already satisfied: ptyprocess>=0.5 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from pexpect>4.3->ipython<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.7.0)\r\n", "Requirement already satisfied: wcwidth in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.2.13)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: notebook>=4.4.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (7.2.1)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: tqdm in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from nltk->rouge-score<2,>=0.1.2->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (4.66.4)\r\n", "Requirement already satisfied: zipp>=0.5 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard<2.16,>=2.15->tensorflow<2.16,>=2.15.0->tfx<1.16) (3.19.2)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: jupyter-server<3,>=2.4.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (2.14.2)\r\n", "Requirement already satisfied: jupyterlab-server<3,>=2.27.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (2.27.3)\r\n", "Requirement already satisfied: jupyterlab<4.3,>=4.2.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (4.2.4)\r\n", "Requirement already satisfied: notebook-shim<0.3,>=0.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.2.4)\r\n", "Requirement already satisfied: tornado>=6.2.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (6.4.1)\r\n", "Requirement already satisfied: anyio>=3.1.0 in 
/tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (4.4.0)\r\n", "Requirement already satisfied: argon2-cffi>=21.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (23.1.0)\r\n", "Requirement already satisfied: jupyter-client>=7.4.4 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (8.6.2)\r\n", "Requirement already satisfied: jupyter-core!=5.0.*,>=4.12 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (5.7.2)\r\n", "Requirement already satisfied: jupyter-events>=0.9.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.10.0)\r\n", "Requirement already satisfied: jupyter-server-terminals>=0.4.4 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.5.3)\r\n", "Requirement already satisfied: nbconvert>=6.4.4 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (7.16.4)\r\n", "Requirement already satisfied: nbformat>=5.3.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (5.10.4)\r\n", "Requirement already satisfied: prometheus-client>=0.9 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.20.0)\r\n", "Requirement already satisfied: pyzmq>=24 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (26.0.3)\r\n", "Requirement already satisfied: send2trash>=1.8.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (1.8.3)\r\n", "Requirement already satisfied: terminado>=0.8.3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.18.1)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: async-lru>=1.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyterlab<4.3,>=4.2.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (2.0.4)\r\n", "Requirement already satisfied: httpx>=0.25.0 in 
/tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyterlab<4.3,>=4.2.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.27.0)\r\n", "Requirement already satisfied: ipykernel>=6.5.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyterlab<4.3,>=4.2.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (6.29.5)\r\n", "Requirement already satisfied: jupyter-lsp>=2.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyterlab<4.3,>=4.2.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (2.2.5)\r\n", "Requirement already satisfied: tomli>=1.2.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyterlab<4.3,>=4.2.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (2.0.1)\r\n", "Requirement already satisfied: babel>=2.10 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyterlab-server<3,>=2.27.1->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (2.15.0)\r\n", "Requirement already satisfied: json5>=0.9.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyterlab-server<3,>=2.27.1->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.9.25)\r\n", "Requirement already satisfied: sniffio>=1.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from anyio>=3.1.0->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (1.3.1)\r\n", "Requirement already satisfied: exceptiongroup>=1.0.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from anyio>=3.1.0->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (1.2.2)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: argon2-cffi-bindings in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from argon2-cffi>=21.1->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (21.2.0)\r\n", "Requirement already satisfied: httpcore==1.* in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from httpx>=0.25.0->jupyterlab<4.3,>=4.2.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (1.0.5)\r\n", "Requirement already satisfied: h11<0.15,>=0.13 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from httpcore==1.*->httpx>=0.25.0->jupyterlab<4.3,>=4.2.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.14.0)\r\n", "Requirement already satisfied: debugpy>=1.6.5 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from ipykernel>=6.5.0->jupyterlab<4.3,>=4.2.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (1.8.2)\r\n", "Requirement already satisfied: nest-asyncio in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from ipykernel>=6.5.0->jupyterlab<4.3,>=4.2.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (1.6.0)\r\n" ] }, { 
"name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: platformdirs>=2.5 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyter-core!=5.0.*,>=4.12->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (4.2.2)\r\n", "Requirement already satisfied: python-json-logger>=2.0.4 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyter-events>=0.9.0->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (2.0.7)\r\n", "Requirement already satisfied: rfc3339-validator in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyter-events>=0.9.0->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.1.4)\r\n", "Requirement already satisfied: rfc3986-validator>=0.1.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jupyter-events>=0.9.0->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.1.1)\r\n", "Requirement already satisfied: beautifulsoup4 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (4.12.3)\r\n", "Requirement already satisfied: bleach!=5.0.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (6.1.0)\r\n", "Requirement already satisfied: defusedxml in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.7.1)\r\n", "Requirement already satisfied: jupyterlab-pygments in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.3.0)\r\n", "Requirement already satisfied: mistune<4,>=2.0.3 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (3.0.2)\r\n", "Requirement already satisfied: nbclient>=0.5.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.10.0)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: pandocfilters>=1.4.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (1.5.1)\r\n", "Requirement already satisfied: tinycss2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (1.3.0)\r\n", "Requirement already 
satisfied: fastjsonschema>=2.15 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from nbformat>=5.3.0->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (2.20.0)\r\n", "Requirement already satisfied: webencodings in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from bleach!=5.0.0->nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (0.5.1)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: fqdn in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jsonschema[format-nongpl]>=4.18.0->jupyter-events>=0.9.0->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (1.5.1)\r\n", "Requirement already satisfied: isoduration in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jsonschema[format-nongpl]>=4.18.0->jupyter-events>=0.9.0->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (20.11.0)\r\n", "Requirement already satisfied: jsonpointer>1.13 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jsonschema[format-nongpl]>=4.18.0->jupyter-events>=0.9.0->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (3.0.0)\r\n", "Requirement already satisfied: uri-template in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jsonschema[format-nongpl]>=4.18.0->jupyter-events>=0.9.0->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (1.3.0)\r\n", "Requirement already satisfied: webcolors>=24.6.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from jsonschema[format-nongpl]>=4.18.0->jupyter-events>=0.9.0->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (24.6.0)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: cffi>=1.0.1 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from argon2-cffi-bindings->argon2-cffi>=21.1->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (1.16.0)\r\n", "Requirement already satisfied: soupsieve>1.2 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from beautifulsoup4->nbconvert>=6.4.4->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (2.5)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: pycparser in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from cffi>=1.0.1->argon2-cffi-bindings->argon2-cffi>=21.1->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (2.22)\r\n", "Requirement already satisfied: arrow>=0.15.0 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from isoduration->jsonschema[format-nongpl]>=4.18.0->jupyter-events>=0.9.0->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) 
(1.3.0)\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: types-python-dateutil>=2.8.10 in /tmpfs/src/tf_docs_env/lib/python3.9/site-packages (from arrow>=0.15.0->isoduration->jsonschema[format-nongpl]>=4.18.0->jupyter-events>=0.9.0->jupyter-server<3,>=2.4.0->notebook>=4.4.1->widgetsnbextension~=3.6.8->ipywidgets<8,>=7->tensorflow-model-analysis<0.47.0,>=0.46.0->tfx<1.16) (2.9.0.20240316)\r\n" ] } ], "source": [ "# TFX has a constraint of 1.16 due to the removal of tf.estimator support.\n", "!pip install \"tfx<1.16\"" ] }, { "cell_type": "markdown", "metadata": { "id": "szPQ2MDYPZ5j" }, "source": [ "## Did you restart the runtime?\n", "\n", "If you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages." ] }, { "cell_type": "markdown", "metadata": { "id": "N-ePgV0Lj68Q" }, "source": [ "### Import packages\n", "We import necessary packages, including standard TFX component classes." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:18:44.605413Z", "iopub.status.busy": "2024-08-02T09:18:44.604715Z", "iopub.status.idle": "2024-08-02T09:18:50.489387Z", "shell.execute_reply": "2024-08-02T09:18:50.488558Z" }, "id": "YIqpWK9efviJ" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2024-08-02 09:18:45.037858: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n", "2024-08-02 09:18:45.037906: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n", "2024-08-02 09:18:45.039616: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n" ] } ], "source": [ "import os\n", "import pprint\n", "import tempfile\n", "import urllib\n", "\n", "import absl\n", "import tensorflow as tf\n", "import tensorflow_model_analysis as tfma\n", "tf.get_logger().propagate = False\n", "pp = pprint.PrettyPrinter()\n", "\n", "from tfx import v1 as tfx\n", "from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext\n", "\n", "%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip" ] }, { "cell_type": "markdown", "metadata": { "id": "wCZTHRy0N1D6" }, "source": [ "Let's check the library versions." 
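] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before that, one optional sanity check: this tutorial depends on the deprecated `tf.estimator` API, which was removed in TensorFlow 2.16, so it is worth failing fast if the installed TensorFlow is too new. The following is a sketch under that assumption:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional sanity check (sketch): the Estimator API used throughout this\n", "# tutorial was removed in TF 2.16, so fail fast if it is missing.\n", "assert hasattr(tf, 'estimator'), (\n", "    'tf.estimator is unavailable; this tutorial requires tensorflow<2.16.')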
] }, { "cell_type": "code", "execution_count": 5, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:18:50.493860Z", "iopub.status.busy": "2024-08-02T09:18:50.493444Z", "iopub.status.idle": "2024-08-02T09:18:50.497351Z", "shell.execute_reply": "2024-08-02T09:18:50.496768Z" }, "id": "eZ4K18_DN2D8" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "TensorFlow version: 2.15.1\n", "TFX version: 1.15.1\n" ] } ], "source": [ "print('TensorFlow version: {}'.format(tf.__version__))\n", "print('TFX version: {}'.format(tfx.__version__))" ] }, { "cell_type": "markdown", "metadata": { "id": "ufJKQ6OvkJlY" }, "source": [ "### Set up pipeline paths" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:18:50.500616Z", "iopub.status.busy": "2024-08-02T09:18:50.500377Z", "iopub.status.idle": "2024-08-02T09:18:50.504503Z", "shell.execute_reply": "2024-08-02T09:18:50.503840Z" }, "id": "ad5JLpKbf6sN" }, "outputs": [], "source": [ "# This is the root directory for your TFX pip package installation.\n", "_tfx_root = tfx.__path__[0]\n", "\n", "# This is the directory containing the TFX Chicago Taxi Pipeline example.\n", "_taxi_root = os.path.join(_tfx_root, 'examples/chicago_taxi_pipeline')\n", "\n", "# This is the path where your model will be pushed for serving.\n", "_serving_model_dir = os.path.join(\n", " tempfile.mkdtemp(), 'serving_model/taxi_simple')\n", "\n", "# Set up logging.\n", "absl.logging.set_verbosity(absl.logging.INFO)" ] }, { "cell_type": "markdown", "metadata": { "id": "n2cMMAbSkGfX" }, "source": [ "### Download example data\n", "We download the example dataset for use in our TFX pipeline.\n", "\n", "The dataset we're using is the [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew) released by the City of Chicago. The columns in this dataset are:\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
<table>\n", "<tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr>\n", "<tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr>\n", "<tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr>\n", "<tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr>\n", "<tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr>\n", "<tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr>\n", "</table>
\n", "\n", "With this dataset, we will build a model that predicts the `tips` of a trip." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:18:50.507423Z", "iopub.status.busy": "2024-08-02T09:18:50.507182Z", "iopub.status.idle": "2024-08-02T09:18:50.760209Z", "shell.execute_reply": "2024-08-02T09:18:50.759496Z" }, "id": "BywX6OUEhAqn" }, "outputs": [ { "data": { "text/plain": [ "('/tmpfs/tmp/tfx-dataf8kc6jl6/data.csv',\n", " )" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "_data_root = tempfile.mkdtemp(prefix='tfx-data')\n", "DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/data/simple/data.csv'\n", "_data_filepath = os.path.join(_data_root, \"data.csv\")\n", "urllib.request.urlretrieve(DATA_PATH, _data_filepath)" ] }, { "cell_type": "markdown", "metadata": { "id": "blZC1sIQOWfH" }, "source": [ "Take a quick look at the CSV file." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:18:50.763950Z", "iopub.status.busy": "2024-08-02T09:18:50.763265Z", "iopub.status.idle": "2024-08-02T09:18:50.898815Z", "shell.execute_reply": "2024-08-02T09:18:50.898067Z" }, "id": "c5YPeLPFOXaD" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "pickup_community_area,fare,trip_start_month,trip_start_hour,trip_start_day,trip_start_timestamp,pickup_latitude,pickup_longitude,dropoff_latitude,dropoff_longitude,trip_miles,pickup_census_tract,dropoff_census_tract,payment_type,company,trip_seconds,dropoff_community_area,tips\r\n", ",12.45,5,19,6,1400269500,,,,,0.0,,,Credit Card,Chicago Elite Cab Corp. (Chicago Carriag,0,,0.0\r\n", ",0,3,19,5,1362683700,,,,,0,,,Unknown,Chicago Elite Cab Corp.,300,,0\r\n", "60,27.05,10,2,3,1380593700,41.836150155,-87.648787952,,,12.6,,,Cash,Taxi Affiliation Services,1380,,0.0\r\n", "10,5.85,10,1,2,1382319000,41.985015101,-87.804532006,,,0.0,,,Cash,Taxi Affiliation Services,180,,0.0\r\n", "14,16.65,5,7,5,1369897200,41.968069,-87.721559063,,,0.0,,,Cash,Dispatch Taxi Affiliation,1080,,0.0\r\n", "13,16.45,11,12,3,1446554700,41.983636307,-87.723583185,,,6.9,,,Cash,,780,,0.0\r\n", "16,32.05,12,1,1,1417916700,41.953582125,-87.72345239,,,15.4,,,Cash,,1200,,0.0\r\n", "30,38.45,10,10,5,1444301100,41.839086906,-87.714003807,,,14.6,,,Cash,,2580,,0.0\r\n", "11,14.65,1,1,3,1358213400,41.978829526,-87.771166703,,,5.81,,,Cash,,1080,,0.0\r\n" ] } ], "source": [ "!head {_data_filepath}" ] }, { "cell_type": "markdown", "metadata": { "id": "QioyhunCImwE" }, "source": [ "*Disclaimer: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time. It is understood that the data provided at this site is being used at one’s own risk.*" ] }, { "cell_type": "markdown", "metadata": { "id": "8ONIE_hdkPS4" }, "source": [ "### Create the InteractiveContext\n", "Last, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook." 
] }, { "cell_type": "code", "execution_count": 9, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:18:50.902581Z", "iopub.status.busy": "2024-08-02T09:18:50.902313Z", "iopub.status.idle": "2024-08-02T09:18:50.908327Z", "shell.execute_reply": "2024-08-02T09:18:50.907725Z" }, "id": "0Rh6K5sUf9dd" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "WARNING:absl:InteractiveContext pipeline_root argument not provided: using temporary directory /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy as root for pipeline outputs.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:absl:InteractiveContext metadata_connection_config not provided: using SQLite ML Metadata database at /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/metadata.sqlite.\n" ] } ], "source": [ "# Here, we create an InteractiveContext using default parameters. This will\n", "# use a temporary directory with an ephemeral ML Metadata database instance.\n", "# To use your own pipeline root or database, the optional properties\n", "# `pipeline_root` and `metadata_connection_config` may be passed to\n", "# InteractiveContext. Calls to InteractiveContext are no-ops outside of the\n", "# notebook.\n", "context = InteractiveContext()" ] }, { "cell_type": "markdown", "metadata": { "id": "HdQWxfsVkzdJ" }, "source": [ "## Run TFX components interactively\n", "In the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts." ] }, { "cell_type": "markdown", "metadata": { "id": "L9fwt9gQk3BR" }, "source": [ "### ExampleGen\n", "\n", "The `ExampleGen` component is usually at the start of a TFX pipeline. It will:\n", "\n", "1. Split data into training and evaluation sets (by default, 2/3 training + 1/3 eval)\n", "2. Convert data into the `tf.Example` format (learn more [here](https://www.tensorflow.org/tutorials/load_data/tfrecord))\n", "3. Copy data into the `_tfx_root` directory for other components to access\n", "\n", "`ExampleGen` takes as input the path to your data source. In our case, this is the `_data_root` path that contains the downloaded CSV.\n", "\n", "Note: In this notebook, we can instantiate components one-by-one and run them with `InteractiveContext.run()`. By contrast, in a production setting, we would specify all the components upfront in a `Pipeline` to pass to the orchestrator (see the [Building a TFX Pipeline Guide](https://www.tensorflow.org/tfx/guide/build_tfx_pipeline))." 
] }, { "cell_type": "code", "execution_count": 10, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:18:50.912026Z", "iopub.status.busy": "2024-08-02T09:18:50.911455Z", "iopub.status.idle": "2024-08-02T09:18:57.023438Z", "shell.execute_reply": "2024-08-02T09:18:57.022742Z" }, "id": "PyXjuMt8f-9u" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running driver for CsvExampleGen\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:select span and version = (0, None)\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:latest span and version = (0, None)\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running executor for CsvExampleGen\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Generating examples.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.\n" ] }, { "data": { "application/javascript": [ "\n", " if (typeof window.interactive_beam_jquery == 'undefined') {\n", " var jqueryScript = document.createElement('script');\n", " jqueryScript.src = 'https://code.jquery.com/jquery-3.4.1.slim.min.js';\n", " jqueryScript.type = 'text/javascript';\n", " jqueryScript.onload = function() {\n", " var datatableScript = document.createElement('script');\n", " datatableScript.src = 'https://cdn.datatables.net/1.10.20/js/jquery.dataTables.min.js';\n", " datatableScript.type = 'text/javascript';\n", " datatableScript.onload = function() {\n", " window.interactive_beam_jquery = jQuery.noConflict(true);\n", " window.interactive_beam_jquery(document).ready(function($){\n", " \n", " });\n", " }\n", " document.head.appendChild(datatableScript);\n", " };\n", " document.head.appendChild(jqueryScript);\n", " } else {\n", " window.interactive_beam_jquery(document).ready(function($){\n", " \n", " });\n", " }" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Processing input csv data /tmpfs/tmp/tfx-dataf8kc6jl6/* to TFExample.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Examples generated.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running publisher for CsvExampleGen\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "data": { "text/html": [ "\n", "\n", "
ExecutionResult at 0x7fe8807f6b50
  .execution_id: 1
  .component: CsvExampleGen at 0x7fe8880779d0
    .inputs: {}
    .outputs:
      ['examples']: Channel of type 'Examples' (1 artifact) at 0x7fe7381467f0
        .type_name: Examples
        ._artifacts[0]: Artifact of type 'Examples' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/CsvExampleGen/examples/1) at 0x7fe8807f6f40
          .type: <class 'tfx.types.standard_artifacts.Examples'>
          .uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/CsvExampleGen/examples/1
          .span: 0
          .split_names: ["train", "eval"]
          .version: 0
    .exec_properties:
      ['input_base']: /tmpfs/tmp/tfx-dataf8kc6jl6
      ['input_config']: {"splits": [{"name": "single_split", "pattern": "*"}]}
      ['output_config']: {"split_config": {"splits": [{"hash_buckets": 2, "name": "train"}, {"hash_buckets": 1, "name": "eval"}]}}
      ['output_data_format']: 6
      ['output_file_format']: 5
      ['custom_config']: None
      ['range_config']: None
      ['span']: 0
      ['version']: None
      ['input_fingerprint']: split:single_split,num_files:1,total_bytes:1922812,xor_checksum:1722590330,sum_checksum:1722590330
    .component.inputs: {}
    .component.outputs:
      ['examples']: (same 'Examples' channel and artifact as above)
" ], "text/plain": [ "ExecutionResult(\n", " component_id: CsvExampleGen\n", " execution_id: 1\n", " outputs:\n", " examples: OutputChannel(artifact_type=Examples, producer_component_id=CsvExampleGen, output_key=examples, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False))" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "example_gen = tfx.components.CsvExampleGen(input_base=_data_root)\n", "context.run(example_gen)" ] }, { "cell_type": "markdown", "metadata": { "id": "OqCoZh7KPUm9" }, "source": [ "Let's examine the output artifacts of `ExampleGen`. This component produces two artifacts, training examples and evaluation examples:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:18:57.027245Z", "iopub.status.busy": "2024-08-02T09:18:57.026764Z", "iopub.status.idle": "2024-08-02T09:18:57.030920Z", "shell.execute_reply": "2024-08-02T09:18:57.030263Z" }, "id": "880KkTAkPeUg" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[\"train\", \"eval\"] /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/CsvExampleGen/examples/1\n" ] } ], "source": [ "artifact = example_gen.outputs['examples'].get()[0]\n", "print(artifact.split_names, artifact.uri)" ] }, { "cell_type": "markdown", "metadata": { "id": "J6vcbW_wPqvl" }, "source": [ "We can also take a look at the first three training examples:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:18:57.034380Z", "iopub.status.busy": "2024-08-02T09:18:57.034119Z", "iopub.status.idle": "2024-08-02T09:18:59.312645Z", "shell.execute_reply": "2024-08-02T09:18:59.311905Z" }, "id": "H4XIXjiCPwzQ" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "features {\n", " feature {\n", " key: \"company\"\n", " value {\n", " bytes_list {\n", " value: \"Chicago Elite Cab Corp. 
(Chicago Carriag\"\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_census_tract\"\n", " value {\n", " int64_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_community_area\"\n", " value {\n", " int64_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_latitude\"\n", " value {\n", " float_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_longitude\"\n", " value {\n", " float_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"fare\"\n", " value {\n", " float_list {\n", " value: 12.449999809265137\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"payment_type\"\n", " value {\n", " bytes_list {\n", " value: \"Credit Card\"\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_census_tract\"\n", " value {\n", " int64_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_community_area\"\n", " value {\n", " int64_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_latitude\"\n", " value {\n", " float_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_longitude\"\n", " value {\n", " float_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"tips\"\n", " value {\n", " float_list {\n", " value: 0.0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_miles\"\n", " value {\n", " float_list {\n", " value: 0.0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_seconds\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_day\"\n", " value {\n", " int64_list {\n", " value: 6\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_hour\"\n", " value {\n", " int64_list {\n", " value: 19\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_month\"\n", " value {\n", " int64_list {\n", " value: 5\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_timestamp\"\n", " value {\n", " int64_list {\n", " value: 1400269500\n", " }\n", " }\n", " }\n", "}\n", "\n", "features {\n", " feature {\n", " key: \"company\"\n", " value {\n", " bytes_list {\n", " value: \"Taxi Affiliation Services\"\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_census_tract\"\n", " value {\n", " int64_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_community_area\"\n", " value {\n", " int64_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_latitude\"\n", " value {\n", " float_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_longitude\"\n", " value {\n", " float_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"fare\"\n", " value {\n", " float_list {\n", " value: 27.049999237060547\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"payment_type\"\n", " value {\n", " bytes_list {\n", " value: \"Cash\"\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_census_tract\"\n", " value {\n", " int64_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_community_area\"\n", " value {\n", " int64_list {\n", " value: 60\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_latitude\"\n", " value {\n", " float_list {\n", " value: 41.836151123046875\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_longitude\"\n", " value {\n", " float_list {\n", " value: -87.64878845214844\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"tips\"\n", " value {\n", " float_list {\n", " value: 0.0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_miles\"\n", " value {\n", " float_list {\n", " value: 
12.600000381469727\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_seconds\"\n", " value {\n", " int64_list {\n", " value: 1380\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_day\"\n", " value {\n", " int64_list {\n", " value: 3\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_hour\"\n", " value {\n", " int64_list {\n", " value: 2\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_month\"\n", " value {\n", " int64_list {\n", " value: 10\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_timestamp\"\n", " value {\n", " int64_list {\n", " value: 1380593700\n", " }\n", " }\n", " }\n", "}\n", "\n", "features {\n", " feature {\n", " key: \"company\"\n", " value {\n", " bytes_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_census_tract\"\n", " value {\n", " int64_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_community_area\"\n", " value {\n", " int64_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_latitude\"\n", " value {\n", " float_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_longitude\"\n", " value {\n", " float_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"fare\"\n", " value {\n", " float_list {\n", " value: 16.450000762939453\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"payment_type\"\n", " value {\n", " bytes_list {\n", " value: \"Cash\"\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_census_tract\"\n", " value {\n", " int64_list {\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_community_area\"\n", " value {\n", " int64_list {\n", " value: 13\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_latitude\"\n", " value {\n", " float_list {\n", " value: 41.98363494873047\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_longitude\"\n", " value {\n", " float_list {\n", " value: -87.72357940673828\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"tips\"\n", " value {\n", " float_list {\n", " value: 0.0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_miles\"\n", " value {\n", " float_list {\n", " value: 6.900000095367432\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_seconds\"\n", " value {\n", " int64_list {\n", " value: 780\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_day\"\n", " value {\n", " int64_list {\n", " value: 3\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_hour\"\n", " value {\n", " int64_list {\n", " value: 12\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_month\"\n", " value {\n", " int64_list {\n", " value: 11\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_timestamp\"\n", " value {\n", " int64_list {\n", " value: 1446554700\n", " }\n", " }\n", " }\n", "}\n", "\n" ] } ], "source": [ "# Get the URI of the output artifact representing the training examples, which is a directory\n", "train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'Split-train')\n", "\n", "# Get the list of files in this directory (all compressed TFRecord files)\n", "tfrecord_filenames = [os.path.join(train_uri, name)\n", " for name in os.listdir(train_uri)]\n", "\n", "# Create a `TFRecordDataset` to read these files\n", "dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type=\"GZIP\")\n", "\n", "# Iterate over the first 3 records and decode them.\n", "for tfrecord in dataset.take(3):\n", " serialized_example = tfrecord.numpy()\n", " example = tf.train.Example()\n", " 
example.ParseFromString(serialized_example)\n", " pp.pprint(example)" ] }, { "cell_type": "markdown", "metadata": { "id": "2gluYjccf-IP" }, "source": [ "Now that `ExampleGen` has finished ingesting the data, the next step is data analysis." ] }, { "cell_type": "markdown", "metadata": { "id": "csM6BFhtk5Aa" }, "source": [ "### StatisticsGen\n", "The `StatisticsGen` component computes statistics over your dataset for data analysis, as well as for use in downstream components. It uses the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library.\n", "\n", "`StatisticsGen` takes as input the dataset we just ingested using `ExampleGen`." ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:18:59.316571Z", "iopub.status.busy": "2024-08-02T09:18:59.315883Z", "iopub.status.idle": "2024-08-02T09:19:02.882300Z", "shell.execute_reply": "2024-08-02T09:19:02.881669Z" }, "id": "MAscCCYWgA-9" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Excluding no splits because exclude_splits is not set.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running driver for StatisticsGen\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running executor for StatisticsGen\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Generating statistics for split train.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Statistics for split train written to /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/StatisticsGen/statistics/2/Split-train.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Generating statistics for split eval.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Statistics for split eval written to /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/StatisticsGen/statistics/2/Split-eval.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running publisher for StatisticsGen\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "data": { "text/html": [ "\n", "\n", "
ExecutionResult at 0x7fe7381a3040
  .execution_id: 2
  .component: StatisticsGen at 0x7fe85fada580
    .inputs:
      ['examples']: Channel of type 'Examples' (1 artifact) at 0x7fe7381467f0
        .type_name: Examples
        ._artifacts[0]: Artifact of type 'Examples' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/CsvExampleGen/examples/1) at 0x7fe8807f6f40
          .type: <class 'tfx.types.standard_artifacts.Examples'>
          .uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/CsvExampleGen/examples/1
          .span: 0
          .split_names: ["train", "eval"]
          .version: 0
    .outputs:
      ['statistics']: Channel of type 'ExampleStatistics' (1 artifact) at 0x7fe85fada9a0
        .type_name: ExampleStatistics
        ._artifacts[0]: Artifact of type 'ExampleStatistics' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/StatisticsGen/statistics/2) at 0x7fe85fadaca0
          .type: <class 'tfx.types.standard_artifacts.ExampleStatistics'>
          .uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/StatisticsGen/statistics/2
          .span: 0
          .split_names: ["train", "eval"]
    .exec_properties:
      ['stats_options_json']: None
      ['exclude_splits']: []
    .component.inputs:
      ['examples']: (same 'Examples' channel as above)
    .component.outputs:
      ['statistics']: (same 'ExampleStatistics' channel as above)
" ], "text/plain": [ "ExecutionResult(\n", " component_id: StatisticsGen\n", " execution_id: 2\n", " outputs:\n", " statistics: OutputChannel(artifact_type=ExampleStatistics, producer_component_id=StatisticsGen, output_key=statistics, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False))" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "statistics_gen = tfx.components.StatisticsGen(examples=example_gen.outputs['examples'])\n", "context.run(statistics_gen)" ] }, { "cell_type": "markdown", "metadata": { "id": "HLI6cb_5WugZ" }, "source": [ "After `StatisticsGen` finishes running, we can visualize the outputted statistics. Try playing with the different plots!" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:19:02.885632Z", "iopub.status.busy": "2024-08-02T09:19:02.885344Z", "iopub.status.idle": "2024-08-02T09:19:02.899485Z", "shell.execute_reply": "2024-08-02T09:19:02.898714Z" }, "id": "tLjXy7K6Tp_G" }, "outputs": [ { "data": { "text/html": [ "Artifact at /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/StatisticsGen/statistics/2

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
'train' split:

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
'eval' split:

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " " ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "context.show(statistics_gen.outputs['statistics'])" ] }, { "cell_type": "markdown", "metadata": { "id": "HLKLTO9Nk60p" }, "source": [ "### SchemaGen\n", "\n", "The `SchemaGen` component generates a schema based on your data statistics. (A schema defines the expected bounds, types, and properties of the features in your dataset.) It also uses the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library.\n", "\n", "`SchemaGen` will take as input the statistics that we generated with `StatisticsGen`, looking at the training split by default." ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:19:02.902556Z", "iopub.status.busy": "2024-08-02T09:19:02.902312Z", "iopub.status.idle": "2024-08-02T09:19:02.945201Z", "shell.execute_reply": "2024-08-02T09:19:02.944648Z" }, "id": "ygQvZ6hsiQ_J" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Excluding no splits because exclude_splits is not set.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running driver for SchemaGen\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running executor for SchemaGen\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Processing schema from statistics for split train.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Processing schema from statistics for split eval.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Schema written to /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/SchemaGen/schema/3/schema.pbtxt.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running publisher for SchemaGen\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "data": { "text/html": [ "\n", "\n", "
ExecutionResult at 0x7fe7381468b0
  .execution_id: 3
  .component: SchemaGen at 0x7fe85fadaf40
    .inputs:
      ['statistics']: Channel of type 'ExampleStatistics' (1 artifact) at 0x7fe85fada9a0
        .type_name: ExampleStatistics
        ._artifacts[0]: Artifact of type 'ExampleStatistics' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/StatisticsGen/statistics/2) at 0x7fe85fadaca0
          .type: <class 'tfx.types.standard_artifacts.ExampleStatistics'>
          .uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/StatisticsGen/statistics/2
          .span: 0
          .split_names: ["train", "eval"]
    .outputs:
      ['schema']: Channel of type 'Schema' (1 artifact) at 0x7fe85fad56a0
        .type_name: Schema
        ._artifacts[0]: Artifact of type 'Schema' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/SchemaGen/schema/3) at 0x7fe85c14d370
          .type: <class 'tfx.types.standard_artifacts.Schema'>
          .uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/SchemaGen/schema/3
    .exec_properties:
      ['infer_feature_shape']: 0
      ['exclude_splits']: []
    .component.inputs:
      ['statistics']: (same 'ExampleStatistics' channel as above)
    .component.outputs:
      ['schema']: (same 'Schema' channel as above)
" ], "text/plain": [ "ExecutionResult(\n", " component_id: SchemaGen\n", " execution_id: 3\n", " outputs:\n", " schema: OutputChannel(artifact_type=Schema, producer_component_id=SchemaGen, output_key=schema, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False))" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "schema_gen = tfx.components.SchemaGen(\n", " statistics=statistics_gen.outputs['statistics'],\n", " infer_feature_shape=False)\n", "context.run(schema_gen)" ] }, { "cell_type": "markdown", "metadata": { "id": "zi6TxTUKXM6b" }, "source": [ "After `SchemaGen` finishes running, we can visualize the generated schema as a table." ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:19:02.948374Z", "iopub.status.busy": "2024-08-02T09:19:02.947909Z", "iopub.status.idle": "2024-08-02T09:19:02.968756Z", "shell.execute_reply": "2024-08-02T09:19:02.968157Z" }, "id": "Ec9vqDXpXeMb" }, "outputs": [ { "data": { "text/html": [ "Artifact at /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/SchemaGen/schema/3

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
TypePresenceValencyDomain
Feature name
'company'STRINGrequired'company'
'dropoff_census_tract'INTrequired-
'dropoff_community_area'INTrequired-
'dropoff_latitude'FLOATrequired-
'dropoff_longitude'FLOATrequired-
'fare'FLOATrequiredsingle-
'payment_type'STRINGrequiredsingle'payment_type'
'pickup_census_tract'INTrequired-
'pickup_community_area'INTrequired-
'pickup_latitude'FLOATrequired-
'pickup_longitude'FLOATrequired-
'tips'FLOATrequiredsingle-
'trip_miles'FLOATrequiredsingle-
'trip_seconds'INTrequired-
'trip_start_day'INTrequiredsingle-
'trip_start_hour'INTrequiredsingle-
'trip_start_month'INTrequiredsingle-
'trip_start_timestamp'INTrequiredsingle-
\n", "
" ], "text/plain": [ " Type Presence Valency Domain\n", "Feature name \n", "'company' STRING required 'company'\n", "'dropoff_census_tract' INT required -\n", "'dropoff_community_area' INT required -\n", "'dropoff_latitude' FLOAT required -\n", "'dropoff_longitude' FLOAT required -\n", "'fare' FLOAT required single -\n", "'payment_type' STRING required single 'payment_type'\n", "'pickup_census_tract' INT required -\n", "'pickup_community_area' INT required -\n", "'pickup_latitude' FLOAT required -\n", "'pickup_longitude' FLOAT required -\n", "'tips' FLOAT required single -\n", "'trip_miles' FLOAT required single -\n", "'trip_seconds' INT required -\n", "'trip_start_day' INT required single -\n", "'trip_start_hour' INT required single -\n", "'trip_start_month' INT required single -\n", "'trip_start_timestamp' INT required single -" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
Values
Domain
'company''0118 - 42111 Godfrey S.Awir', '1085 - 72312 N and W Cab Co', '2192 - 73487 Zeymane Corp', '2733 - 74600 Benny Jona', '3011 - 66308 JBL Cab Inc.', '3152 - 97284 Crystal Abernathy', '3201 - C&D Cab Co Inc', '3201 - CID Cab Co Inc', '3253 - 91138 Gaither Cab Co.', '3319 - CD Cab Co', '3385 - 23210 Eman Cab', '3385 - Eman Cab', '3623 - 72222 Arrington Enterprises', '3897 - 57856 Ilie Malec', '4053 - 40193 Adwar H. Nikola', '4197 - 41842 Royal Star', '4197 - Royal Star', '4615 - 83503 Tyrone Henderson', '4615 - Tyrone Henderson', '4623 - Jay Kim', '5006 - 39261 Salifu Bawa', '5074 - 54002 Ahzmi Inc', '5074 - Ahzmi Inc', '5129 - 87128', '5129 - 98755 Mengisti Taxi', '585 - 88805 Valley Cab Co', '5864 - Thomas Owusu', '5874 - 73628 Sergey Cab Corp.', '5874 - Sergey Cab Corp.', '5997 - 65283 AW Services Inc.', '6488 - 83287 Zuha Taxi', '6574 - Babylon Express Inc.', '6742 - 83735 Tasha ride inc', 'Blue Ribbon Taxi Association Inc.', 'C & D Cab Co Inc', 'Chicago Elite Cab Corp.', 'Chicago Elite Cab Corp. (Chicago Carriag', 'Chicago Medallion Leasing INC', 'Chicago Medallion Management', 'Choice Taxi Association', 'Dispatch Taxi Affiliation', 'KOAM Taxi Association', 'Northwest Management LLC', 'Taxi Affiliation Services', 'Top Cab Affiliation', '0694 - 59280 Chinesco Trans Inc', '2092 - 61288 Sbeih company', '2192 - Zeymane Corp', '2809 - 95474 C & D Cab Co Inc.', '2823 - 73307 Seung Lee', '3094 - 24059 G.L.B. Cab Co', '3897 - Ilie Malec', '4053 - Adwar H. Nikola', '5006 - Salifu Bawa', '5129 - Mengisti Taxi', '5724 - KYVI Cab Inc', '585 - Valley Cab Co', '5864 - 73614 Thomas Owusu', '5997 - AW Services Inc.', '6057 - 24657 Richard Addo', '6743 - Luhak Corp'
'payment_type''Cash', 'Credit Card', 'Dispute', 'No Charge', 'Pcard', 'Unknown', 'Prcard'
\n", "
" ], "text/plain": [ " Values\n", "Domain \n", "'company' '0118 - 42111 Godfrey S.Awir', '1085 - 72312 N and W Cab Co', '2192 - 73487 Zeymane Corp', '2733 - 74600 Benny Jona', '3011 - 66308 JBL Cab Inc.', '3152 - 97284 Crystal Abernathy', '3201 - C&D Cab Co Inc', '3201 - CID Cab Co Inc', '3253 - 91138 Gaither Cab Co.', '3319 - CD Cab Co', '3385 - 23210 Eman Cab', '3385 - Eman Cab', '3623 - 72222 Arrington Enterprises', '3897 - 57856 Ilie Malec', '4053 - 40193 Adwar H. Nikola', '4197 - 41842 Royal Star', '4197 - Royal Star', '4615 - 83503 Tyrone Henderson', '4615 - Tyrone Henderson', '4623 - Jay Kim', '5006 - 39261 Salifu Bawa', '5074 - 54002 Ahzmi Inc', '5074 - Ahzmi Inc', '5129 - 87128', '5129 - 98755 Mengisti Taxi', '585 - 88805 Valley Cab Co', '5864 - Thomas Owusu', '5874 - 73628 Sergey Cab Corp.', '5874 - Sergey Cab Corp.', '5997 - 65283 AW Services Inc.', '6488 - 83287 Zuha Taxi', '6574 - Babylon Express Inc.', '6742 - 83735 Tasha ride inc', 'Blue Ribbon Taxi Association Inc.', 'C & D Cab Co Inc', 'Chicago Elite Cab Corp.', 'Chicago Elite Cab Corp. (Chicago Carriag', 'Chicago Medallion Leasing INC', 'Chicago Medallion Management', 'Choice Taxi Association', 'Dispatch Taxi Affiliation', 'KOAM Taxi Association', 'Northwest Management LLC', 'Taxi Affiliation Services', 'Top Cab Affiliation', '0694 - 59280 Chinesco Trans Inc', '2092 - 61288 Sbeih company', '2192 - Zeymane Corp', '2809 - 95474 C & D Cab Co Inc.', '2823 - 73307 Seung Lee', '3094 - 24059 G.L.B. Cab Co', '3897 - Ilie Malec', '4053 - Adwar H. Nikola', '5006 - Salifu Bawa', '5129 - Mengisti Taxi', '5724 - KYVI Cab Inc', '585 - Valley Cab Co', '5864 - 73614 Thomas Owusu', '5997 - AW Services Inc.', '6057 - 24657 Richard Addo', '6743 - Luhak Corp'\n", "'payment_type' 'Cash', 'Credit Card', 'Dispute', 'No Charge', 'Pcard', 'Unknown', 'Prcard'" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "context.show(schema_gen.outputs['schema'])" ] }, { "cell_type": "markdown", "metadata": { "id": "kZWWdbA-m7zp" }, "source": [ "Each feature in your dataset shows up as a row in the schema table, alongside its properties. The schema also captures all the values that a categorical feature takes on, denoted as its domain.\n", "\n", "To learn more about schemas, see [the SchemaGen documentation](https://www.tensorflow.org/tfx/guide/schemagen)." ] }, { "cell_type": "markdown", "metadata": { "id": "V1qcUuO9k9f8" }, "source": [ "### ExampleValidator\n", "The `ExampleValidator` component detects anomalies in your data, based on the expectations defined by the schema. It also uses the [TensorFlow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started) library.\n", "\n", "`ExampleValidator` will take as input the statistics from `StatisticsGen`, and the schema from `SchemaGen`." 
] }, { "cell_type": "code", "execution_count": 17, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:19:02.972228Z", "iopub.status.busy": "2024-08-02T09:19:02.971754Z", "iopub.status.idle": "2024-08-02T09:19:03.020195Z", "shell.execute_reply": "2024-08-02T09:19:03.019551Z" }, "id": "XRlRUuGgiXks" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Excluding no splits because exclude_splits is not set.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running driver for ExampleValidator\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running executor for ExampleValidator\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Validating schema against the computed statistics for split train.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Anomalies alerts created for split train.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Validation complete for split train. Anomalies written to /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/ExampleValidator/anomalies/4/Split-train.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Validating schema against the computed statistics for split eval.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Anomalies alerts created for split eval.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Validation complete for split eval. Anomalies written to /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/ExampleValidator/anomalies/4/Split-eval.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running publisher for ExampleValidator\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "data": { "text/html": [ "\n", "\n", "
ExecutionResult at 0x7fe85fad5e20
  .execution_id: 4
  .component: ExampleValidator at 0x7fe85c14d070
    .inputs:
      ['statistics']: Channel of type 'ExampleStatistics' (1 artifact) at 0x7fe85fada9a0
        .type_name: ExampleStatistics
        ._artifacts[0]: Artifact of type 'ExampleStatistics' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/StatisticsGen/statistics/2) at 0x7fe85fadaca0
          .type: <class 'tfx.types.standard_artifacts.ExampleStatistics'>
          .uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/StatisticsGen/statistics/2
          .span: 0
          .split_names: ["train", "eval"]
      ['schema']: Channel of type 'Schema' (1 artifact) at 0x7fe85fad56a0
        .type_name: Schema
        ._artifacts[0]: Artifact of type 'Schema' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/SchemaGen/schema/3) at 0x7fe85c14d370
          .type: <class 'tfx.types.standard_artifacts.Schema'>
          .uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/SchemaGen/schema/3
    .outputs:
      ['anomalies']: Channel of type 'ExampleAnomalies' (1 artifact) at 0x7fe85fadac40
        .type_name: ExampleAnomalies
        ._artifacts[0]: Artifact of type 'ExampleAnomalies' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/ExampleValidator/anomalies/4) at 0x7fe85c0cc820
          .type: <class 'tfx.types.standard_artifacts.ExampleAnomalies'>
          .uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/ExampleValidator/anomalies/4
          .span: 0
          .split_names: ["train", "eval"]
    .exec_properties:
      ['exclude_splits']: []
      ['custom_validation_config']: None
    .component.inputs:
      ['statistics'], ['schema']: (same channels as above)
    .component.outputs:
      ['anomalies']: (same 'ExampleAnomalies' channel as above)
" ], "text/plain": [ "ExecutionResult(\n", " component_id: ExampleValidator\n", " execution_id: 4\n", " outputs:\n", " anomalies: OutputChannel(artifact_type=ExampleAnomalies, producer_component_id=ExampleValidator, output_key=anomalies, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False))" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "example_validator = tfx.components.ExampleValidator(\n", " statistics=statistics_gen.outputs['statistics'],\n", " schema=schema_gen.outputs['schema'])\n", "context.run(example_validator)" ] }, { "cell_type": "markdown", "metadata": { "id": "855mrHgJcoer" }, "source": [ "After `ExampleValidator` finishes running, we can visualize the anomalies as a table." ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:19:03.023692Z", "iopub.status.busy": "2024-08-02T09:19:03.023009Z", "iopub.status.idle": "2024-08-02T09:19:03.036638Z", "shell.execute_reply": "2024-08-02T09:19:03.036019Z" }, "id": "TDyAAozQcrk3" }, "outputs": [ { "data": { "text/html": [ "Artifact at /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/ExampleValidator/anomalies/4

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
'train' split:

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

No anomalies found.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
'eval' split:

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "

No anomalies found.

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "context.show(example_validator.outputs['anomalies'])" ] }, { "cell_type": "markdown", "metadata": { "id": "znMoJj60ybZx" }, "source": [ "In the anomalies table, we can see that there are no anomalies. This is what we'd expect, since this the first dataset that we've analyzed and the schema is tailored to it. You should review this schema -- anything unexpected means an anomaly in the data. Once reviewed, the schema can be used to guard future data, and anomalies produced here can be used to debug model performance, understand how your data evolves over time, and identify data errors." ] }, { "cell_type": "markdown", "metadata": { "id": "JPViEz5RlA36" }, "source": [ "### Transform\n", "The `Transform` component performs feature engineering for both training and serving. It uses the [TensorFlow Transform](https://www.tensorflow.org/tfx/transform/get_started) library.\n", "\n", "`Transform` will take as input the data from `ExampleGen`, the schema from `SchemaGen`, as well as a module that contains user-defined Transform code.\n", "\n", "Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, [see the tutorial](https://www.tensorflow.org/tfx/tutorials/transform/simple)). First, we define a few constants for feature engineering:\n", "\n", "Note: The `%%writefile` cell magic will save the contents of the cell as a `.py` file on disk. This allows the `Transform` component to load your code as a module.\n", "\n" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:19:03.040241Z", "iopub.status.busy": "2024-08-02T09:19:03.039584Z", "iopub.status.idle": "2024-08-02T09:19:03.042776Z", "shell.execute_reply": "2024-08-02T09:19:03.042198Z" }, "id": "PuNSiUKb4YJf" }, "outputs": [], "source": [ "_taxi_constants_module_file = 'taxi_constants.py'" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:19:03.046171Z", "iopub.status.busy": "2024-08-02T09:19:03.045575Z", "iopub.status.idle": "2024-08-02T09:19:03.050375Z", "shell.execute_reply": "2024-08-02T09:19:03.049787Z" }, "id": "HPjhXuIF4YJh" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Writing taxi_constants.py\n" ] } ], "source": [ "%%writefile {_taxi_constants_module_file}\n", "\n", "# Categorical features are assumed to each have a maximum value in the dataset.\n", "MAX_CATEGORICAL_FEATURE_VALUES = [24, 31, 12]\n", "\n", "CATEGORICAL_FEATURE_KEYS = [\n", " 'trip_start_hour', 'trip_start_day', 'trip_start_month',\n", " 'pickup_census_tract', 'dropoff_census_tract', 'pickup_community_area',\n", " 'dropoff_community_area'\n", "]\n", "\n", "DENSE_FLOAT_FEATURE_KEYS = ['trip_miles', 'fare', 'trip_seconds']\n", "\n", "# Number of buckets used by tf.transform for encoding each feature.\n", "FEATURE_BUCKET_COUNT = 10\n", "\n", "BUCKET_FEATURE_KEYS = [\n", " 'pickup_latitude', 'pickup_longitude', 'dropoff_latitude',\n", " 'dropoff_longitude'\n", "]\n", "\n", "# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform\n", "VOCAB_SIZE = 1000\n", "\n", "# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.\n", "OOV_SIZE = 10\n", "\n", "VOCAB_FEATURE_KEYS = [\n", " 'payment_type',\n", " 'company',\n", "]\n", "\n", "# Keys\n", "LABEL_KEY = 'tips'\n", "FARE_KEY = 'fare'" ] }, { "cell_type": "markdown", "metadata": { 
"id": "Duj2Ax5z4YJl" }, "source": [ "Next, we write a `preprocessing_fn` that takes in raw data as input, and returns transformed features that our model can train on:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:19:03.053677Z", "iopub.status.busy": "2024-08-02T09:19:03.053092Z", "iopub.status.idle": "2024-08-02T09:19:03.056039Z", "shell.execute_reply": "2024-08-02T09:19:03.055471Z" }, "id": "4AJ9hBs94YJm" }, "outputs": [], "source": [ "_taxi_transform_module_file = 'taxi_transform.py'" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:19:03.059145Z", "iopub.status.busy": "2024-08-02T09:19:03.058626Z", "iopub.status.idle": "2024-08-02T09:19:03.063866Z", "shell.execute_reply": "2024-08-02T09:19:03.063241Z" }, "id": "MYmxxx9A4YJn" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Writing taxi_transform.py\n" ] } ], "source": [ "%%writefile {_taxi_transform_module_file}\n", "\n", "import tensorflow as tf\n", "import tensorflow_transform as tft\n", "\n", "import taxi_constants\n", "\n", "_DENSE_FLOAT_FEATURE_KEYS = taxi_constants.DENSE_FLOAT_FEATURE_KEYS\n", "_VOCAB_FEATURE_KEYS = taxi_constants.VOCAB_FEATURE_KEYS\n", "_VOCAB_SIZE = taxi_constants.VOCAB_SIZE\n", "_OOV_SIZE = taxi_constants.OOV_SIZE\n", "_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT\n", "_BUCKET_FEATURE_KEYS = taxi_constants.BUCKET_FEATURE_KEYS\n", "_CATEGORICAL_FEATURE_KEYS = taxi_constants.CATEGORICAL_FEATURE_KEYS\n", "_FARE_KEY = taxi_constants.FARE_KEY\n", "_LABEL_KEY = taxi_constants.LABEL_KEY\n", "\n", "\n", "def preprocessing_fn(inputs):\n", " \"\"\"tf.transform's callback function for preprocessing inputs.\n", " Args:\n", " inputs: map from feature keys to raw not-yet-transformed features.\n", " Returns:\n", " Map from string feature key to transformed feature operations.\n", " \"\"\"\n", " outputs = {}\n", " for key in _DENSE_FLOAT_FEATURE_KEYS:\n", " # If sparse make it dense, setting nan's to 0 or '', and apply zscore.\n", " outputs[key] = tft.scale_to_z_score(\n", " _fill_in_missing(inputs[key]))\n", "\n", " for key in _VOCAB_FEATURE_KEYS:\n", " # Build a vocabulary for this feature.\n", " outputs[key] = tft.compute_and_apply_vocabulary(\n", " _fill_in_missing(inputs[key]),\n", " top_k=_VOCAB_SIZE,\n", " num_oov_buckets=_OOV_SIZE)\n", "\n", " for key in _BUCKET_FEATURE_KEYS:\n", " outputs[key] = tft.bucketize(\n", " _fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT)\n", "\n", " for key in _CATEGORICAL_FEATURE_KEYS:\n", " outputs[key] = _fill_in_missing(inputs[key])\n", "\n", " # Was this passenger a big tipper?\n", " taxi_fare = _fill_in_missing(inputs[_FARE_KEY])\n", " tips = _fill_in_missing(inputs[_LABEL_KEY])\n", " outputs[_LABEL_KEY] = tf.where(\n", " tf.math.is_nan(taxi_fare),\n", " tf.cast(tf.zeros_like(taxi_fare), tf.int64),\n", " # Test if the tip was > 20% of the fare.\n", " tf.cast(\n", " tf.greater(tips, tf.multiply(taxi_fare, tf.constant(0.2))), tf.int64))\n", "\n", " return outputs\n", "\n", "\n", "def _fill_in_missing(x):\n", " \"\"\"Replace missing values in a SparseTensor.\n", " Fills in missing values of `x` with '' or 0, and converts to a dense tensor.\n", " Args:\n", " x: A `SparseTensor` of rank 2. 
Its dense shape should have size at most 1\n", " in the second dimension.\n", " Returns:\n", " A rank 1 tensor where missing values of `x` have been filled in.\n", " \"\"\"\n", " if not isinstance(x, tf.sparse.SparseTensor):\n", " return x\n", "\n", " default_value = '' if x.dtype == tf.string else 0\n", " return tf.squeeze(\n", " tf.sparse.to_dense(\n", " tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),\n", " default_value),\n", " axis=1)" ] }, { "cell_type": "markdown", "metadata": { "id": "wgbmZr3sgbWW" }, "source": [ "Now, we pass in this feature engineering code to the `Transform` component and run it to transform your data." ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:19:03.067038Z", "iopub.status.busy": "2024-08-02T09:19:03.066495Z", "iopub.status.idle": "2024-08-02T09:19:29.669532Z", "shell.execute_reply": "2024-08-02T09:19:29.668825Z" }, "id": "jHfhth_GiZI9" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Generating ephemeral wheel package for '/tmpfs/src/temp/docs/tutorials/tfx/taxi_transform.py' (including modules: ['taxi_transform', 'taxi_constants']).\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:User module package has hash fingerprint version f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '/tmpfs/tmp/tmp3yrnrgjr/_tfx_generated_setup.py', 'bdist_wheel', '--bdist-dir', '/tmpfs/tmp/tmp0217gwc_', '--dist-dir', '/tmpfs/tmp/tmp9vevtaee']\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.\n", "!!\n", "\n", " ********************************************************************************\n", " Please avoid running ``setup.py`` directly.\n", " Instead, use pypa/build, pypa/installer or other\n", " standards-based tools.\n", "\n", " See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.\n", " ********************************************************************************\n", "\n", "!!\n", " self.initialize_options()\n", "INFO:absl:Successfully built user code wheel distribution at '/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'; target user module is 'taxi_transform'.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Full user module path is 'taxi_transform@/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running driver for Transform\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "running bdist_wheel\n", "running build\n", "running build_py\n", "creating build\n", "creating build/lib\n", "copying taxi_transform.py -> build/lib\n", "copying taxi_constants.py -> build/lib\n", "installing to /tmpfs/tmp/tmp0217gwc_\n", "running install\n", "running install_lib\n", "copying build/lib/taxi_transform.py -> 
/tmpfs/tmp/tmp0217gwc_\n", "copying build/lib/taxi_constants.py -> /tmpfs/tmp/tmp0217gwc_\n", "running install_egg_info\n", "running egg_info\n", "creating tfx_user_code_Transform.egg-info\n", "writing tfx_user_code_Transform.egg-info/PKG-INFO\n", "writing dependency_links to tfx_user_code_Transform.egg-info/dependency_links.txt\n", "writing top-level names to tfx_user_code_Transform.egg-info/top_level.txt\n", "writing manifest file 'tfx_user_code_Transform.egg-info/SOURCES.txt'\n", "reading manifest file 'tfx_user_code_Transform.egg-info/SOURCES.txt'\n", "writing manifest file 'tfx_user_code_Transform.egg-info/SOURCES.txt'\n", "Copying tfx_user_code_Transform.egg-info to /tmpfs/tmp/tmp0217gwc_/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3.9.egg-info\n", "running install_scripts\n", "creating /tmpfs/tmp/tmp0217gwc_/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424.dist-info/WHEEL\n", "creating '/tmpfs/tmp/tmp9vevtaee/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl' and adding '/tmpfs/tmp/tmp0217gwc_' to it\n", "adding 'taxi_constants.py'\n", "adding 'taxi_transform.py'\n", "adding 'tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424.dist-info/METADATA'\n", "adding 'tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424.dist-info/WHEEL'\n", "adding 'tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424.dist-info/top_level.txt'\n", "adding 'tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424.dist-info/RECORD'\n", "removing /tmpfs/tmp/tmp0217gwc_\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running executor for Transform\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Analyze the 'train' split and transform all splits when splits_config is not set.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:udf_utils.get_fn {'module_file': None, 'module_path': 'taxi_transform@/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl', 'preprocessing_fn': None} 'preprocessing_fn'\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Installing '/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl' to a temporary directory.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '-m', 'pip', 'install', '--target', '/tmpfs/tmp/tmp64ci87py', '/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Processing /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Successfully installed 
'/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:udf_utils.get_fn {'module_file': None, 'module_path': 'taxi_transform@/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl', 'stats_options_updater_fn': None} 'stats_options_updater_fn'\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Installing '/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl' to a temporary directory.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '-m', 'pip', 'install', '--target', '/tmpfs/tmp/tmp6lu7e3bz', '/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Installing collected packages: tfx-user-code-Transform\n", "Successfully installed tfx-user-code-Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Processing /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Successfully installed '/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Installing '/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl' to a temporary directory.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '-m', 'pip', 'install', '--target', '/tmpfs/tmp/tmpia326ow3', '/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Installing collected packages: tfx-user-code-Transform\n", "Successfully installed tfx-user-code-Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Processing /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Successfully installed '/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature company has no 
shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_census_tract has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_community_area has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_latitude has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_longitude has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature fare has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature payment_type has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_census_tract has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_community_area has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_latitude has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_longitude has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature tips has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_miles has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_seconds has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_day has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_hour has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_month has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_timestamp has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Installing collected packages: tfx-user-code-Transform\n", "Successfully installed tfx-user-code-Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:absl:Tables initialized inside a tf.function will be re-initialized on every invocation of the function. This re-initialization can have significant impact on performance. Consider lifting them out of the graph context using `tf.init_scope`.: compute_and_apply_vocabulary/apply_vocab/text_file_init/InitializeTableFromTextFileV2\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:absl:Tables initialized inside a tf.function will be re-initialized on every invocation of the function. This re-initialization can have significant impact on performance. Consider lifting them out of the graph context using `tf.init_scope`.: compute_and_apply_vocabulary_1/apply_vocab/text_file_init/InitializeTableFromTextFileV2\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Assets written to: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transform_graph/5/.temp_path/tftransform_tmp/f6e1c2a9c50f4ae389af51a32aa3d8bf/assets\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Writing fingerprint to /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transform_graph/5/.temp_path/tftransform_tmp/f6e1c2a9c50f4ae389af51a32aa3d8bf/fingerprint.pb\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:struct2tensor is not available.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:tensorflow_decision_forests is not available.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:tensorflow_text is not available.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Assets written to: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transform_graph/5/.temp_path/tftransform_tmp/ffdcaaffed72449194bac42d258bd914/assets\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Writing fingerprint to /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transform_graph/5/.temp_path/tftransform_tmp/ffdcaaffed72449194bac42d258bd914/fingerprint.pb\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature company has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_census_tract has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_community_area has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_latitude has a shape . 
Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_longitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature fare has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature payment_type has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_census_tract has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_community_area has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_latitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_longitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature tips has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_miles has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_seconds has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_day has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_hour has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_month has a shape . Setting to DenseTensor.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:struct2tensor is not available.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:tensorflow_decision_forests is not available.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:tensorflow_text is not available.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:struct2tensor is not available.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:tensorflow_decision_forests is not available.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:tensorflow_text is not available.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running publisher for Transform\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "data": { "text/html": [ "\n", "\n", "
ExecutionResult at 0x7fe72059dcd0
.execution_id5
.component\n", "\n", "
Transform at 0x7fe85c0ccb80
.inputs
['examples']\n", "\n", "
Channel of type 'Examples' (1 artifact) at 0x7fe7381467f0
.type_nameExamples
._artifacts
[0]\n", "\n", "
Artifact of type 'Examples' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/CsvExampleGen/examples/1) at 0x7fe8807f6f40
.type<class 'tfx.types.standard_artifacts.Examples'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/CsvExampleGen/examples/1
.span0
.split_names["train", "eval"]
.version0
['schema']\n", "\n", "
Channel of type 'Schema' (1 artifact) at 0x7fe85fad56a0
.type_nameSchema
._artifacts
[0]\n", "\n", "
Artifact of type 'Schema' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/SchemaGen/schema/3) at 0x7fe85c14d370
.type<class 'tfx.types.standard_artifacts.Schema'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/SchemaGen/schema/3
.outputs
['transform_graph']\n", "\n", "
Channel of type 'TransformGraph' (1 artifact) at 0x7fe85fadaac0
.type_nameTransformGraph
._artifacts
[0]\n", "\n", "
Artifact of type 'TransformGraph' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transform_graph/5) at 0x7fe85fae0280
.type<class 'tfx.types.standard_artifacts.TransformGraph'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transform_graph/5
['transformed_examples']\n", "\n", "
Channel of type 'Examples' (1 artifact) at 0x7fe85fada5e0
.type_nameExamples
._artifacts
[0]\n", "\n", "
Artifact of type 'Examples' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transformed_examples/5) at 0x7fe8807f6a00
.type<class 'tfx.types.standard_artifacts.Examples'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transformed_examples/5
.span0
.split_names["eval", "train"]
.version0
['updated_analyzer_cache']\n", "\n", "
Channel of type 'TransformCache' (1 artifact) at 0x7fe85fada0a0
.type_nameTransformCache
._artifacts
[0]\n", "\n", "
Artifact of type 'TransformCache' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/updated_analyzer_cache/5) at 0x7fe85fad5400
.type<class 'tfx.types.standard_artifacts.TransformCache'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/updated_analyzer_cache/5
['pre_transform_schema']\n", "\n", "
Channel of type 'Schema' (1 artifact) at 0x7fe85fada880
.type_nameSchema
._artifacts
[0]\n", "\n", "
Artifact of type 'Schema' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/pre_transform_schema/5) at 0x7fe85fae02e0
.type<class 'tfx.types.standard_artifacts.Schema'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/pre_transform_schema/5
['pre_transform_stats']\n", "\n", "
Channel of type 'ExampleStatistics' (1 artifact) at 0x7fe85fc9f0a0
.type_nameExampleStatistics
._artifacts
[0]\n", "\n", "
Artifact of type 'ExampleStatistics' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/pre_transform_stats/5) at 0x7fe85fae0580
.type<class 'tfx.types.standard_artifacts.ExampleStatistics'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/pre_transform_stats/5
.span0
.split_names
['post_transform_schema']\n", "\n", "
Channel of type 'Schema' (1 artifact) at 0x7fe85fc74d60
.type_nameSchema
._artifacts
[0]\n", "\n", "
Artifact of type 'Schema' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/post_transform_schema/5) at 0x7fe85fadab80
.type<class 'tfx.types.standard_artifacts.Schema'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/post_transform_schema/5
['post_transform_stats']\n", "\n", "
Channel of type 'ExampleStatistics' (1 artifact) at 0x7fe85fae0ca0
.type_nameExampleStatistics
._artifacts
[0]\n", "\n", "
Artifact of type 'ExampleStatistics' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/post_transform_stats/5) at 0x7fe85c13f100
.type<class 'tfx.types.standard_artifacts.ExampleStatistics'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/post_transform_stats/5
.span0
.split_names
['post_transform_anomalies']\n", "\n", "
Channel of type 'ExampleAnomalies' (1 artifact) at 0x7fe85fae03a0
.type_nameExampleAnomalies
._artifacts
[0]\n", "\n", "
Artifact of type 'ExampleAnomalies' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/post_transform_anomalies/5) at 0x7fe85c13f2e0
.type<class 'tfx.types.standard_artifacts.ExampleAnomalies'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/post_transform_anomalies/5
.span0
.split_names
.exec_properties
['module_file']None
['preprocessing_fn']None
['stats_options_updater_fn']None
['force_tf_compat_v1']0
['custom_config']null
['splits_config']None
['disable_statistics']0
['module_path']taxi_transform@/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl
.component.inputs
['examples']\n", "\n", "
Channel of type 'Examples' (1 artifact) at 0x7fe7381467f0
.type_nameExamples
._artifacts
[0]\n", "\n", "
Artifact of type 'Examples' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/CsvExampleGen/examples/1) at 0x7fe8807f6f40
.type<class 'tfx.types.standard_artifacts.Examples'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/CsvExampleGen/examples/1
.span0
.split_names["train", "eval"]
.version0
['schema']\n", "\n", "
Channel of type 'Schema' (1 artifact) at 0x7fe85fad56a0
.type_nameSchema
._artifacts
[0]\n", "\n", "
Artifact of type 'Schema' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/SchemaGen/schema/3) at 0x7fe85c14d370
.type<class 'tfx.types.standard_artifacts.Schema'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/SchemaGen/schema/3
.component.outputs
['transform_graph']\n", "\n", "
Channel of type 'TransformGraph' (1 artifact) at 0x7fe85fadaac0
.type_nameTransformGraph
._artifacts
[0]\n", "\n", "
Artifact of type 'TransformGraph' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transform_graph/5) at 0x7fe85fae0280
.type<class 'tfx.types.standard_artifacts.TransformGraph'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transform_graph/5
['transformed_examples']\n", "\n", "
Channel of type 'Examples' (1 artifact) at 0x7fe85fada5e0
.type_nameExamples
._artifacts
[0]\n", "\n", "
Artifact of type 'Examples' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transformed_examples/5) at 0x7fe8807f6a00
.type<class 'tfx.types.standard_artifacts.Examples'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transformed_examples/5
.span0
.split_names["eval", "train"]
.version0
['updated_analyzer_cache']\n", "\n", "
Channel of type 'TransformCache' (1 artifact) at 0x7fe85fada0a0
.type_nameTransformCache
._artifacts
[0]\n", "\n", "
Artifact of type 'TransformCache' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/updated_analyzer_cache/5) at 0x7fe85fad5400
.type<class 'tfx.types.standard_artifacts.TransformCache'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/updated_analyzer_cache/5
['pre_transform_schema']\n", "\n", "
Channel of type 'Schema' (1 artifact) at 0x7fe85fada880
.type_nameSchema
._artifacts
[0]\n", "\n", "
Artifact of type 'Schema' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/pre_transform_schema/5) at 0x7fe85fae02e0
.type<class 'tfx.types.standard_artifacts.Schema'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/pre_transform_schema/5
['pre_transform_stats']\n", "\n", "
Channel of type 'ExampleStatistics' (1 artifact) at 0x7fe85fc9f0a0
.type_nameExampleStatistics
._artifacts
[0]\n", "\n", "
Artifact of type 'ExampleStatistics' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/pre_transform_stats/5) at 0x7fe85fae0580
.type<class 'tfx.types.standard_artifacts.ExampleStatistics'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/pre_transform_stats/5
.span0
.split_names
['post_transform_schema']\n", "\n", "
Channel of type 'Schema' (1 artifact) at 0x7fe85fc74d60
.type_nameSchema
._artifacts
[0]\n", "\n", "
Artifact of type 'Schema' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/post_transform_schema/5) at 0x7fe85fadab80
.type<class 'tfx.types.standard_artifacts.Schema'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/post_transform_schema/5
['post_transform_stats']\n", "\n", "
Channel of type 'ExampleStatistics' (1 artifact) at 0x7fe85fae0ca0
.type_nameExampleStatistics
._artifacts
[0]\n", "\n", "
Artifact of type 'ExampleStatistics' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/post_transform_stats/5) at 0x7fe85c13f100
.type<class 'tfx.types.standard_artifacts.ExampleStatistics'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/post_transform_stats/5
.span0
.split_names
['post_transform_anomalies']\n", "\n", "
Channel of type 'ExampleAnomalies' (1 artifact) at 0x7fe85fae03a0
.type_nameExampleAnomalies
._artifacts
[0]\n", "\n", "
Artifact of type 'ExampleAnomalies' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/post_transform_anomalies/5) at 0x7fe85c13f2e0
.type<class 'tfx.types.standard_artifacts.ExampleAnomalies'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/post_transform_anomalies/5
.span0
.split_names
" ], "text/plain": [ "ExecutionResult(\n", " component_id: Transform\n", " execution_id: 5\n", " outputs:\n", " transform_graph: OutputChannel(artifact_type=TransformGraph, producer_component_id=Transform, output_key=transform_graph, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False)\n", " transformed_examples: OutputChannel(artifact_type=Examples, producer_component_id=Transform, output_key=transformed_examples, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False)\n", " updated_analyzer_cache: OutputChannel(artifact_type=TransformCache, producer_component_id=Transform, output_key=updated_analyzer_cache, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False)\n", " pre_transform_schema: OutputChannel(artifact_type=Schema, producer_component_id=Transform, output_key=pre_transform_schema, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False)\n", " pre_transform_stats: OutputChannel(artifact_type=ExampleStatistics, producer_component_id=Transform, output_key=pre_transform_stats, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False)\n", " post_transform_schema: OutputChannel(artifact_type=Schema, producer_component_id=Transform, output_key=post_transform_schema, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False)\n", " post_transform_stats: OutputChannel(artifact_type=ExampleStatistics, producer_component_id=Transform, output_key=post_transform_stats, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False)\n", " post_transform_anomalies: OutputChannel(artifact_type=ExampleAnomalies, producer_component_id=Transform, output_key=post_transform_anomalies, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False))" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "transform = tfx.components.Transform(\n", " examples=example_gen.outputs['examples'],\n", " schema=schema_gen.outputs['schema'],\n", " module_file=os.path.abspath(_taxi_transform_module_file))\n", "context.run(transform)" ] }, { "cell_type": "markdown", "metadata": { "id": "fwAwb4rARRQ2" }, "source": [ "Let's examine the output artifacts of `Transform`. This component produces two types of outputs:\n", "\n", "* `transform_graph` is the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).\n", "* `transformed_examples` represents the preprocessed training and evaluation data." 
] }, { "cell_type": "code", "execution_count": 24, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:19:29.674541Z", "iopub.status.busy": "2024-08-02T09:19:29.673817Z", "iopub.status.idle": "2024-08-02T09:19:29.678701Z", "shell.execute_reply": "2024-08-02T09:19:29.678079Z" }, "id": "SClrAaEGR1O5" }, "outputs": [ { "data": { "text/plain": [ "{'transform_graph': OutputChannel(artifact_type=TransformGraph, producer_component_id=Transform, output_key=transform_graph, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False),\n", " 'transformed_examples': OutputChannel(artifact_type=Examples, producer_component_id=Transform, output_key=transformed_examples, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False),\n", " 'updated_analyzer_cache': OutputChannel(artifact_type=TransformCache, producer_component_id=Transform, output_key=updated_analyzer_cache, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False),\n", " 'pre_transform_schema': OutputChannel(artifact_type=Schema, producer_component_id=Transform, output_key=pre_transform_schema, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False),\n", " 'pre_transform_stats': OutputChannel(artifact_type=ExampleStatistics, producer_component_id=Transform, output_key=pre_transform_stats, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False),\n", " 'post_transform_schema': OutputChannel(artifact_type=Schema, producer_component_id=Transform, output_key=post_transform_schema, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False),\n", " 'post_transform_stats': OutputChannel(artifact_type=ExampleStatistics, producer_component_id=Transform, output_key=post_transform_stats, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False),\n", " 'post_transform_anomalies': OutputChannel(artifact_type=ExampleAnomalies, producer_component_id=Transform, output_key=post_transform_anomalies, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False)}" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "transform.outputs" ] }, { "cell_type": "markdown", "metadata": { "id": "vyFkBd9AR1sy" }, "source": [ "Take a peek at the `transform_graph` artifact. It points to a directory containing three subdirectories." ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:19:29.682250Z", "iopub.status.busy": "2024-08-02T09:19:29.681855Z", "iopub.status.idle": "2024-08-02T09:19:29.686715Z", "shell.execute_reply": "2024-08-02T09:19:29.686102Z" }, "id": "5tRw4DneR3i7" }, "outputs": [ { "data": { "text/plain": [ "['metadata', 'transformed_metadata', 'transform_fn']" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "train_uri = transform.outputs['transform_graph'].get()[0].uri\n", "os.listdir(train_uri)" ] }, { "cell_type": "markdown", "metadata": { "id": "4fqV54CIR6Pu" }, "source": [ "The `transformed_metadata` subdirectory contains the schema of the preprocessed data. The `transform_fn` subdirectory contains the actual preprocessing graph. 
The `metadata` subdirectory contains the schema of the original data.\n", "\n", "We can also take a look at the first three transformed examples:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:19:29.690240Z", "iopub.status.busy": "2024-08-02T09:19:29.689727Z", "iopub.status.idle": "2024-08-02T09:19:29.729098Z", "shell.execute_reply": "2024-08-02T09:19:29.728414Z" }, "id": "pwbW2zPKR_S4" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "features {\n", " feature {\n", " key: \"company\"\n", " value {\n", " int64_list {\n", " value: 8\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_census_tract\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_community_area\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_latitude\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_longitude\"\n", " value {\n", " int64_list {\n", " value: 9\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"fare\"\n", " value {\n", " float_list {\n", " value: 0.061060599982738495\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"payment_type\"\n", " value {\n", " int64_list {\n", " value: 1\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_census_tract\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_community_area\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_latitude\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_longitude\"\n", " value {\n", " int64_list {\n", " value: 9\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"tips\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_miles\"\n", " value {\n", " float_list {\n", " value: -0.15886740386486053\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_seconds\"\n", " value {\n", " float_list {\n", " value: -0.7118487358093262\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_day\"\n", " value {\n", " int64_list {\n", " value: 6\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_hour\"\n", " value {\n", " int64_list {\n", " value: 19\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_month\"\n", " value {\n", " int64_list {\n", " value: 5\n", " }\n", " }\n", " }\n", "}\n", "\n", "features {\n", " feature {\n", " key: \"company\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_census_tract\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_community_area\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_latitude\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_longitude\"\n", " value {\n", " int64_list {\n", " value: 9\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"fare\"\n", " value {\n", " float_list {\n", " value: 1.2521240711212158\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"payment_type\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_census_tract\"\n", " value {\n", " 
int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_community_area\"\n", " value {\n", " int64_list {\n", " value: 60\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_latitude\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_longitude\"\n", " value {\n", " int64_list {\n", " value: 3\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"tips\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_miles\"\n", " value {\n", " float_list {\n", " value: 0.532160758972168\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_seconds\"\n", " value {\n", " float_list {\n", " value: 0.5509493350982666\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_day\"\n", " value {\n", " int64_list {\n", " value: 3\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_hour\"\n", " value {\n", " int64_list {\n", " value: 2\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_month\"\n", " value {\n", " int64_list {\n", " value: 10\n", " }\n", " }\n", " }\n", "}\n", "\n", "features {\n", " feature {\n", " key: \"company\"\n", " value {\n", " int64_list {\n", " value: 48\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_census_tract\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_community_area\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_latitude\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"dropoff_longitude\"\n", " value {\n", " int64_list {\n", " value: 9\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"fare\"\n", " value {\n", " float_list {\n", " value: 0.3873794376850128\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"payment_type\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_census_tract\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_community_area\"\n", " value {\n", " int64_list {\n", " value: 13\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_latitude\"\n", " value {\n", " int64_list {\n", " value: 9\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"pickup_longitude\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"tips\"\n", " value {\n", " int64_list {\n", " value: 0\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_miles\"\n", " value {\n", " float_list {\n", " value: 0.21955278515815735\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_seconds\"\n", " value {\n", " float_list {\n", " value: 0.0019067146349698305\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_day\"\n", " value {\n", " int64_list {\n", " value: 3\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_hour\"\n", " value {\n", " int64_list {\n", " value: 12\n", " }\n", " }\n", " }\n", " feature {\n", " key: \"trip_start_month\"\n", " value {\n", " int64_list {\n", " value: 11\n", " }\n", " }\n", " }\n", "}\n", "\n" ] } ], "source": [ "# Get the URI of the output artifact representing the transformed examples, which is a directory\n", "train_uri = os.path.join(transform.outputs['transformed_examples'].get()[0].uri, 'Split-train')\n", "\n", "# Get the list of files in this directory (all compressed TFRecord 
files)\n", "tfrecord_filenames = [os.path.join(train_uri, name)\n", " for name in os.listdir(train_uri)]\n", "\n", "# Create a `TFRecordDataset` to read these files\n", "dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type=\"GZIP\")\n", "\n", "# Iterate over the first 3 records and decode them.\n", "for tfrecord in dataset.take(3):\n", " serialized_example = tfrecord.numpy()\n", " example = tf.train.Example()\n", " example.ParseFromString(serialized_example)\n", " pp.pprint(example)" ] }, { "cell_type": "markdown", "metadata": { "id": "q_b_V6eN4f69" }, "source": [ "After the `Transform` component has transformed your data into features, and the next step is to train a model." ] }, { "cell_type": "markdown", "metadata": { "id": "OBJFtnl6lCg9" }, "source": [ "### Trainer\n", "The `Trainer` component will train a model that you define in TensorFlow (either using the Estimator API or the Keras API with [`model_to_estimator`](https://www.tensorflow.org/api_docs/python/tf/keras/estimator/model_to_estimator)).\n", "\n", "`Trainer` takes as input the schema from `SchemaGen`, the transformed data and graph from `Transform`, training parameters, as well as a module that contains user-defined model code.\n", "\n", "Let's see an example of user-defined model code below (for an introduction to the TensorFlow Estimator APIs, [see the tutorial](https://www.tensorflow.org/tutorials/estimator/premade)):" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:19:29.732568Z", "iopub.status.busy": "2024-08-02T09:19:29.732061Z", "iopub.status.idle": "2024-08-02T09:19:29.735430Z", "shell.execute_reply": "2024-08-02T09:19:29.734749Z" }, "id": "N1376oq04YJt" }, "outputs": [], "source": [ "_taxi_trainer_module_file = 'taxi_trainer.py'" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:19:29.738969Z", "iopub.status.busy": "2024-08-02T09:19:29.738695Z", "iopub.status.idle": "2024-08-02T09:19:29.747478Z", "shell.execute_reply": "2024-08-02T09:19:29.746778Z" }, "id": "nf9UuNng4YJu" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Writing taxi_trainer.py\n" ] } ], "source": [ "%%writefile {_taxi_trainer_module_file}\n", "\n", "import tensorflow as tf\n", "import tensorflow_model_analysis as tfma\n", "import tensorflow_transform as tft\n", "from tensorflow_transform.tf_metadata import schema_utils\n", "from tfx_bsl.tfxio import dataset_options\n", "\n", "import taxi_constants\n", "\n", "_DENSE_FLOAT_FEATURE_KEYS = taxi_constants.DENSE_FLOAT_FEATURE_KEYS\n", "_VOCAB_FEATURE_KEYS = taxi_constants.VOCAB_FEATURE_KEYS\n", "_VOCAB_SIZE = taxi_constants.VOCAB_SIZE\n", "_OOV_SIZE = taxi_constants.OOV_SIZE\n", "_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT\n", "_BUCKET_FEATURE_KEYS = taxi_constants.BUCKET_FEATURE_KEYS\n", "_CATEGORICAL_FEATURE_KEYS = taxi_constants.CATEGORICAL_FEATURE_KEYS\n", "_MAX_CATEGORICAL_FEATURE_VALUES = taxi_constants.MAX_CATEGORICAL_FEATURE_VALUES\n", "_LABEL_KEY = taxi_constants.LABEL_KEY\n", "\n", "\n", "# Tf.Transform considers these features as \"raw\"\n", "def _get_raw_feature_spec(schema):\n", " return schema_utils.schema_as_feature_spec(schema).feature_spec\n", "\n", "\n", "def _build_estimator(config, hidden_units=None, warm_start_from=None):\n", " \"\"\"Build an estimator for predicting the tipping behavior of taxi riders.\n", " Args:\n", " config: tf.estimator.RunConfig defining the runtime environment for 
the\n", " estimator (including model_dir).\n", " hidden_units: [int], the layer sizes of the DNN (input layer first)\n", " warm_start_from: Optional directory to warm start from.\n", " Returns:\n", " A dict of the following:\n", " - estimator: The estimator that will be used for training and eval.\n", " - train_spec: Spec for training.\n", " - eval_spec: Spec for eval.\n", " - eval_input_receiver_fn: Input function for eval.\n", " \"\"\"\n", " real_valued_columns = [\n", " tf.feature_column.numeric_column(key, shape=())\n", " for key in _DENSE_FLOAT_FEATURE_KEYS\n", " ]\n", " categorical_columns = [\n", " tf.feature_column.categorical_column_with_identity(\n", " key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)\n", " for key in _VOCAB_FEATURE_KEYS\n", " ]\n", " categorical_columns += [\n", " tf.feature_column.categorical_column_with_identity(\n", " key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)\n", " for key in _BUCKET_FEATURE_KEYS\n", " ]\n", " categorical_columns += [\n", " tf.feature_column.categorical_column_with_identity( # pylint: disable=g-complex-comprehension\n", " key,\n", " num_buckets=num_buckets,\n", " default_value=0) for key, num_buckets in zip(\n", " _CATEGORICAL_FEATURE_KEYS,\n", " _MAX_CATEGORICAL_FEATURE_VALUES)\n", " ]\n", " return tf.estimator.DNNLinearCombinedClassifier(\n", " config=config,\n", " linear_feature_columns=categorical_columns,\n", " dnn_feature_columns=real_valued_columns,\n", " dnn_hidden_units=hidden_units or [100, 70, 50, 25],\n", " warm_start_from=warm_start_from)\n", "\n", "\n", "def _example_serving_receiver_fn(tf_transform_graph, schema):\n", " \"\"\"Build the serving in inputs.\n", " Args:\n", " tf_transform_graph: A TFTransformOutput.\n", " schema: the schema of the input data.\n", " Returns:\n", " Tensorflow graph which parses examples, applying tf-transform to them.\n", " \"\"\"\n", " raw_feature_spec = _get_raw_feature_spec(schema)\n", " raw_feature_spec.pop(_LABEL_KEY)\n", "\n", " raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(\n", " raw_feature_spec, default_batch_size=None)\n", " serving_input_receiver = raw_input_fn()\n", "\n", " transformed_features = tf_transform_graph.transform_raw_features(\n", " serving_input_receiver.features)\n", "\n", " return tf.estimator.export.ServingInputReceiver(\n", " transformed_features, serving_input_receiver.receiver_tensors)\n", "\n", "\n", "def _eval_input_receiver_fn(tf_transform_graph, schema):\n", " \"\"\"Build everything needed for the tf-model-analysis to run the model.\n", " Args:\n", " tf_transform_graph: A TFTransformOutput.\n", " schema: the schema of the input data.\n", " Returns:\n", " EvalInputReceiver function, which contains:\n", " - Tensorflow graph which parses raw untransformed features, applies the\n", " tf-transform preprocessing operators.\n", " - Set of raw, untransformed features.\n", " - Label against which predictions will be compared.\n", " \"\"\"\n", " # Notice that the inputs are raw features, not transformed features here.\n", " raw_feature_spec = _get_raw_feature_spec(schema)\n", "\n", " serialized_tf_example = tf.compat.v1.placeholder(\n", " dtype=tf.string, shape=[None], name='input_example_tensor')\n", "\n", " # Add a parse_example operator to the tensorflow graph, which will parse\n", " # raw, untransformed, tf examples.\n", " features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)\n", "\n", " # Now that we have our raw examples, process them through the tf-transform\n", " # function computed during the 
preprocessing step.\n", " transformed_features = tf_transform_graph.transform_raw_features(\n", " features)\n", "\n", " # The key name MUST be 'examples'.\n", " receiver_tensors = {'examples': serialized_tf_example}\n", "\n", " # NOTE: Model is driven by transformed features (since training works on the\n", " # materialized output of TFT, but slicing will happen on raw features.\n", " features.update(transformed_features)\n", "\n", " return tfma.export.EvalInputReceiver(\n", " features=features,\n", " receiver_tensors=receiver_tensors,\n", " labels=transformed_features[_LABEL_KEY])\n", "\n", "\n", "def _input_fn(file_pattern, data_accessor, tf_transform_output, batch_size=200):\n", " \"\"\"Generates features and label for tuning/training.\n", "\n", " Args:\n", " file_pattern: List of paths or patterns of input tfrecord files.\n", " data_accessor: DataAccessor for converting input to RecordBatch.\n", " tf_transform_output: A TFTransformOutput.\n", " batch_size: representing the number of consecutive elements of returned\n", " dataset to combine in a single batch\n", "\n", " Returns:\n", " A dataset that contains (features, indices) tuple where features is a\n", " dictionary of Tensors, and indices is a single Tensor of label indices.\n", " \"\"\"\n", " return data_accessor.tf_dataset_factory(\n", " file_pattern,\n", " dataset_options.TensorFlowDatasetOptions(\n", " batch_size=batch_size, label_key=_LABEL_KEY),\n", " tf_transform_output.transformed_metadata.schema)\n", "\n", "\n", "# TFX will call this function\n", "def trainer_fn(trainer_fn_args, schema):\n", " \"\"\"Build the estimator using the high level API.\n", " Args:\n", " trainer_fn_args: Holds args used to train the model as name/value pairs.\n", " schema: Holds the schema of the training examples.\n", " Returns:\n", " A dict of the following:\n", " - estimator: The estimator that will be used for training and eval.\n", " - train_spec: Spec for training.\n", " - eval_spec: Spec for eval.\n", " - eval_input_receiver_fn: Input function for eval.\n", " \"\"\"\n", " # Number of nodes in the first layer of the DNN\n", " first_dnn_layer_size = 100\n", " num_dnn_layers = 4\n", " dnn_decay_factor = 0.7\n", "\n", " train_batch_size = 40\n", " eval_batch_size = 40\n", "\n", " tf_transform_graph = tft.TFTransformOutput(trainer_fn_args.transform_output)\n", "\n", " train_input_fn = lambda: _input_fn( # pylint: disable=g-long-lambda\n", " trainer_fn_args.train_files,\n", " trainer_fn_args.data_accessor,\n", " tf_transform_graph,\n", " batch_size=train_batch_size)\n", "\n", " eval_input_fn = lambda: _input_fn( # pylint: disable=g-long-lambda\n", " trainer_fn_args.eval_files,\n", " trainer_fn_args.data_accessor,\n", " tf_transform_graph,\n", " batch_size=eval_batch_size)\n", "\n", " train_spec = tf.estimator.TrainSpec( # pylint: disable=g-long-lambda\n", " train_input_fn,\n", " max_steps=trainer_fn_args.train_steps)\n", "\n", " serving_receiver_fn = lambda: _example_serving_receiver_fn( # pylint: disable=g-long-lambda\n", " tf_transform_graph, schema)\n", "\n", " exporter = tf.estimator.FinalExporter('chicago-taxi', serving_receiver_fn)\n", " eval_spec = tf.estimator.EvalSpec(\n", " eval_input_fn,\n", " steps=trainer_fn_args.eval_steps,\n", " exporters=[exporter],\n", " name='chicago-taxi-eval')\n", "\n", " run_config = tf.estimator.RunConfig(\n", " save_checkpoints_steps=999, keep_checkpoint_max=1)\n", "\n", " run_config = run_config.replace(model_dir=trainer_fn_args.serving_model_dir)\n", "\n", " estimator = _build_estimator(\n", " # Construct 
layer sizes with exponential decay\n", " hidden_units=[\n", " max(2, int(first_dnn_layer_size * dnn_decay_factor**i))\n", " for i in range(num_dnn_layers)\n", " ],\n", " config=run_config,\n", " warm_start_from=trainer_fn_args.base_model)\n", "\n", " # Create an input receiver for TFMA processing\n", " receiver_fn = lambda: _eval_input_receiver_fn( # pylint: disable=g-long-lambda\n", " tf_transform_graph, schema)\n", "\n", " return {\n", " 'estimator': estimator,\n", " 'train_spec': train_spec,\n", " 'eval_spec': eval_spec,\n", " 'eval_input_receiver_fn': receiver_fn\n", " }" ] }, { "cell_type": "markdown", "metadata": { "id": "GY4yTRaX4YJx" }, "source": [ "Now, we pass in this model code to the `Trainer` component and run it to train the model." ] }, { "cell_type": "code", "execution_count": 29, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:19:29.750821Z", "iopub.status.busy": "2024-08-02T09:19:29.750573Z", "iopub.status.idle": "2024-08-02T09:22:05.758939Z", "shell.execute_reply": "2024-08-02T09:22:05.758252Z" }, "id": "429-vvCWibO0" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "WARNING:absl:`custom_executor_spec` is deprecated. Please customize component directly.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Generating ephemeral wheel package for '/tmpfs/src/temp/docs/tutorials/tfx/taxi_trainer.py' (including modules: ['taxi_trainer', 'taxi_transform', 'taxi_constants']).\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:User module package has hash fingerprint version e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '/tmpfs/tmp/tmpxoj_le3k/_tfx_generated_setup.py', 'bdist_wheel', '--bdist-dir', '/tmpfs/tmp/tmpuykvsdzd', '--dist-dir', '/tmpfs/tmp/tmpyo330uub']\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.\n", "!!\n", "\n", " ********************************************************************************\n", " Please avoid running ``setup.py`` directly.\n", " Instead, use pypa/build, pypa/installer or other\n", " standards-based tools.\n", "\n", " See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.\n", " ********************************************************************************\n", "\n", "!!\n", " self.initialize_options()\n", "INFO:absl:Successfully built user code wheel distribution at '/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Trainer-0.0+e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618-py3-none-any.whl'; target user module is 'taxi_trainer'.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Full user module path is 'taxi_trainer@/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Trainer-0.0+e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618-py3-none-any.whl'\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running driver for Trainer\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "running bdist_wheel\n", "running build\n", "running build_py\n", "creating build\n",
"creating build/lib\n", "copying taxi_trainer.py -> build/lib\n", "copying taxi_transform.py -> build/lib\n", "copying taxi_constants.py -> build/lib\n", "installing to /tmpfs/tmp/tmpuykvsdzd\n", "running install\n", "running install_lib\n", "copying build/lib/taxi_trainer.py -> /tmpfs/tmp/tmpuykvsdzd\n", "copying build/lib/taxi_transform.py -> /tmpfs/tmp/tmpuykvsdzd\n", "copying build/lib/taxi_constants.py -> /tmpfs/tmp/tmpuykvsdzd\n", "running install_egg_info\n", "running egg_info\n", "creating tfx_user_code_Trainer.egg-info\n", "writing tfx_user_code_Trainer.egg-info/PKG-INFO\n", "writing dependency_links to tfx_user_code_Trainer.egg-info/dependency_links.txt\n", "writing top-level names to tfx_user_code_Trainer.egg-info/top_level.txt\n", "writing manifest file 'tfx_user_code_Trainer.egg-info/SOURCES.txt'\n", "reading manifest file 'tfx_user_code_Trainer.egg-info/SOURCES.txt'\n", "writing manifest file 'tfx_user_code_Trainer.egg-info/SOURCES.txt'\n", "Copying tfx_user_code_Trainer.egg-info to /tmpfs/tmp/tmpuykvsdzd/tfx_user_code_Trainer-0.0+e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618-py3.9.egg-info\n", "running install_scripts\n", "creating /tmpfs/tmp/tmpuykvsdzd/tfx_user_code_Trainer-0.0+e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618.dist-info/WHEEL\n", "creating '/tmpfs/tmp/tmpyo330uub/tfx_user_code_Trainer-0.0+e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618-py3-none-any.whl' and adding '/tmpfs/tmp/tmpuykvsdzd' to it\n", "adding 'taxi_constants.py'\n", "adding 'taxi_trainer.py'\n", "adding 'taxi_transform.py'\n", "adding 'tfx_user_code_Trainer-0.0+e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618.dist-info/METADATA'\n", "adding 'tfx_user_code_Trainer-0.0+e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618.dist-info/WHEEL'\n", "adding 'tfx_user_code_Trainer-0.0+e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618.dist-info/top_level.txt'\n", "adding 'tfx_user_code_Trainer-0.0+e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618.dist-info/RECORD'\n", "removing /tmpfs/tmp/tmpuykvsdzd\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running executor for Trainer\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Train on the 'train' split when train_args.splits is not set.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Evaluate on the 'eval' split when eval_args.splits is not set.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:absl:Examples artifact does not have payload_format custom property. 
Falling back to FORMAT_TF_EXAMPLE\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:udf_utils.get_fn {'train_args': '{\\n \"num_steps\": 10000\\n}', 'eval_args': '{\\n \"num_steps\": 5000\\n}', 'module_file': None, 'run_fn': None, 'trainer_fn': None, 'custom_config': 'null', 'module_path': 'taxi_trainer@/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Trainer-0.0+e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618-py3-none-any.whl'} 'trainer_fn'\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Installing '/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Trainer-0.0+e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618-py3-none-any.whl' to a temporary directory.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '-m', 'pip', 'install', '--target', '/tmpfs/tmp/tmpqdwzchbf', '/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Trainer-0.0+e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618-py3-none-any.whl']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Processing /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Trainer-0.0+e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618-py3-none-any.whl\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Successfully installed '/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Trainer-0.0+e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618-py3-none-any.whl'.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Installing collected packages: tfx-user-code-Trainer\n", "Successfully installed tfx-user-code-Trainer-0.0+e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618\n", "WARNING:tensorflow:From /tmpfs/src/temp/docs/tutorials/tfx/taxi_trainer.py:188: TrainSpec.__new__ (from tensorflow_estimator.python.estimator.training) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/temp/docs/tutorials/tfx/taxi_trainer.py:195: FinalExporter.__init__ (from tensorflow_estimator.python.estimator.exporter) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/temp/docs/tutorials/tfx/taxi_trainer.py:196: EvalSpec.__new__ (from tensorflow_estimator.python.estimator.training) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/temp/docs/tutorials/tfx/taxi_trainer.py:202: RunConfig.__init__ (from tensorflow_estimator.python.estimator.run_config) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/temp/docs/tutorials/tfx/taxi_trainer.py:41: numeric_column (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use Keras preprocessing 
layers instead, either directly or via the `tf.keras.utils.FeatureSpace` utility. Each of `tf.feature_column.*` has a functional equivalent in `tf.keras.layers` for feature preprocessing when training a Keras model.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/temp/docs/tutorials/tfx/taxi_trainer.py:45: categorical_column_with_identity (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use Keras preprocessing layers instead, either directly or via the `tf.keras.utils.FeatureSpace` utility. Each of `tf.feature_column.*` has a functional equivalent in `tf.keras.layers` for feature preprocessing when training a Keras model.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/temp/docs/tutorials/tfx/taxi_trainer.py:62: DNNLinearCombinedClassifierV2.__init__ (from tensorflow_estimator.python.estimator.canned.dnn_linear_combined) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/head/head_utils.py:54: BinaryClassHead.__init__ (from tensorflow_estimator.python.estimator.head.binary_class_head) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/canned/dnn_linear_combined.py:586: Estimator.__init__ (from tensorflow_estimator.python.estimator.estimator) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Using config: {'_model_dir': '/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 999, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true\n", "graph_options {\n", " rewrite_options {\n", " meta_optimizer_iterations: ONE\n", " }\n", "}\n", ", '_keep_checkpoint_max': 1, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Training model.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tfx/components/trainer/executor.py:270: train_and_evaluate (from tensorflow_estimator.python.estimator.training) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Not using 
Distribute Coordinator.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Running training and evaluation locally (non-distributed).\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 999 or save_checkpoints_secs None.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/estimator.py:385: StopAtStepHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature company has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_census_tract has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_community_area has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_latitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_longitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature fare has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature payment_type has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_census_tract has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_community_area has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_latitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_longitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature tips has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_miles has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_seconds has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_day has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_hour has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_month has a shape . 
Setting to DenseTensor.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tfx_bsl/tfxio/tf_example_record.py:343: parse_example_dataset (from tensorflow.python.data.experimental.ops.parsing_ops) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use `tf.data.Dataset.map(tf.io.parse_example(...))` instead.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature company has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_census_tract has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_community_area has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_latitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_longitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature fare has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature payment_type has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_census_tract has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_community_area has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_latitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_longitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature tips has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_miles has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_seconds has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_day has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_hour has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_month has a shape . 
Setting to DenseTensor.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling model_fn.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/keras/src/optimizers/legacy/adagrad.py:93: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Call initializer instance with the dtype argument instead of passing it to the constructor\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/model_fn.py:250: EstimatorSpec.__new__ (from tensorflow_estimator.python.estimator.model_fn) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Done calling model_fn.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/estimator.py:1416: NanTensorHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/estimator.py:1419: LoggingTensorHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/basic_session_run_hooks.py:232: SecondOrStepTimer.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/estimator.py:1456: CheckpointSaverHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Create CheckpointSaverHook.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/monitored_session.py:579: StepCounterHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/monitored_session.py:586: SummarySaverHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", 
"output_type": "stream", "text": [ "INFO:tensorflow:Graph was finalized.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Running local_init_op.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "2024-08-02 09:19:34.413317: W tensorflow/core/common_runtime/type_inference.cc:339] Type inference failed. This indicates an invalid graph that escaped type checking. Error message: INVALID_ARGUMENT: expected compatible input types, but input 1:\n", "type_id: TFT_OPTIONAL\n", "args {\n", " type_id: TFT_PRODUCT\n", " args {\n", " type_id: TFT_TENSOR\n", " args {\n", " type_id: TFT_INT64\n", " }\n", " }\n", "}\n", " is neither a subtype nor a supertype of the combined inputs preceding it:\n", "type_id: TFT_OPTIONAL\n", "args {\n", " type_id: TFT_PRODUCT\n", " args {\n", " type_id: TFT_TENSOR\n", " args {\n", " type_id: TFT_INT32\n", " }\n", " }\n", "}\n", "\n", "\tfor Tuple type infernce function 0\n", "\twhile inferring type of node 'dnn/zero_fraction/cond/output/_18'\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Done running local_init_op.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saving checkpoints for 0 into /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/monitored_session.py:1455: SessionRunArgs.__new__ (from tensorflow.python.training.session_run_hook) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/monitored_session.py:1454: SessionRunContext.__init__ (from tensorflow.python.training.session_run_hook) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/monitored_session.py:1474: SessionRunValues.__new__ (from tensorflow.python.training.session_run_hook) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.68345356, step = 0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 94.6219\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.5876302, step = 100 (1.058 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.837\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.54258287, step = 200 (0.789 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 128.101\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.42309752, step = 300 (0.780 
sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.804\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.49153805, step = 400 (0.789 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 127.62\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.4945454, step = 500 (0.784 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 127.584\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.5304156, step = 600 (0.784 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 130.578\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.5257536, step = 700 (0.766 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 131.19\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.39649287, step = 800 (0.762 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 131.381\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.48203364, step = 900 (0.761 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 999...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saving checkpoints for 999 into /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/saver.py:1067: remove_checkpoint (from tensorflow.python.checkpoint.checkpoint_management) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use standard file APIs to delete files with this prefix.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 999...\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature company has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_census_tract has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_community_area has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_latitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_longitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature fare has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature payment_type has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_census_tract has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_community_area has a shape . 
Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_latitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_longitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature tips has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_miles has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_seconds has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_day has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_hour has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_month has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature company has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_census_tract has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_community_area has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_latitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_longitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature fare has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature payment_type has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_census_tract has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_community_area has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_latitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_longitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature tips has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_miles has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_seconds has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_day has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_hour has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_month has a shape . 
Setting to DenseTensor.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling model_fn.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Done calling model_fn.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Starting evaluation at 2024-08-02T09:19:47\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/evaluation.py:260: FinalOpsHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Graph was finalized.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt-999\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Running local_init_op.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Done running local_init_op.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [500/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [1000/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [1500/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [2000/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [2500/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [3000/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [3500/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [4000/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [4500/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [5000/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Inference Time : 30.09666s\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Finished evaluation at 2024-08-02-09:20:17\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saving dict for global step 999: accuracy = 0.77127, accuracy_baseline = 0.77127, auc = 0.9157473, auc_precision_recall = 0.63793385, average_loss = 0.4631773, global_step = 999, label/mean = 0.22873, loss = 0.46317753, precision = 0.0, prediction/mean = 0.24352272, recall = 0.0\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saving 'checkpoint_path' summary for global step 999: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt-999\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 3.0638\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.4464565, step = 1000 (32.639 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 127.029\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.34790123, 
step = 1100 (0.788 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 127.743\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3965652, step = 1200 (0.783 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.222\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.41430426, step = 1300 (0.792 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.602\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.5473569, step = 1400 (0.790 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 127.742\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.4605174, step = 1500 (0.783 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 125.617\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.4138917, step = 1600 (0.796 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 129.248\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.4090398, step = 1700 (0.774 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 128.093\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.43102828, step = 1800 (0.781 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.057\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.46358317, step = 1900 (0.793 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 1998...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saving checkpoints for 1998 into /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 1998...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 109.496\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.4306933, step = 2000 (0.913 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 128.816\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.4857188, step = 2100 (0.777 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 130.685\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3482414, step = 2200 (0.765 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 127.638\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.43326974, step = 2300 (0.784 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.4\n" ] }, { "name": "stdout", "output_type": 
"stream", "text": [ "INFO:tensorflow:loss = 0.356163, step = 2400 (0.791 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.647\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.40226907, step = 2500 (0.790 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.972\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.45428342, step = 2600 (0.788 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.583\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.39020246, step = 2700 (0.790 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 128.646\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.40562025, step = 2800 (0.777 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.918\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.43240872, step = 2900 (0.788 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 2997...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saving checkpoints for 2997 into /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 2997...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 109.498\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.46481556, step = 3000 (0.913 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 127.363\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.40305233, step = 3100 (0.785 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 124.665\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.40317774, step = 3200 (0.802 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.368\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3879068, step = 3300 (0.791 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 127.044\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.45533687, step = 3400 (0.787 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 127.924\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.36743593, step = 3500 (0.782 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 127.528\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3420141, step = 3600 (0.784 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ 
"INFO:tensorflow:global_step/sec: 127.037\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.37507004, step = 3700 (0.787 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 125.288\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.4554777, step = 3800 (0.798 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 127.224\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3540257, step = 3900 (0.786 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 3996...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saving checkpoints for 3996 into /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 3996...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 107.582\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.39362353, step = 4000 (0.929 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 128.364\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.37329096, step = 4100 (0.780 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.761\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.41019115, step = 4200 (0.789 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 125.749\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.4272521, step = 4300 (0.796 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.224\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3582986, step = 4400 (0.792 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 124.553\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.42920828, step = 4500 (0.803 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.126\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.399159, step = 4600 (0.793 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.574\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.337004, step = 4700 (0.790 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 128.435\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3079869, step = 4800 (0.779 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 128.011\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.2898496, step = 4900 (0.781 sec)\n" ] }, { 
"name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 4995...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saving checkpoints for 4995 into /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 4995...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 107.802\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.30157006, step = 5000 (0.927 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 128.486\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.35911578, step = 5100 (0.779 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 128.123\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.42201638, step = 5200 (0.781 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.168\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.36438444, step = 5300 (0.792 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 127.968\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3075903, step = 5400 (0.781 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.923\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.37466988, step = 5500 (0.788 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 125.858\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.45140833, step = 5600 (0.795 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 125.066\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.31925184, step = 5700 (0.800 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 128.717\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3493313, step = 5800 (0.777 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 129.039\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.40627345, step = 5900 (0.775 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 5994...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saving checkpoints for 5994 into /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 5994...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Skip the current checkpoint eval due to throttle 
secs (600 secs).\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 108.053\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.28857297, step = 6000 (0.925 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 125.484\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.27557686, step = 6100 (0.797 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.448\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.26575455, step = 6200 (0.791 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 128.276\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.4027404, step = 6300 (0.780 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 128.328\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.37504548, step = 6400 (0.779 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 127.74\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.40821376, step = 6500 (0.783 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 128.458\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.35266423, step = 6600 (0.779 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.952\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.31729963, step = 6700 (0.788 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 127.488\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3718481, step = 6800 (0.785 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.73\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.40504962, step = 6900 (0.789 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 6993...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saving checkpoints for 6993 into /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 6993...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 108.832\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.25404018, step = 7000 (0.919 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.112\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.2731769, step = 7100 (0.793 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 128.987\n" ] }, { "name": "stdout", "output_type": 
"stream", "text": [ "INFO:tensorflow:loss = 0.2747486, step = 7200 (0.775 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 127.168\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.43343297, step = 7300 (0.786 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.696\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.35623556, step = 7400 (0.789 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 125.879\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.39246577, step = 7500 (0.795 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.881\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.400997, step = 7600 (0.788 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 128.105\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.4201835, step = 7700 (0.781 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 126.947\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.37048712, step = 7800 (0.788 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 125.791\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3623591, step = 7900 (0.795 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 7992...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saving checkpoints for 7992 into /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 7992...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 108.946\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.36777845, step = 8000 (0.918 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 129.235\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.2914999, step = 8100 (0.774 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 129.476\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.30548525, step = 8200 (0.772 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 129.06\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3804708, step = 8300 (0.775 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 127.645\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.34999222, step = 8400 (0.784 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ 
"INFO:tensorflow:global_step/sec: 131.021\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.37043446, step = 8500 (0.763 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 130.323\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.31763867, step = 8600 (0.767 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 130.619\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3240898, step = 8700 (0.765 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 131.161\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3534736, step = 8800 (0.763 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 131.406\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.352484, step = 8900 (0.761 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 8991...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saving checkpoints for 8991 into /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 8991...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 109.642\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.27267218, step = 9000 (0.912 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 131.661\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3245061, step = 9100 (0.760 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 131.599\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.43071085, step = 9200 (0.760 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 131.755\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.31766015, step = 9300 (0.759 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 132.721\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.37132785, step = 9400 (0.753 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 133.55\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.32427594, step = 9500 (0.749 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 133.19\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.4069633, step = 9600 (0.751 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 131.414\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3321201, step = 9700 (0.761 sec)\n" ] }, { 
"name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 129.794\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.3150528, step = 9800 (0.770 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:global_step/sec: 129.832\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:loss = 0.37696955, step = 9900 (0.770 sec)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 9990...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saving checkpoints for 9990 into /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 9990...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 10000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saving checkpoints for 10000 into /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 10000...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature company has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_census_tract has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_community_area has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_latitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_longitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature fare has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature payment_type has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_census_tract has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_community_area has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_latitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_longitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature tips has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_miles has a shape . 
Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_seconds has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_day has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_hour has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_month has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature company has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_census_tract has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_community_area has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_latitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_longitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature fare has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature payment_type has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_census_tract has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_community_area has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_latitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_longitude has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature tips has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_miles has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_seconds has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_day has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_hour has a shape . Setting to DenseTensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_month has a shape . 
Setting to DenseTensor.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling model_fn.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Done calling model_fn.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Starting evaluation at 2024-08-02T09:21:30\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Graph was finalized.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt-10000\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Running local_init_op.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Done running local_init_op.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [500/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [1000/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [1500/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [2000/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [2500/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [3000/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [3500/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [4000/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [4500/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Evaluation [5000/5000]\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Inference Time : 30.29653s\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Finished evaluation at 2024-08-02-09:22:01\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saving dict for global step 10000: accuracy = 0.78815, accuracy_baseline = 0.77124, auc = 0.93310654, auc_precision_recall = 0.70294404, average_loss = 0.34523568, global_step = 10000, label/mean = 0.22876, loss = 0.34523627, precision = 0.6943678, prediction/mean = 0.23057358, recall = 0.13203794\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Saving 'checkpoint_path' summary for global step 10000: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt-10000\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Performing the final export in the end of training.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature company has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_census_tract has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_community_area has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_latitude has no shape. 
Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_longitude has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature fare has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature payment_type has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_census_tract has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_community_area has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_latitude has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_longitude has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature tips has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_miles has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_seconds has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_day has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_hour has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_month has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_timestamp has no shape. 
Setting to varlen_sparse_tensor.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/temp/docs/tutorials/tfx/taxi_trainer.py:81: build_parsing_serving_input_receiver_fn (from tensorflow_estimator.python.estimator.export.export) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/export/export.py:312: ServingInputReceiver.__new__ (from tensorflow_estimator.python.estimator.export.export) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:struct2tensor is not available.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:tensorflow_decision_forests is not available.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:tensorflow_text is not available.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:Loading a TF2 SavedModel but eager mode seems disabled.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling model_fn.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/head/base_head.py:786: ClassificationOutput.__init__ (from tensorflow.python.saved_model.model_utils.export_output) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/head/binary_class_head.py:561: RegressionOutput.__init__ (from tensorflow.python.saved_model.model_utils.export_output) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/head/binary_class_head.py:563: PredictOutput.__init__ (from tensorflow.python.saved_model.model_utils.export_output) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Done calling model_fn.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:168: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "This API was designed for TensorFlow v1. 
See https://www.tensorflow.org/guide/migrate for instructions on how to migrate your code to TensorFlow v2.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/saved_model/model_utils/export_utils.py:83: get_tensor_from_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "This API was designed for TensorFlow v1. See https://www.tensorflow.org/guide/migrate for instructions on how to migrate your code to TensorFlow v2.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Signatures INCLUDED in export for Classify: ['serving_default', 'classification']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Signatures INCLUDED in export for Regress: ['regression']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Signatures INCLUDED in export for Train: None\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Signatures INCLUDED in export for Eval: None\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt-10000\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Assets added to graph.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Assets written to: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/export/chicago-taxi/temp-1722590521/assets\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:SavedModel written to: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/export/chicago-taxi/temp-1722590521/saved_model.pb\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Loss for final step: 0.33445308.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Training complete. Model written to /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving. ModelRun written to /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Exporting eval_savedmodel for TFMA.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature company has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_census_tract has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_community_area has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_latitude has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature dropoff_longitude has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature fare has no shape. 
Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature payment_type has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_census_tract has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_community_area has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_latitude has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature pickup_longitude has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature tips has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_miles has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_seconds has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_day has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_hour has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_month has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Feature trip_start_timestamp has no shape. Setting to varlen_sparse_tensor.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:Loading a TF2 SavedModel but eager mode seems disabled.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:struct2tensor is not available.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:tensorflow_decision_forests is not available.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:tensorflow_text is not available.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Calling model_fn.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Done calling model_fn.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/saved_model/model_utils/export_utils.py:345: _SupervisedOutput.__init__ (from tensorflow.python.saved_model.model_utils.export_output) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use tf.keras instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Signatures INCLUDED in export for Classify: None\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Signatures INCLUDED in export for Regress: None\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Signatures INCLUDED in export for Predict: None\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Signatures INCLUDED in export for Train: None\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Signatures INCLUDED in export for Eval: ['eval']\n" ] }, { "name": "stdout", "output_type": 
"stream", "text": [ "WARNING:tensorflow:Export includes no default signature!\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-Serving/model.ckpt-10000\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Assets added to graph.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Assets written to: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-TFMA/temp-1722590523/assets\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:SavedModel written to: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-TFMA/temp-1722590523/saved_model.pb\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Exported eval_savedmodel to /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6/Format-TFMA.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:absl:Support for estimator-based executor and model export will be deprecated soon. Please use export structure /serving_model_dir/saved_model.pb\"\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Serving model copied to: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model/6/Format-Serving.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "WARNING:absl:Support for estimator-based executor and model export will be deprecated soon. Please use export structure /eval_model_dir/saved_model.pb\"\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Eval model copied to: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model/6/Format-TFMA.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running publisher for Trainer\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "data": { "text/html": [ "\n", "\n", "
ExecutionResult at 0x7fe6583df640
.execution_id6
.component\n", "\n", "
Trainer at 0x7fe65840f7c0
.inputs
['examples']\n", "\n", "
Channel of type 'Examples' (1 artifact) at 0x7fe85fada5e0
.type_nameExamples
._artifacts
[0]\n", "\n", "
Artifact of type 'Examples' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transformed_examples/5) at 0x7fe8807f6a00
.type<class 'tfx.types.standard_artifacts.Examples'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transformed_examples/5
.span0
.split_names["eval", "train"]
.version0
['transform_graph']\n", "\n", "
Channel of type 'TransformGraph' (1 artifact) at 0x7fe85fadaac0
.type_nameTransformGraph
._artifacts
[0]\n", "\n", "
Artifact of type 'TransformGraph' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transform_graph/5) at 0x7fe85fae0280
.type<class 'tfx.types.standard_artifacts.TransformGraph'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transform_graph/5
['schema']\n", "\n", "
Channel of type 'Schema' (1 artifact) at 0x7fe85fad56a0
.type_nameSchema
._artifacts
[0]\n", "\n", "
Artifact of type 'Schema' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/SchemaGen/schema/3) at 0x7fe85c14d370
.type<class 'tfx.types.standard_artifacts.Schema'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/SchemaGen/schema/3
.outputs
['model']\n", "\n", "
Channel of type 'Model' (1 artifact) at 0x7fe6583df3d0
.type_nameModel
._artifacts
[0]\n", "\n", "
Artifact of type 'Model' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model/6) at 0x7fe710271d00
.type<class 'tfx.types.standard_artifacts.Model'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model/6
['model_run']\n", "\n", "
Channel of type 'ModelRun' (1 artifact) at 0x7fe6583df550
.type_nameModelRun
._artifacts
[0]\n", "\n", "
Artifact of type 'ModelRun' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6) at 0x7fe6583f8370
.type<class 'tfx.types.standard_artifacts.ModelRun'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6
.exec_properties
['train_args']{\n", " "num_steps": 10000\n", "}
['eval_args']{\n", " "num_steps": 5000\n", "}
['module_file']None
['run_fn']None
['trainer_fn']None
['custom_config']null
['module_path']taxi_trainer@/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/_wheels/tfx_user_code_Trainer-0.0+e337a512821685b6d91445dbd0628b47de0e4c751e9e54edf78bcf0866309618-py3-none-any.whl
.component.inputs
['examples']\n", "\n", "
Channel of type 'Examples' (1 artifact) at 0x7fe85fada5e0
.type_nameExamples
._artifacts
[0]\n", "\n", "
Artifact of type 'Examples' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transformed_examples/5) at 0x7fe8807f6a00
.type<class 'tfx.types.standard_artifacts.Examples'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transformed_examples/5
.span0
.split_names["eval", "train"]
.version0
['transform_graph']\n", "\n", "
Channel of type 'TransformGraph' (1 artifact) at 0x7fe85fadaac0
.type_nameTransformGraph
._artifacts
[0]\n", "\n", "
Artifact of type 'TransformGraph' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transform_graph/5) at 0x7fe85fae0280
.type<class 'tfx.types.standard_artifacts.TransformGraph'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Transform/transform_graph/5
['schema']\n", "\n", "
Channel of type 'Schema' (1 artifact) at 0x7fe85fad56a0
.type_nameSchema
._artifacts
[0]\n", "\n", "
Artifact of type 'Schema' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/SchemaGen/schema/3) at 0x7fe85c14d370
.type<class 'tfx.types.standard_artifacts.Schema'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/SchemaGen/schema/3
.component.outputs
['model']\n", "\n", "
Channel of type 'Model' (1 artifact) at 0x7fe6583df3d0
.type_nameModel
._artifacts
[0]\n", "\n", "
Artifact of type 'Model' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model/6) at 0x7fe710271d00
.type<class 'tfx.types.standard_artifacts.Model'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model/6
['model_run']\n", "\n", "
Channel of type 'ModelRun' (1 artifact) at 0x7fe6583df550
.type_nameModelRun
._artifacts
[0]\n", "\n", "
Artifact of type 'ModelRun' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6) at 0x7fe6583f8370
.type<class 'tfx.types.standard_artifacts.ModelRun'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model_run/6
" ], "text/plain": [ "ExecutionResult(\n", " component_id: Trainer\n", " execution_id: 6\n", " outputs:\n", " model: OutputChannel(artifact_type=Model, producer_component_id=Trainer, output_key=model, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False)\n", " model_run: OutputChannel(artifact_type=ModelRun, producer_component_id=Trainer, output_key=model_run, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False))" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from tfx.components.trainer.executor import Executor\n", "from tfx.dsl.components.base import executor_spec\n", "\n", "trainer = tfx.components.Trainer(\n", " module_file=os.path.abspath(_taxi_trainer_module_file),\n", " custom_executor_spec=executor_spec.ExecutorClassSpec(Executor),\n", " examples=transform.outputs['transformed_examples'],\n", " schema=schema_gen.outputs['schema'],\n", " transform_graph=transform.outputs['transform_graph'],\n", " train_args=tfx.proto.TrainArgs(num_steps=10000),\n", " eval_args=tfx.proto.EvalArgs(num_steps=5000))\n", "context.run(trainer)" ] }, { "cell_type": "markdown", "metadata": { "id": "6Cql1G35StJp" }, "source": [ "#### Analyze Training with TensorBoard\n", "Optionally, we can connect TensorBoard to the Trainer to analyze our model's training curves." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "bXe62WE0S0Ek" }, "outputs": [], "source": [ "# Get the URI of the output artifact representing the training logs, which is a directory\n", "model_run_dir = trainer.outputs['model_run'].get()[0].uri\n", "\n", "%load_ext tensorboard\n", "%tensorboard --logdir {model_run_dir}" ] }, { "cell_type": "markdown", "metadata": { "id": "FmPftrv0lEQy" }, "source": [ "### Evaluator\n", "The `Evaluator` component computes model performance metrics over the evaluation set. It uses the [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/model_analysis/get_started) library. The `Evaluator` can also optionally validate that a newly trained model is better than the previous model. This is useful in a production pipeline setting where you may automatically train and validate a model every day. In this notebook, we only train one model, so the `Evaluator` automatically will label the model as \"good\".\n", "\n", "`Evaluator` will take as input the data from `ExampleGen`, the trained model from `Trainer`, and slicing configuration. The slicing configuration allows you to slice your metrics on feature values (e.g. how does your model perform on taxi trips that start at 8am versus 8pm?). See an example of this configuration below:" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:22:05.762761Z", "iopub.status.busy": "2024-08-02T09:22:05.762520Z", "iopub.status.idle": "2024-08-02T09:22:05.768151Z", "shell.execute_reply": "2024-08-02T09:22:05.767508Z" }, "id": "fVhfzzh9PDEx" }, "outputs": [], "source": [ "eval_config = tfma.EvalConfig(\n", " model_specs=[\n", " # Using signature 'eval' implies the use of an EvalSavedModel. 
"trainer = tfx.components.Trainer(\n", " module_file=os.path.abspath(_taxi_trainer_module_file),\n", " custom_executor_spec=executor_spec.ExecutorClassSpec(Executor),\n", " examples=transform.outputs['transformed_examples'],\n", " schema=schema_gen.outputs['schema'],\n", " transform_graph=transform.outputs['transform_graph'],\n", " train_args=tfx.proto.TrainArgs(num_steps=10000),\n", " eval_args=tfx.proto.EvalArgs(num_steps=5000))\n", "context.run(trainer)" ] }, { "cell_type": "markdown", "metadata": { "id": "6Cql1G35StJp" }, "source": [ "#### Analyze Training with TensorBoard\n", "Optionally, we can connect TensorBoard to the Trainer to analyze our model's training curves." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "bXe62WE0S0Ek" }, "outputs": [], "source": [ "# Get the URI of the output artifact representing the training logs, which is a directory\n", "model_run_dir = trainer.outputs['model_run'].get()[0].uri\n", "\n", "%load_ext tensorboard\n", "%tensorboard --logdir {model_run_dir}" ] }, { "cell_type": "markdown", "metadata": { "id": "FmPftrv0lEQy" }, "source": [ "### Evaluator\n", "The `Evaluator` component computes model performance metrics over the evaluation set. It uses the [TensorFlow Model Analysis](https://www.tensorflow.org/tfx/model_analysis/get_started) library. The `Evaluator` can also optionally validate that a newly trained model is better than the previous model. This is useful in a production pipeline setting where you may automatically train and validate a model every day. In this notebook, we train only one model, so the `Evaluator` will automatically label the model as \"good\".\n", "\n", "`Evaluator` will take as input the data from `ExampleGen`, the trained model from `Trainer`, and a slicing configuration. The slicing configuration allows you to slice your metrics on feature values (e.g. how does your model perform on taxi trips that start at 8am versus 8pm?). See an example of this configuration below:" ] }, { "cell_type": "code", "execution_count": 30, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:22:05.762761Z", "iopub.status.busy": "2024-08-02T09:22:05.762520Z", "iopub.status.idle": "2024-08-02T09:22:05.768151Z", "shell.execute_reply": "2024-08-02T09:22:05.767508Z" }, "id": "fVhfzzh9PDEx" }, "outputs": [], "source": [ "eval_config = tfma.EvalConfig(\n", " model_specs=[\n", " # Using signature 'eval' implies the use of an EvalSavedModel. To use\n", " # a serving model, remove the signature so that it defaults to\n", " # 'serving_default', and add a label_key.\n", " tfma.ModelSpec(signature_name='eval')\n", " ],\n", " metrics_specs=[\n", " tfma.MetricsSpec(\n", " # The metrics added here are in addition to those saved with the\n", " # model (assuming either a keras model or EvalSavedModel is used).\n", " # Any metrics added into the saved model (for example using\n", " # model.compile(..., metrics=[...]), etc.) will be computed\n", " # automatically.\n", " metrics=[\n", " tfma.MetricConfig(class_name='ExampleCount')\n", " ],\n", " # To add validation thresholds for metrics saved with the model,\n", " # add them keyed by metric name to the thresholds map.\n", " thresholds={\n", " 'accuracy': tfma.MetricThreshold(\n", " value_threshold=tfma.GenericValueThreshold(\n", " lower_bound={'value': 0.5}),\n", " # Change threshold will be ignored if there is no\n", " # baseline model resolved from MLMD (first run).\n", " change_threshold=tfma.GenericChangeThreshold(\n", " direction=tfma.MetricDirection.HIGHER_IS_BETTER,\n", " absolute={'value': -1e-10}))\n", " }\n", " )\n", " ],\n", " slicing_specs=[\n", " # An empty slice spec means the overall slice, i.e. the whole dataset.\n", " tfma.SlicingSpec(),\n", " # Data can be sliced along a feature column. In this case, data is\n", " # sliced along feature column trip_start_hour.\n", " tfma.SlicingSpec(feature_keys=['trip_start_hour'])\n", " ])" ] }, { "cell_type": "markdown", "metadata": { "id": "9mBdKH1F8JuT" }, "source": [ "Next, we give this configuration to `Evaluator` and run it." ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:22:05.771310Z", "iopub.status.busy": "2024-08-02T09:22:05.771077Z", "iopub.status.idle": "2024-08-02T09:22:16.063506Z", "shell.execute_reply": "2024-08-02T09:22:16.062829Z" }, "id": "Zjcx8g6mihSt" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running driver for latest_blessed_model_resolver\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running publisher for latest_blessed_model_resolver\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running driver for Evaluator\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running executor for Evaluator\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:udf_utils.get_fn {'eval_config': '{\\n \"metrics_specs\": [\\n {\\n \"metrics\": [\\n {\\n \"class_name\": \"ExampleCount\"\\n }\\n ],\\n \"thresholds\": {\\n \"accuracy\": {\\n \"change_threshold\": {\\n \"absolute\": -1e-10,\\n \"direction\": \"HIGHER_IS_BETTER\"\\n },\\n \"value_threshold\": {\\n \"lower_bound\": 0.5\\n }\\n }\\n }\\n }\\n ],\\n \"model_specs\": [\\n {\\n \"signature_name\": \"eval\"\\n }\\n ],\\n \"slicing_specs\": [\\n {},\\n {\\n \"feature_keys\": [\\n \"trip_start_hour\"\\n ]\\n }\\n ]\\n}', 'feature_slicing_spec': None, 'fairness_indicator_thresholds': 'null', 'example_splits': 'null', 'module_file': None, 'module_path': None} 'custom_eval_shared_model'\n" ] }, { "name": "stderr", "output_type":
"stream", "text": [ "INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. This is likely because a baseline model was not provided: updated_config=\n", "model_specs {\n", " signature_name: \"eval\"\n", "}\n", "slicing_specs {\n", "}\n", "slicing_specs {\n", " feature_keys: \"trip_start_hour\"\n", "}\n", "metrics_specs {\n", " metrics {\n", " class_name: \"ExampleCount\"\n", " }\n", " thresholds {\n", " key: \"accuracy\"\n", " value {\n", " value_threshold {\n", " lower_bound {\n", " value: 0.5\n", " }\n", " }\n", " }\n", " }\n", "}\n", "\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Using /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model/6/Format-TFMA as model.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:SavedModel saved prior to TF 2.5 detected when loading Keras model. Please ensure that you are saving the model with model.save() or tf.keras.models.save_model(), *NOT* tf.saved_model.save(). To confirm, there should be a file named \"keras_metadata.pb\" in the SavedModel directory.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:The 'example_splits' parameter is not set, using 'eval' split.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Evaluating model.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:udf_utils.get_fn {'eval_config': '{\\n \"metrics_specs\": [\\n {\\n \"metrics\": [\\n {\\n \"class_name\": \"ExampleCount\"\\n }\\n ],\\n \"thresholds\": {\\n \"accuracy\": {\\n \"change_threshold\": {\\n \"absolute\": -1e-10,\\n \"direction\": \"HIGHER_IS_BETTER\"\\n },\\n \"value_threshold\": {\\n \"lower_bound\": 0.5\\n }\\n }\\n }\\n }\\n ],\\n \"model_specs\": [\\n {\\n \"signature_name\": \"eval\"\\n }\\n ],\\n \"slicing_specs\": [\\n {},\\n {\\n \"feature_keys\": [\\n \"trip_start_hour\"\\n ]\\n }\\n ]\\n}', 'feature_slicing_spec': None, 'fairness_indicator_thresholds': 'null', 'example_splits': 'null', 'module_file': None, 'module_path': None} 'custom_extractors'\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. This is likely because a baseline model was not provided: updated_config=\n", "model_specs {\n", " signature_name: \"eval\"\n", "}\n", "slicing_specs {\n", "}\n", "slicing_specs {\n", " feature_keys: \"trip_start_hour\"\n", "}\n", "metrics_specs {\n", " metrics {\n", " class_name: \"ExampleCount\"\n", " }\n", " model_names: \"\"\n", " thresholds {\n", " key: \"accuracy\"\n", " value {\n", " value_threshold {\n", " lower_bound {\n", " value: 0.5\n", " }\n", " }\n", " }\n", " }\n", "}\n", "\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. 
This is likely because a baseline model was not provided: updated_config=\n", "model_specs {\n", " signature_name: \"eval\"\n", "}\n", "slicing_specs {\n", "}\n", "slicing_specs {\n", " feature_keys: \"trip_start_hour\"\n", "}\n", "metrics_specs {\n", " metrics {\n", " class_name: \"ExampleCount\"\n", " }\n", " model_names: \"\"\n", " thresholds {\n", " key: \"accuracy\"\n", " value {\n", " value_threshold {\n", " lower_bound {\n", " value: 0.5\n", " }\n", " }\n", " }\n", " }\n", "}\n", "\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:eval_shared_models have model_types: {'tfma_eval'}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. This is likely because a baseline model was not provided: updated_config=\n", "model_specs {\n", " signature_name: \"eval\"\n", "}\n", "slicing_specs {\n", "}\n", "slicing_specs {\n", " feature_keys: \"trip_start_hour\"\n", "}\n", "metrics_specs {\n", " metrics {\n", " class_name: \"ExampleCount\"\n", " }\n", " model_names: \"\"\n", " thresholds {\n", " key: \"accuracy\"\n", " value {\n", " value_threshold {\n", " lower_bound {\n", " value: 0.5\n", " }\n", " }\n", " }\n", " }\n", "}\n", "\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_model_analysis/eval_saved_model/load.py:163: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use `tf.saved_model.load` instead.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO:tensorflow:Restoring parameters from /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model/6/Format-TFMA/variables/variables\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "2024-08-02 09:22:08.724576: W tensorflow/c/c_api.cc:305] Operation '{name:'head/metrics/true_positives_1/Assign' id:1343 op device:{requested: '', assigned: ''} def:{{{node head/metrics/true_positives_1/Assign}} = AssignVariableOp[_has_manual_control_dependencies=true, dtype=DT_FLOAT, validate_shape=false](head/metrics/true_positives_1, head/metrics/true_positives_1/Initializer/zeros)}}' was changed by setting attribute after it was run by a session. This mutation will have no effect, and will trigger an error in the future. Either don't modify nodes after running them or create a new session.\n", "2024-08-02 09:22:08.909600: W tensorflow/c/c_api.cc:305] Operation '{name:'head/metrics/true_positives_1/Assign' id:1343 op device:{requested: '', assigned: ''} def:{{{node head/metrics/true_positives_1/Assign}} = AssignVariableOp[_has_manual_control_dependencies=true, dtype=DT_FLOAT, validate_shape=false](head/metrics/true_positives_1, head/metrics/true_positives_1/Initializer/zeros)}}' was changed by setting attribute after it was run by a session. This mutation will have no effect, and will trigger an error in the future. Either don't modify nodes after running them or create a new session.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Evaluation complete. 
Results written to /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Evaluator/evaluation/8.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Checking validation results.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_model_analysis/writers/metrics_plots_and_validations_writer.py:112: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.\n", "Instructions for updating:\n", "Use eager execution and: \n", "`tf.data.TFRecordDataset(path)`\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Blessing result True written to /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Evaluator/blessing/8.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running publisher for Evaluator\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "data": { "text/html": [ "\n", "\n", "
ExecutionResult at 0x7fe65848d8b0
.execution_id8
.component\n", "\n", "
Evaluator at 0x7fe65848cc70
.inputs
['examples']\n", "\n", "
Channel of type 'Examples' (1 artifact) at 0x7fe7381467f0
.type_nameExamples
._artifacts
[0]\n", "\n", "
Artifact of type 'Examples' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/CsvExampleGen/examples/1) at 0x7fe8807f6f40
.type<class 'tfx.types.standard_artifacts.Examples'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/CsvExampleGen/examples/1
.span0
.split_names["train", "eval"]
.version0
['model']\n", "\n", "
Channel of type 'Model' (1 artifact) at 0x7fe6583df3d0
.type_nameModel
._artifacts
[0]\n", "\n", "
Artifact of type 'Model' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model/6) at 0x7fe710271d00
.type<class 'tfx.types.standard_artifacts.Model'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model/6
.outputs
['evaluation']\n", "\n", "
Channel of type 'ModelEvaluation' (1 artifact) at 0x7fe664764d90
.type_nameModelEvaluation
._artifacts
[0]\n", "\n", "
Artifact of type 'ModelEvaluation' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Evaluator/evaluation/8) at 0x7fe6642224f0
.type<class 'tfx.types.standard_artifacts.ModelEvaluation'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Evaluator/evaluation/8
['blessing']\n", "\n", "
Channel of type 'ModelBlessing' (1 artifact) at 0x7fe6584e3ca0
.type_nameModelBlessing
._artifacts
[0]\n", "\n", "
Artifact of type 'ModelBlessing' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Evaluator/blessing/8) at 0x7fe664222460
.type<class 'tfx.types.standard_artifacts.ModelBlessing'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Evaluator/blessing/8
.exec_properties
['eval_config']{\n", " "metrics_specs": [\n", " {\n", " "metrics": [\n", " {\n", " "class_name": "ExampleCount"\n", " }\n", " ],\n", " "thresholds": {\n", " "accuracy": {\n", " "change_threshold": {\n", " "absolute": -1e-10,\n", " "direction": "HIGHER_IS_BETTER"\n", " },\n", " "value_threshold": {\n", " "lower_bound": 0.5\n", " }\n", " }\n", " }\n", " }\n", " ],\n", " "model_specs": [\n", " {\n", " "signature_name": "eval"\n", " }\n", " ],\n", " "slicing_specs": [\n", " {},\n", " {\n", " "feature_keys": [\n", " "trip_start_hour"\n", " ]\n", " }\n", " ]\n", "}
['feature_slicing_spec']None
['fairness_indicator_thresholds']null
['example_splits']null
['module_file']None
['module_path']None
.component.inputs
['examples']\n", "\n", "
Channel of type 'Examples' (1 artifact) at 0x7fe7381467f0
.type_nameExamples
._artifacts
[0]\n", "\n", "
Artifact of type 'Examples' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/CsvExampleGen/examples/1) at 0x7fe8807f6f40
.type<class 'tfx.types.standard_artifacts.Examples'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/CsvExampleGen/examples/1
.span0
.split_names["train", "eval"]
.version0
['model']\n", "\n", "
Channel of type 'Model' (1 artifact) at 0x7fe6583df3d0
.type_nameModel
._artifacts
[0]\n", "\n", "
Artifact of type 'Model' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model/6) at 0x7fe710271d00
.type<class 'tfx.types.standard_artifacts.Model'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model/6
.component.outputs
['evaluation']\n", "\n", "
Channel of type 'ModelEvaluation' (1 artifact) at 0x7fe664764d90
.type_nameModelEvaluation
._artifacts
[0]\n", "\n", "
Artifact of type 'ModelEvaluation' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Evaluator/evaluation/8) at 0x7fe6642224f0
.type<class 'tfx.types.standard_artifacts.ModelEvaluation'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Evaluator/evaluation/8
['blessing']\n", "\n", "
Channel of type 'ModelBlessing' (1 artifact) at 0x7fe6584e3ca0
.type_nameModelBlessing
._artifacts
[0]\n", "\n", "
Artifact of type 'ModelBlessing' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Evaluator/blessing/8) at 0x7fe664222460
.type<class 'tfx.types.standard_artifacts.ModelBlessing'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Evaluator/blessing/8
" ], "text/plain": [ "ExecutionResult(\n", " component_id: Evaluator\n", " execution_id: 8\n", " outputs:\n", " evaluation: OutputChannel(artifact_type=ModelEvaluation, producer_component_id=Evaluator, output_key=evaluation, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False)\n", " blessing: OutputChannel(artifact_type=ModelBlessing, producer_component_id=Evaluator, output_key=blessing, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False))" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Use TFMA to compute a evaluation statistics over features of a model and\n", "# validate them against a baseline.\n", "\n", "# The model resolver is only required if performing model validation in addition\n", "# to evaluation. In this case we validate against the latest blessed model. If\n", "# no model has been blessed before (as in this case) the evaluator will make our\n", "# candidate the first blessed model.\n", "model_resolver = tfx.dsl.Resolver(\n", " strategy_class=tfx.dsl.experimental.LatestBlessedModelStrategy,\n", " model=tfx.dsl.Channel(type=tfx.types.standard_artifacts.Model),\n", " model_blessing=tfx.dsl.Channel(\n", " type=tfx.types.standard_artifacts.ModelBlessing)).with_id(\n", " 'latest_blessed_model_resolver')\n", "context.run(model_resolver)\n", "\n", "evaluator = tfx.components.Evaluator(\n", " examples=example_gen.outputs['examples'],\n", " model=trainer.outputs['model'],\n", " eval_config=eval_config)\n", "context.run(evaluator)" ] }, { "cell_type": "markdown", "metadata": { "id": "AeCVkBusS_8g" }, "source": [ "Now let's examine the output artifacts of `Evaluator`." ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:22:16.067202Z", "iopub.status.busy": "2024-08-02T09:22:16.066640Z", "iopub.status.idle": "2024-08-02T09:22:16.071277Z", "shell.execute_reply": "2024-08-02T09:22:16.070693Z" }, "id": "k4GghePOTJxL" }, "outputs": [ { "data": { "text/plain": [ "{'evaluation': OutputChannel(artifact_type=ModelEvaluation, producer_component_id=Evaluator, output_key=evaluation, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False),\n", " 'blessing': OutputChannel(artifact_type=ModelBlessing, producer_component_id=Evaluator, output_key=blessing, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False)}" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "evaluator.outputs" ] }, { "cell_type": "markdown", "metadata": { "id": "Y5TMskWe9LL0" }, "source": [ "Using the `evaluation` output we can show the default visualization of global metrics on the entire evaluation set." ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:22:16.074705Z", "iopub.status.busy": "2024-08-02T09:22:16.074247Z", "iopub.status.idle": "2024-08-02T09:22:16.090151Z", "shell.execute_reply": "2024-08-02T09:22:16.089535Z" }, "id": "U729j5X5QQUQ" }, "outputs": [ { "data": { "text/html": [ "Artifact at /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Evaluator/evaluation/8

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "a3f5ddcf666c42a89b72f62f47e9e65c", "version_major": 2, "version_minor": 0 }, "text/plain": [ "SlicingMetricsViewer(config={'weightedExamplesColumn': 'example_count'}, data=[{'slice': 'Overall', 'metrics':…" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "context.show(evaluator.outputs['evaluation'])" ] }, { "cell_type": "markdown", "metadata": { "id": "t-tI4p6m-OAn" }, "source": [ "To see the visualization for sliced evaluation metrics, we can directly call the TensorFlow Model Analysis library." ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:22:16.094480Z", "iopub.status.busy": "2024-08-02T09:22:16.093961Z", "iopub.status.idle": "2024-08-02T09:22:16.109754Z", "shell.execute_reply": "2024-08-02T09:22:16.109188Z" }, "id": "pyis6iy0HLdi" }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "2152fd3704f84400a9b8426b1ad6f4a0", "version_major": 2, "version_minor": 0 }, "text/plain": [ "SlicingMetricsViewer(config={'weightedExamplesColumn': 'example_count'}, data=[{'slice': 'trip_start_hour:19',…" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "import tensorflow_model_analysis as tfma\n", "\n", "# Get the TFMA output result path and load the result.\n", "PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri\n", "tfma_result = tfma.load_eval_result(PATH_TO_RESULT)\n", "\n", "# Show data sliced along feature column trip_start_hour.\n", "tfma.view.render_slicing_metrics(\n", " tfma_result, slicing_column='trip_start_hour')" ] }, { "cell_type": "markdown", "metadata": { "id": "7uvYrUf2-r_6" }, "source": [ "This visualization shows the same metrics, but computed at every feature value of `trip_start_hour` instead of on the entire evaluation set.\n", "\n", "TensorFlow Model Analysis supports many other visualizations, such as Fairness Indicators and plotting a time series of model performance. To learn more, see [the tutorial](https://www.tensorflow.org/tfx/tutorials/model_analysis/tfma_basic)." ] }, { "cell_type": "markdown", "metadata": { "id": "TEotnkxEswUb" }, "source": [ "Since we added thresholds to our config, validation output is also available. The precence of a `blessing` artifact indicates that our model passed validation. Since this is the first validation being performed the candidate is automatically blessed." 
] }, { "cell_type": "code", "execution_count": 35, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:22:16.114545Z", "iopub.status.busy": "2024-08-02T09:22:16.114094Z", "iopub.status.idle": "2024-08-02T09:22:16.312960Z", "shell.execute_reply": "2024-08-02T09:22:16.312084Z" }, "id": "FZmiRtg6TKtR" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "total 0\r\n", "-rw-rw-r-- 1 kbuilder kbuilder 0 Aug 2 09:22 BLESSED\r\n" ] } ], "source": [ "blessing_uri = evaluator.outputs['blessing'].get()[0].uri\n", "!ls -l {blessing_uri}" ] }, { "cell_type": "markdown", "metadata": { "id": "hM1tFkOVSBa0" }, "source": [ "Now can also verify the success by loading the validation result record:" ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:22:16.317239Z", "iopub.status.busy": "2024-08-02T09:22:16.316957Z", "iopub.status.idle": "2024-08-02T09:22:16.323902Z", "shell.execute_reply": "2024-08-02T09:22:16.323105Z" }, "id": "lxa5G08bSJ8a" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "validation_ok: true\n", "validation_details {\n", " slicing_details {\n", " slicing_spec {\n", " }\n", " num_matching_slices: 25\n", " }\n", "}\n", "\n" ] } ], "source": [ "PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri\n", "print(tfma.load_validation_result(PATH_TO_RESULT))" ] }, { "cell_type": "markdown", "metadata": { "id": "T8DYekCZlHfj" }, "source": [ "### Pusher\n", "The `Pusher` component is usually at the end of a TFX pipeline. It checks whether a model has passed validation, and if so, exports the model to `_serving_model_dir`." ] }, { "cell_type": "code", "execution_count": 37, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:22:16.327583Z", "iopub.status.busy": "2024-08-02T09:22:16.326986Z", "iopub.status.idle": "2024-08-02T09:22:16.386788Z", "shell.execute_reply": "2024-08-02T09:22:16.386156Z" }, "id": "r45nQ69eikc9" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running driver for Pusher\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running executor for Pusher\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Model version: 1722590536\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Model written to serving path /tmpfs/tmp/tmp72buj3co/serving_model/taxi_simple/1722590536.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Model pushed to /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Pusher/pushed_model/9.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Running publisher for Pusher\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:MetadataStore with DB connection initialized\n" ] }, { "data": { "text/html": [ "\n", "\n", "
ExecutionResult at 0x7fe85fd6cdf0
.execution_id9
.component\n", "\n", "
Pusher at 0x7fe6583f8220
.inputs
['model']\n", "\n", "
Channel of type 'Model' (1 artifact) at 0x7fe6583df3d0
.type_nameModel
._artifacts
[0]\n", "\n", "
Artifact of type 'Model' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model/6) at 0x7fe710271d00
.type<class 'tfx.types.standard_artifacts.Model'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model/6
['model_blessing']\n", "\n", "
Channel of type 'ModelBlessing' (1 artifact) at 0x7fe6584e3ca0
.type_nameModelBlessing
._artifacts
[0]\n", "\n", "
Artifact of type 'ModelBlessing' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Evaluator/blessing/8) at 0x7fe664222460
.type<class 'tfx.types.standard_artifacts.ModelBlessing'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Evaluator/blessing/8
.outputs
['pushed_model']\n", "\n", "
Channel of type 'PushedModel' (1 artifact) at 0x7fe6581757c0
.type_namePushedModel
._artifacts
[0]\n", "\n", "
Artifact of type 'PushedModel' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Pusher/pushed_model/9) at 0x7fe664251a00
.type<class 'tfx.types.standard_artifacts.PushedModel'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Pusher/pushed_model/9
.exec_properties
['push_destination']{\n", " "filesystem": {\n", " "base_directory": "/tmpfs/tmp/tmp72buj3co/serving_model/taxi_simple"\n", " }\n", "}
['custom_config']null
.component.inputs
['model']\n", "\n", "
Channel of type 'Model' (1 artifact) at 0x7fe6583df3d0
.type_nameModel
._artifacts
[0]\n", "\n", "
Artifact of type 'Model' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model/6) at 0x7fe710271d00
.type<class 'tfx.types.standard_artifacts.Model'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Trainer/model/6
['model_blessing']\n", "\n", "
Channel of type 'ModelBlessing' (1 artifact) at 0x7fe6584e3ca0
.type_nameModelBlessing
._artifacts
[0]\n", "\n", "
Artifact of type 'ModelBlessing' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Evaluator/blessing/8) at 0x7fe664222460
.type<class 'tfx.types.standard_artifacts.ModelBlessing'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Evaluator/blessing/8
.component.outputs
['pushed_model']\n", "\n", "
Channel of type 'PushedModel' (1 artifact) at 0x7fe6581757c0
.type_namePushedModel
._artifacts
[0]\n", "\n", "
Artifact of type 'PushedModel' (uri: /tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Pusher/pushed_model/9) at 0x7fe664251a00
.type<class 'tfx.types.standard_artifacts.PushedModel'>
.uri/tmpfs/tmp/tfx-interactive-2024-08-02T09_18_50.903833-rtz7tiiy/Pusher/pushed_model/9
" ], "text/plain": [ "ExecutionResult(\n", " component_id: Pusher\n", " execution_id: 9\n", " outputs:\n", " pushed_model: OutputChannel(artifact_type=PushedModel, producer_component_id=Pusher, output_key=pushed_model, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False))" ] }, "execution_count": 37, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pusher = tfx.components.Pusher(\n", " model=trainer.outputs['model'],\n", " model_blessing=evaluator.outputs['blessing'],\n", " push_destination=tfx.proto.PushDestination(\n", " filesystem=tfx.proto.PushDestination.Filesystem(\n", " base_directory=_serving_model_dir)))\n", "context.run(pusher)" ] }, { "cell_type": "markdown", "metadata": { "id": "ctUErBYoTO9I" }, "source": [ "Let's examine the output artifacts of `Pusher`." ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:22:16.390485Z", "iopub.status.busy": "2024-08-02T09:22:16.389982Z", "iopub.status.idle": "2024-08-02T09:22:16.394506Z", "shell.execute_reply": "2024-08-02T09:22:16.393877Z" }, "id": "pRkWo-MzTSss" }, "outputs": [ { "data": { "text/plain": [ "{'pushed_model': OutputChannel(artifact_type=PushedModel, producer_component_id=Pusher, output_key=pushed_model, additional_properties={}, additional_custom_properties={}, _input_trigger=None, _is_async=False)}" ] }, "execution_count": 38, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pusher.outputs" ] }, { "cell_type": "markdown", "metadata": { "id": "peH2PPS3VgkL" }, "source": [ "In particular, the Pusher will export your model in the SavedModel format, which looks like this:" ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "execution": { "iopub.execute_input": "2024-08-02T09:22:16.397617Z", "iopub.status.busy": "2024-08-02T09:22:16.397367Z", "iopub.status.idle": "2024-08-02T09:22:19.641761Z", "shell.execute_reply": "2024-08-02T09:22:19.641106Z" }, "id": "4zyIqWl9TSdG" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:Fingerprint not found. Saved model loading will continue.\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "INFO:absl:path_and_singleprint metric could not be logged. 
Saved model loading will continue.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "('predict',\n", " Dict[['all_class_ids', TensorSpec(shape=(None, 2), dtype=tf.int32, name=None)], ['class_ids', TensorSpec(shape=(None, 1), dtype=tf.int64, name=None)], ['logistic', TensorSpec(shape=(None, 1), dtype=tf.float32, name=None)], ['probabilities', TensorSpec(shape=(None, 2), dtype=tf.float32, name=None)], ['all_classes', TensorSpec(shape=(None, 2), dtype=tf.string, name=None)], ['classes', TensorSpec(shape=(None, 1), dtype=tf.string, name=None)], ['logits', TensorSpec(shape=(None, 1), dtype=tf.float32, name=None)]] at 0x7FE85FC74F10>)\n", "('regression',\n", " Dict[['outputs', TensorSpec(shape=(None, 1), dtype=tf.float32, name=None)]] at 0x7FE7205F5B20>)\n", "('classification',\n", " Dict[['scores', TensorSpec(shape=(None, 2), dtype=tf.float32, name=None)], ['classes', TensorSpec(shape=(None, 2), dtype=tf.string, name=None)]] at 0x7FE738146C10>)\n", "('serving_default',\n", " Dict[['classes', TensorSpec(shape=(None, 2), dtype=tf.string, name=None)], ['scores', TensorSpec(shape=(None, 2), dtype=tf.float32, name=None)]] at 0x7FE6BC5E6FD0>)\n" ] } ], "source": [ "push_uri = pusher.outputs['pushed_model'].get()[0].uri\n", "model = tf.saved_model.load(push_uri)\n", "\n", "for item in model.signatures.items():\n", " pp.pprint(item)" ] }, { "cell_type": "markdown", "metadata": { "id": "3-YPNUuHANtj" }, "source": [ "We've finished our tour of the built-in TFX components!" ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [ "wdeKOEkv1Fe8" ], "name": "components.ipynb", "private_outputs": true, "provenance": [], "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.19" }, "widgets": { "application/vnd.jupyter.widget-state+json": { "state": { "2152fd3704f84400a9b8426b1ad6f4a0": { "model_module": "tensorflow_model_analysis", "model_module_version": "0.46.0", "model_name": "SlicingMetricsModel", "state": { "_dom_classes": [], "_model_module": "tensorflow_model_analysis", "_model_module_version": "0.46.0", "_model_name": "SlicingMetricsModel", "_view_count": null, "_view_module": "tensorflow_model_analysis", "_view_module_version": "0.46.0", "_view_name": "SlicingMetricsView", "config": { "weightedExamplesColumn": "example_count" }, "data": [ { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.7678018808364868 }, "accuracy_baseline": { "doubleValue": 0.7399380803108215 }, "auc": { "doubleValue": 0.9429666996002197 }, "auc_precision_recall": { "doubleValue": 0.7726441621780396 }, "average_loss": { "doubleValue": 0.3591391146183014 }, "example_count": { "doubleValue": 323.0 }, "label/mean": { "doubleValue": 0.26006191968917847 }, "post_export_metrics/example_count": { "doubleValue": 323.0 }, "precision": { "doubleValue": 0.800000011920929 }, "prediction/mean": { "doubleValue": 0.2386016547679901 }, "recall": { "doubleValue": 0.1428571492433548 } } } }, "slice": "trip_start_hour:19" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.7640449404716492 }, "accuracy_baseline": { "doubleValue": 0.7584269642829895 }, "auc": { "doubleValue": 0.9453918933868408 }, "auc_precision_recall": { "doubleValue": 0.74111008644104 }, "average_loss": { "doubleValue": 0.3549502491950989 }, "example_count": { "doubleValue": 
178.0 }, "label/mean": { "doubleValue": 0.2415730357170105 }, "post_export_metrics/example_count": { "doubleValue": 178.0 }, "precision": { "doubleValue": 0.6000000238418579 }, "prediction/mean": { "doubleValue": 0.22801615297794342 }, "recall": { "doubleValue": 0.06976744532585144 } } } }, "slice": "trip_start_hour:1" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.8545454740524292 }, "accuracy_baseline": { "doubleValue": 0.8545454740524292 }, "auc": { "doubleValue": 0.9371675848960876 }, "auc_precision_recall": { "doubleValue": 0.5833646059036255 }, "average_loss": { "doubleValue": 0.29158324003219604 }, "example_count": { "doubleValue": 110.0 }, "label/mean": { "doubleValue": 0.145454540848732 }, "post_export_metrics/example_count": { "doubleValue": 110.0 }, "precision": { "doubleValue": 0.5 }, "prediction/mean": { "doubleValue": 0.20872336626052856 }, "recall": { "doubleValue": 0.0625 } } } }, "slice": "trip_start_hour:7" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.8262910842895508 }, "accuracy_baseline": { "doubleValue": 0.7887324094772339 }, "auc": { "doubleValue": 0.9498677253723145 }, "auc_precision_recall": { "doubleValue": 0.7769178152084351 }, "average_loss": { "doubleValue": 0.32382532954216003 }, "example_count": { "doubleValue": 213.0 }, "label/mean": { "doubleValue": 0.2112676054239273 }, "post_export_metrics/example_count": { "doubleValue": 213.0 }, "precision": { "doubleValue": 0.75 }, "prediction/mean": { "doubleValue": 0.24045692384243011 }, "recall": { "doubleValue": 0.2666666805744171 } } } }, "slice": "trip_start_hour:10" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.7914692163467407 }, "accuracy_baseline": { "doubleValue": 0.8009478449821472 }, "auc": { "doubleValue": 0.9348407983779907 }, "auc_precision_recall": { "doubleValue": 0.6406158804893494 }, "average_loss": { "doubleValue": 0.33916714787483215 }, "example_count": { "doubleValue": 211.0 }, "label/mean": { "doubleValue": 0.1990521401166916 }, "post_export_metrics/example_count": { "doubleValue": 211.0 }, "precision": { "doubleValue": 0.3333333432674408 }, "prediction/mean": { "doubleValue": 0.23240745067596436 }, "recall": { "doubleValue": 0.0476190485060215 } } } }, "slice": "trip_start_hour:9" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.762135922908783 }, "accuracy_baseline": { "doubleValue": 0.7427184581756592 }, "auc": { "doubleValue": 0.9643605947494507 }, "auc_precision_recall": { "doubleValue": 0.8129278421401978 }, "average_loss": { "doubleValue": 0.34829050302505493 }, "example_count": { "doubleValue": 206.0 }, "label/mean": { "doubleValue": 0.2572815418243408 }, "post_export_metrics/example_count": { "doubleValue": 206.0 }, "precision": { "doubleValue": 0.75 }, "prediction/mean": { "doubleValue": 0.2179378718137741 }, "recall": { "doubleValue": 0.11320754885673523 } } } }, "slice": "trip_start_hour:0" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.800000011920929 }, "accuracy_baseline": { "doubleValue": 0.8068965673446655 }, "auc": { "doubleValue": 0.9253662824630737 }, "auc_precision_recall": { "doubleValue": 0.578473687171936 }, "average_loss": { "doubleValue": 0.3210127353668213 }, "example_count": { "doubleValue": 145.0 }, "label/mean": { "doubleValue": 0.19310344755649567 }, "post_export_metrics/example_count": { "doubleValue": 145.0 }, "precision": { "doubleValue": 0.0 }, "prediction/mean": { "doubleValue": 0.20404012501239777 }, "recall": { "doubleValue": 0.0 } } } }, "slice": "trip_start_hour:2" }, { "metrics": { "": 
{ "": { "accuracy": { "doubleValue": 0.8199999928474426 }, "accuracy_baseline": { "doubleValue": 0.8119999766349792 }, "auc": { "doubleValue": 0.9375327825546265 }, "auc_precision_recall": { "doubleValue": 0.6602307558059692 }, "average_loss": { "doubleValue": 0.3108265995979309 }, "example_count": { "doubleValue": 250.0 }, "label/mean": { "doubleValue": 0.18799999356269836 }, "post_export_metrics/example_count": { "doubleValue": 250.0 }, "precision": { "doubleValue": 0.6000000238418579 }, "prediction/mean": { "doubleValue": 0.21337896585464478 }, "recall": { "doubleValue": 0.12765957415103912 } } } }, "slice": "trip_start_hour:15" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.7786885499954224 }, "accuracy_baseline": { "doubleValue": 0.7745901346206665 }, "auc": { "doubleValue": 0.933814287185669 }, "auc_precision_recall": { "doubleValue": 0.6979961395263672 }, "average_loss": { "doubleValue": 0.3364473581314087 }, "example_count": { "doubleValue": 244.0 }, "label/mean": { "doubleValue": 0.2254098355770111 }, "post_export_metrics/example_count": { "doubleValue": 244.0 }, "precision": { "doubleValue": 0.5384615659713745 }, "prediction/mean": { "doubleValue": 0.23163820803165436 }, "recall": { "doubleValue": 0.12727272510528564 } } } }, "slice": "trip_start_hour:12" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.7943925261497498 }, "accuracy_baseline": { "doubleValue": 0.7663551568984985 }, "auc": { "doubleValue": 0.9364634156227112 }, "auc_precision_recall": { "doubleValue": 0.7095399498939514 }, "average_loss": { "doubleValue": 0.3383032977581024 }, "example_count": { "doubleValue": 214.0 }, "label/mean": { "doubleValue": 0.23364485800266266 }, "post_export_metrics/example_count": { "doubleValue": 214.0 }, "precision": { "doubleValue": 0.7142857313156128 }, "prediction/mean": { "doubleValue": 0.2368907779455185 }, "recall": { "doubleValue": 0.20000000298023224 } } } }, "slice": "trip_start_hour:11" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.7748344540596008 }, "accuracy_baseline": { "doubleValue": 0.7615894079208374 }, "auc": { "doubleValue": 0.913466215133667 }, "auc_precision_recall": { "doubleValue": 0.6374205350875854 }, "average_loss": { "doubleValue": 0.359292209148407 }, "example_count": { "doubleValue": 302.0 }, "label/mean": { "doubleValue": 0.2384105920791626 }, "post_export_metrics/example_count": { "doubleValue": 302.0 }, "precision": { "doubleValue": 0.6666666865348816 }, "prediction/mean": { "doubleValue": 0.23716086149215698 }, "recall": { "doubleValue": 0.1111111119389534 } } } }, "slice": "trip_start_hour:20" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.782608687877655 }, "accuracy_baseline": { "doubleValue": 0.7463768124580383 }, "auc": { "doubleValue": 0.9551317691802979 }, "auc_precision_recall": { "doubleValue": 0.803945004940033 }, "average_loss": { "doubleValue": 0.3485701084136963 }, "example_count": { "doubleValue": 276.0 }, "label/mean": { "doubleValue": 0.25362318754196167 }, "post_export_metrics/example_count": { "doubleValue": 276.0 }, "precision": { "doubleValue": 0.8125 }, "prediction/mean": { "doubleValue": 0.2301473170518875 }, "recall": { "doubleValue": 0.18571428954601288 } } } }, "slice": "trip_start_hour:22" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.7626459002494812 }, "accuracy_baseline": { "doubleValue": 0.7431906461715698 }, "auc": { "doubleValue": 0.9388386607170105 }, "auc_precision_recall": { "doubleValue": 0.8100572228431702 }, "average_loss": { "doubleValue": 
0.36551401019096375 }, "example_count": { "doubleValue": 257.0 }, "label/mean": { "doubleValue": 0.2568093240261078 }, "post_export_metrics/example_count": { "doubleValue": 257.0 }, "precision": { "doubleValue": 0.7777777910232544 }, "prediction/mean": { "doubleValue": 0.23129314184188843 }, "recall": { "doubleValue": 0.10606060922145844 } } } }, "slice": "trip_start_hour:17" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.8279221057891846 }, "accuracy_baseline": { "doubleValue": 0.7889610528945923 }, "auc": { "doubleValue": 0.9559037685394287 }, "auc_precision_recall": { "doubleValue": 0.7944872379302979 }, "average_loss": { "doubleValue": 0.3145333528518677 }, "example_count": { "doubleValue": 308.0 }, "label/mean": { "doubleValue": 0.2110389620065689 }, "post_export_metrics/example_count": { "doubleValue": 308.0 }, "precision": { "doubleValue": 0.875 }, "prediction/mean": { "doubleValue": 0.2206205576658249 }, "recall": { "doubleValue": 0.2153846174478531 } } } }, "slice": "trip_start_hour:21" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.80859375 }, "accuracy_baseline": { "doubleValue": 0.78125 }, "auc": { "doubleValue": 0.9237053394317627 }, "auc_precision_recall": { "doubleValue": 0.6556787490844727 }, "average_loss": { "doubleValue": 0.3409092426300049 }, "example_count": { "doubleValue": 256.0 }, "label/mean": { "doubleValue": 0.21875 }, "post_export_metrics/example_count": { "doubleValue": 256.0 }, "precision": { "doubleValue": 0.7333333492279053 }, "prediction/mean": { "doubleValue": 0.23624303936958313 }, "recall": { "doubleValue": 0.1964285671710968 } } } }, "slice": "trip_start_hour:13" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.8008298873901367 }, "accuracy_baseline": { "doubleValue": 0.771784245967865 }, "auc": { "doubleValue": 0.9447702765464783 }, "auc_precision_recall": { "doubleValue": 0.7813003659248352 }, "average_loss": { "doubleValue": 0.33834949135780334 }, "example_count": { "doubleValue": 241.0 }, "label/mean": { "doubleValue": 0.2282157689332962 }, "post_export_metrics/example_count": { "doubleValue": 241.0 }, "precision": { "doubleValue": 0.8888888955116272 }, "prediction/mean": { "doubleValue": 0.21412284672260284 }, "recall": { "doubleValue": 0.145454540848732 } } } }, "slice": "trip_start_hour:23" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.7980132699012756 }, "accuracy_baseline": { "doubleValue": 0.7649006843566895 }, "auc": { "doubleValue": 0.9262239933013916 }, "auc_precision_recall": { "doubleValue": 0.7286790609359741 }, "average_loss": { "doubleValue": 0.3576897382736206 }, "example_count": { "doubleValue": 302.0 }, "label/mean": { "doubleValue": 0.23509933054447174 }, "post_export_metrics/example_count": { "doubleValue": 302.0 }, "precision": { "doubleValue": 0.8571428656578064 }, "prediction/mean": { "doubleValue": 0.23986920714378357 }, "recall": { "doubleValue": 0.1690140813589096 } } } }, "slice": "trip_start_hour:18" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.7838982939720154 }, "accuracy_baseline": { "doubleValue": 0.7627118825912476 }, "auc": { "doubleValue": 0.9232638478279114 }, "auc_precision_recall": { "doubleValue": 0.6617361903190613 }, "average_loss": { "doubleValue": 0.36469942331314087 }, "example_count": { "doubleValue": 236.0 }, "label/mean": { "doubleValue": 0.23728813230991364 }, "post_export_metrics/example_count": { "doubleValue": 236.0 }, "precision": { "doubleValue": 0.692307710647583 }, "prediction/mean": { "doubleValue": 0.24736666679382324 }, 
"recall": { "doubleValue": 0.1607142835855484 } } } }, "slice": "trip_start_hour:14" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.7400000095367432 }, "accuracy_baseline": { "doubleValue": 0.7599999904632568 }, "auc": { "doubleValue": 0.8711622953414917 }, "auc_precision_recall": { "doubleValue": 0.5334897041320801 }, "average_loss": { "doubleValue": 0.40110501646995544 }, "example_count": { "doubleValue": 100.0 }, "label/mean": { "doubleValue": 0.23999999463558197 }, "post_export_metrics/example_count": { "doubleValue": 100.0 }, "precision": { "doubleValue": 0.0 }, "prediction/mean": { "doubleValue": 0.2197677195072174 }, "recall": { "doubleValue": 0.0 } } } }, "slice": "trip_start_hour:3" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.770588219165802 }, "accuracy_baseline": { "doubleValue": 0.7647058963775635 }, "auc": { "doubleValue": 0.9263461232185364 }, "auc_precision_recall": { "doubleValue": 0.6987344026565552 }, "average_loss": { "doubleValue": 0.3665175437927246 }, "example_count": { "doubleValue": 170.0 }, "label/mean": { "doubleValue": 0.23529411852359772 }, "post_export_metrics/example_count": { "doubleValue": 170.0 }, "precision": { "doubleValue": 0.6000000238418579 }, "prediction/mean": { "doubleValue": 0.24442484974861145 }, "recall": { "doubleValue": 0.07500000298023224 } } } }, "slice": "trip_start_hour:8" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.8225806355476379 }, "accuracy_baseline": { "doubleValue": 0.8225806355476379 }, "auc": { "doubleValue": 0.9331551194190979 }, "auc_precision_recall": { "doubleValue": 0.6487808227539062 }, "average_loss": { "doubleValue": 0.3097984790802002 }, "example_count": { "doubleValue": 62.0 }, "label/mean": { "doubleValue": 0.17741934955120087 }, "post_export_metrics/example_count": { "doubleValue": 62.0 }, "precision": { "doubleValue": 0.0 }, "prediction/mean": { "doubleValue": 0.1873946189880371 }, "recall": { "doubleValue": 0.0 } } } }, "slice": "trip_start_hour:4" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.7580645084381104 }, "accuracy_baseline": { "doubleValue": 0.7661290168762207 }, "auc": { "doubleValue": 0.9120236039161682 }, "auc_precision_recall": { "doubleValue": 0.6216146945953369 }, "average_loss": { "doubleValue": 0.36124491691589355 }, "example_count": { "doubleValue": 248.0 }, "label/mean": { "doubleValue": 0.2338709682226181 }, "post_export_metrics/example_count": { "doubleValue": 248.0 }, "precision": { "doubleValue": 0.4166666567325592 }, "prediction/mean": { "doubleValue": 0.2471160590648651 }, "recall": { "doubleValue": 0.08620689809322357 } } } }, "slice": "trip_start_hour:16" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.7358490824699402 }, "accuracy_baseline": { "doubleValue": 0.7547169923782349 }, "auc": { "doubleValue": 0.8625000715255737 }, "auc_precision_recall": { "doubleValue": 0.49856889247894287 }, "average_loss": { "doubleValue": 0.3808097541332245 }, "example_count": { "doubleValue": 53.0 }, "label/mean": { "doubleValue": 0.24528302252292633 }, "post_export_metrics/example_count": { "doubleValue": 53.0 }, "precision": { "doubleValue": 0.3333333432674408 }, "prediction/mean": { "doubleValue": 0.25673845410346985 }, "recall": { "doubleValue": 0.07692307978868484 } } } }, "slice": "trip_start_hour:5" }, { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.7377049326896667 }, "accuracy_baseline": { "doubleValue": 0.7377049326896667 }, "auc": { "doubleValue": 0.9506944417953491 }, "auc_precision_recall": { "doubleValue": 
0.7436209321022034 }, "average_loss": { "doubleValue": 0.3651716411113739 }, "example_count": { "doubleValue": 61.0 }, "label/mean": { "doubleValue": 0.26229506731033325 }, "post_export_metrics/example_count": { "doubleValue": 61.0 }, "precision": { "doubleValue": 0.0 }, "prediction/mean": { "doubleValue": 0.22350376844406128 }, "recall": { "doubleValue": 0.0 } } } }, "slice": "trip_start_hour:6" } ], "js_events": [], "layout": "IPY_MODEL_26c5e5d1a43046e5b2ce4390c5ea7bf7" } }, "26c5e5d1a43046e5b2ce4390c5ea7bf7": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, "grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } }, "a3f5ddcf666c42a89b72f62f47e9e65c": { "model_module": "tensorflow_model_analysis", "model_module_version": "0.46.0", "model_name": "SlicingMetricsModel", "state": { "_dom_classes": [], "_model_module": "tensorflow_model_analysis", "_model_module_version": "0.46.0", "_model_name": "SlicingMetricsModel", "_view_count": null, "_view_module": "tensorflow_model_analysis", "_view_module_version": "0.46.0", "_view_name": "SlicingMetricsView", "config": { "weightedExamplesColumn": "example_count" }, "data": [ { "metrics": { "": { "": { "accuracy": { "doubleValue": 0.7881594896316528 }, "accuracy_baseline": { "doubleValue": 0.771244466304779 }, "auc": { "doubleValue": 0.9331230521202087 }, "auc_precision_recall": { "doubleValue": 0.7030625939369202 }, "average_loss": { "doubleValue": 0.345225989818573 }, "example_count": { "doubleValue": 4966.0 }, "label/mean": { "doubleValue": 0.22875553369522095 }, "post_export_metrics/example_count": { "doubleValue": 4966.0 }, "precision": { "doubleValue": 0.6944444179534912 }, "prediction/mean": { "doubleValue": 0.23057205975055695 }, "recall": { "doubleValue": 0.13204225897789001 } } } }, "slice": "Overall" } ], "js_events": [], "layout": "IPY_MODEL_b64aec5209944087bedbe0876dd8a2ef" } }, "b64aec5209944087bedbe0876dd8a2ef": { "model_module": "@jupyter-widgets/base", "model_module_version": "1.2.0", "model_name": "LayoutModel", "state": { "_model_module": "@jupyter-widgets/base", "_model_module_version": "1.2.0", "_model_name": "LayoutModel", "_view_count": null, "_view_module": "@jupyter-widgets/base", "_view_module_version": "1.2.0", "_view_name": "LayoutView", "align_content": null, "align_items": null, "align_self": null, "border": null, "bottom": null, "display": null, "flex": null, "flex_flow": null, "grid_area": null, "grid_auto_columns": null, "grid_auto_flow": null, "grid_auto_rows": null, "grid_column": null, "grid_gap": null, "grid_row": null, 
"grid_template_areas": null, "grid_template_columns": null, "grid_template_rows": null, "height": null, "justify_content": null, "justify_items": null, "left": null, "margin": null, "max_height": null, "max_width": null, "min_height": null, "min_width": null, "object_fit": null, "object_position": null, "order": null, "overflow": null, "overflow_x": null, "overflow_y": null, "padding": null, "right": null, "top": null, "visibility": null, "width": null } } }, "version_major": 2, "version_minor": 0 } } }, "nbformat": 4, "nbformat_minor": 0 }