{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "c8Cx-rUMVX25" }, "source": [ "##### Copyright 2020 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "I9sUhVL_VZNO" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "6Y8E0lw5eYWm" }, "source": [ "# 使用 int16 激活值进行训练后整数量化" ] }, { "cell_type": "markdown", "metadata": { "id": "CGuqeuPSVNo-" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
" ] }, { "cell_type": "markdown", "metadata": { "id": "BTC1rDAuei_1" }, "source": [ "## 概述\n", "\n", "现在,将模型从 TensorFlow 转换为 TensorFlow Lite 的 FlatBuffer 格式时,[TensorFlow Lite](https://tensorflow.google.cn/lite/) 支持将激活转换为 16 位整数值,同时将权重转换为 8 位整数值。我们将此模式称为“16x8 量化模式”。当激活对量化敏感时,此模式可以大幅提高量化模型的准确率,同时还可以将模型大小缩减四分之一至四分之三。此外,这种完全量化的模型可供仅支持整数的硬件加速器使用。\n", "\n", "一些可以从这种训练后量化模式受益的示例模型包括:\n", "\n", "- 超分辨率,\n", "- 音频信号处理,如噪声消除和波束成形,\n", "- 图像降噪,\n", "- 基于单张图像的 HDR 重建\n", "\n", "在本教程中,您将从头开始训练一个 MNIST 模型,并在 TensorFlow 中检查其准确率,然后使用此模式将该模型转换为 Tensorflow Lite FlatBuffer。最后,您将检查转换的模型的准确率,并将其与原始 float32 模型进行对比。请注意,本示例旨在演示此模式的用法,并不会展现与 TensorFlow Lite 中提供的其他量化技术相比的优势。" ] }, { "cell_type": "markdown", "metadata": { "id": "2XsEP17Zelz9" }, "source": [ "## 构建 MNIST 模型" ] }, { "cell_type": "markdown", "metadata": { "id": "dDqqUIZjZjac" }, "source": [ "### 设置" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "gyqAw1M9lyab" }, "outputs": [], "source": [ "import logging\n", "logging.getLogger(\"tensorflow\").setLevel(logging.DEBUG)\n", "\n", "import tensorflow as tf\n", "from tensorflow import keras\n", "import numpy as np\n", "import pathlib" ] }, { "cell_type": "markdown", "metadata": { "id": "srTSFKjn1tMp" }, "source": [ "检查 16x8 量化模式是否可用 " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "c6nb7OPlXs_3" }, "outputs": [], "source": [ "tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8" ] }, { "cell_type": "markdown", "metadata": { "id": "eQ6Q0qqKZogR" }, "source": [ "### 训练并导出模型" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "hWSAjQWagIHl" }, "outputs": [], "source": [ "# Load MNIST dataset\n", "mnist = keras.datasets.mnist\n", "(train_images, train_labels), (test_images, test_labels) = mnist.load_data()\n", "\n", "# Normalize the input image so that each pixel value is between 0 to 1.\n", "train_images = train_images / 255.0\n", "test_images = test_images / 255.0\n", "\n", "# Define the model architecture\n", "model = keras.Sequential([\n", " keras.layers.InputLayer(input_shape=(28, 28)),\n", " keras.layers.Reshape(target_shape=(28, 28, 1)),\n", " keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),\n", " keras.layers.MaxPooling2D(pool_size=(2, 2)),\n", " keras.layers.Flatten(),\n", " keras.layers.Dense(10)\n", "])\n", "\n", "# Train the digit classification model\n", "model.compile(optimizer='adam',\n", " loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n", " metrics=['accuracy'])\n", "model.fit(\n", " train_images,\n", " train_labels,\n", " epochs=1,\n", " validation_data=(test_images, test_labels)\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "5NMaNZQCkW9X" }, "source": [ "在此示例中,您只对模型进行了一个周期的训练,因此只训练到约 96% 的准确率。" ] }, { "cell_type": "markdown", "metadata": { "id": "xl8_fzVAZwOh" }, "source": [ "### 转换为 TensorFlow Lite 模型\n", "\n", "现在,您可以使用 TensorFlow Lite [Converter](https://tensorflow.google.cn/lite/models/convert) 将训练后的模型转换为 TensorFlow Lite 模型。\n", "\n", "现在,使用 `TFliteConverter` 将模型转换为默认的 float32 格式:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "_i8B2nDZmAgQ" }, "outputs": [], "source": [ "converter = tf.lite.TFLiteConverter.from_keras_model(model)\n", "tflite_model = converter.convert()" ] }, { "cell_type": "markdown", "metadata": { "id": "F2o2ZfF0aiCx" }, "source": [ "将其写入 `.tflite` 文件:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "vptWZq2xnclo" }, "outputs": [], "source": [ "tflite_models_dir = 
pathlib.Path(\"/tmp/mnist_tflite_models/\")\n", "tflite_models_dir.mkdir(exist_ok=True, parents=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Ie9pQaQrn5ue" }, "outputs": [], "source": [ "tflite_model_file = tflite_models_dir/\"mnist_model.tflite\"\n", "tflite_model_file.write_bytes(tflite_model)" ] }, { "cell_type": "markdown", "metadata": { "id": "7BONhYtYocQY" }, "source": [ "要改为将模型量化为 16x8 量化模式,首先将 `optimizations` 标记设置为使用默认优化。然后将 16x8 量化模式指定为目标规范中要求的受支持运算:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "HEZ6ET1AHAS3" }, "outputs": [], "source": [ "converter.optimizations = [tf.lite.Optimize.DEFAULT]\n", "converter.target_spec.supported_ops = [tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8]" ] }, { "cell_type": "markdown", "metadata": { "id": "zLxQwZq9CpN7" }, "source": [ "对于 int8 训练后量化,通过将转换器选项 `inference_input(output)_type` 设置为 tf.int16,可以产生全整数量化模型。" ] }, { "cell_type": "markdown", "metadata": { "id": "yZekFJC5-fOG" }, "source": [ "设置校准数据:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Y3a6XFqvHbYM" }, "outputs": [], "source": [ "mnist_train, _ = tf.keras.datasets.mnist.load_data()\n", "images = tf.cast(mnist_train[0], tf.float32) / 255.0\n", "mnist_ds = tf.data.Dataset.from_tensor_slices((images)).batch(1)\n", "def representative_data_gen():\n", " for input_value in mnist_ds.take(100):\n", " # Model has only one input so each data point has one element.\n", " yield [input_value]\n", "converter.representative_dataset = representative_data_gen" ] }, { "cell_type": "markdown", "metadata": { "id": "xW84iMYjHd9t" }, "source": [ "最后,像往常一样转换模型。请注意,为了方便调用,转换后的模型默认仍将使用浮点输入和输出。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "yuNfl3CoHNK3" }, "outputs": [], "source": [ "tflite_16x8_model = converter.convert()\n", "tflite_model_16x8_file = tflite_models_dir/\"mnist_model_quant_16x8.tflite\"\n", "tflite_model_16x8_file.write_bytes(tflite_16x8_model)" ] }, { "cell_type": "markdown", "metadata": { "id": "PhMmUTl4sbkz" }, "source": [ "请注意,生成文件的大小约为原来的 `1/3`。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "JExfcfLDscu4" }, "outputs": [], "source": [ "!ls -lh {tflite_models_dir}" ] }, { "cell_type": "markdown", "metadata": { "id": "L8lQHMp_asCq" }, "source": [ "## 运行 TensorFlow Lite 模型" ] }, { "cell_type": "markdown", "metadata": { "id": "-5l6-ciItvX6" }, "source": [ "使用 Python TensorFlow Lite 解释器运行 TensorFlow Lite 模型。" ] }, { "cell_type": "markdown", "metadata": { "id": "Ap_jE7QRvhPf" }, "source": [ "### 将模型加载到解释器中" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Jn16Rc23zTss" }, "outputs": [], "source": [ "interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))\n", "interpreter.allocate_tensors()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "J8Pztk1mvNVL" }, "outputs": [], "source": [ "interpreter_16x8 = tf.lite.Interpreter(model_path=str(tflite_model_16x8_file))\n", "interpreter_16x8.allocate_tensors()" ] }, { "cell_type": "markdown", "metadata": { "id": "2opUt_JTdyEu" }, "source": [ "### 在单个图像上测试模型" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "AKslvo2kwWac" }, "outputs": [], "source": [ "test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)\n", "\n", "input_index = interpreter.get_input_details()[0][\"index\"]\n", "output_index = interpreter.get_output_details()[0][\"index\"]\n", "\n", 
"interpreter.set_tensor(input_index, test_image)\n", "interpreter.invoke()\n", "predictions = interpreter.get_tensor(output_index)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "XZClM2vo3_bm" }, "outputs": [], "source": [ "import matplotlib.pylab as plt\n", "\n", "plt.imshow(test_images[0])\n", "template = \"True:{true}, predicted:{predict}\"\n", "_ = plt.title(template.format(true= str(test_labels[0]),\n", " predict=str(np.argmax(predictions[0]))))\n", "plt.grid(False)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "3gwhv4lKbYZ4" }, "outputs": [], "source": [ "test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)\n", "\n", "input_index = interpreter_16x8.get_input_details()[0][\"index\"]\n", "output_index = interpreter_16x8.get_output_details()[0][\"index\"]\n", "\n", "interpreter_16x8.set_tensor(input_index, test_image)\n", "interpreter_16x8.invoke()\n", "predictions = interpreter_16x8.get_tensor(output_index)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "CIH7G_MwbY2x" }, "outputs": [], "source": [ "plt.imshow(test_images[0])\n", "template = \"True:{true}, predicted:{predict}\"\n", "_ = plt.title(template.format(true= str(test_labels[0]),\n", " predict=str(np.argmax(predictions[0]))))\n", "plt.grid(False)" ] }, { "cell_type": "markdown", "metadata": { "id": "LwN7uIdCd8Gw" }, "source": [ "### 评估模型" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "05aeAuWjvjPx" }, "outputs": [], "source": [ "# A helper function to evaluate the TF Lite model using \"test\" dataset.\n", "def evaluate_model(interpreter):\n", " input_index = interpreter.get_input_details()[0][\"index\"]\n", " output_index = interpreter.get_output_details()[0][\"index\"]\n", "\n", " # Run predictions on every image in the \"test\" dataset.\n", " prediction_digits = []\n", " for test_image in test_images:\n", " # Pre-processing: add batch dimension and convert to float32 to match with\n", " # the model's input data format.\n", " test_image = np.expand_dims(test_image, axis=0).astype(np.float32)\n", " interpreter.set_tensor(input_index, test_image)\n", "\n", " # Run inference.\n", " interpreter.invoke()\n", "\n", " # Post-processing: remove batch dimension and find the digit with highest\n", " # probability.\n", " output = interpreter.tensor(output_index)\n", " digit = np.argmax(output()[0])\n", " prediction_digits.append(digit)\n", "\n", " # Compare prediction results with ground truth labels to calculate accuracy.\n", " accurate_count = 0\n", " for index in range(len(prediction_digits)):\n", " if prediction_digits[index] == test_labels[index]:\n", " accurate_count += 1\n", " accuracy = accurate_count * 1.0 / len(prediction_digits)\n", "\n", " return accuracy" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "T5mWkSbMcU5z" }, "outputs": [], "source": [ "print(evaluate_model(interpreter))" ] }, { "cell_type": "markdown", "metadata": { "id": "Km3cY9ry8ZlG" }, "source": [ "在 16x8 量化模型上重复评估:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "-9cnwiPp6EGm" }, "outputs": [], "source": [ "# NOTE: This quantization mode is an experimental post-training mode,\n", "# it does not have any optimized kernels implementations or\n", "# specialized machine learning hardware accelerators. 
Therefore,\n", "# it could be slower than the float interpreter.\n", "print(evaluate_model(interpreter_16x8))" ] }, { "cell_type": "markdown", "metadata": { "id": "L7lfxkor8pgv" }, "source": [ "在此示例中,您已将模型量化为 16x8 模型,准确率没有任何差异,但文件大小只有原来的 1/3。\n" ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "post_training_integer_quant_16x8.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }