{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "Mq-riZs-TJGt" }, "source": [ "##### Copyright 2021 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "LEvnopDoTC4M" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "QSRG6qmtTRSk" }, "source": [ "# TensorFlow Lite Metadata Writer API\n" ] }, { "cell_type": "markdown", "metadata": { "id": "JlzjEt4Txr0x" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
View on TensorFlow.org\n", "    Run in Google Colab\n", "    View source on GitHub\n", "    Download notebook
" ] }, { "cell_type": "markdown", "metadata": { "id": "b0gwEhfRYat6" }, "source": [ "[TensorFlow Lite Model Metadata](https://tensorflow.google.cn/lite/models/convert/metadata) 是标准的模型描述格式。它包含了丰富的通用模型信息、输入/输出和相关文件的语义,这使得模型具有自描述性和可交换性。\n", "\n", "Model Metadata 模型元数据目前用于以下两个主要用例:\n", "\n", "1. **使用 TensorFlow Lite [Task Library](https://tensorflow.google.cn/lite/inference_with_metadata/task_library/overview) 和 [Codegen 工具](https://tensorflow.google.cn/lite/inference_with_metadata/codegen)启用简单模型推断**。Model Metadata 包含推断过程中必需的信息,如图像分类中的标签文件、音频分类中音频输入的采样率以及自然语言模型中用于处理输入字符串的标记器类型。\n", "\n", "2. **使模型创建者能够包括文档**,例如模型输入/输出的说明或模型使用方式。模型用户可以通过可视化工具(如 [Netron](https://netron.app/))查看这些文档。\n", "\n", "TensorFlow Lite Metadata Writer API 提供了一个易用的 API 来为 TFLite Task Library 支持的常用机器学习任务创建模型元数据。本笔记本展示了应如何为以下任务填充元数据的示例:\n", "\n", "- [图像分类器](#image_classifiers)\n", "- [目标检测器](#object_detectors)\n", "- [图像分割器](#image_segmenters)\n", "- [自然语言分类器](#nl_classifiers)\n", "- [音频分类器](#audio_classifiers)\n", "\n", "适用于 BERT 自然语言分类器和 BERT 问答器的元数据编写器即将推出。\n", "\n", "如果要为不受支持的用例添加元数据,请使用 [Flatbuffers Python API](https://tensorflow.google.cn/lite/models/convert/metadata#adding_metadata)。请参阅[此处](https://tensorflow.google.cn/lite/models/convert/metadata#adding_metadata)的教程。\n" ] }, { "cell_type": "markdown", "metadata": { "id": "GVRIGdA4T6tO" }, "source": [ "## 先决条件" ] }, { "cell_type": "markdown", "metadata": { "id": "bVTD2KSyotBK" }, "source": [ "安装 TensorFlow Lite Support Pypi 软件包。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "m-8xSrSvUg-6" }, "outputs": [], "source": [ "!pip install tflite-support-nightly" ] }, { "cell_type": "markdown", "metadata": { "id": "hyYS87Odpxef" }, "source": [ "## 为 Task Library 和 Codegen 创建模型元数据" ] }, { "cell_type": "markdown", "metadata": { "id": "uLxv541TqTim" }, "source": [ "\n", "\n", "### 图像分类器" ] }, { "cell_type": "markdown", "metadata": { "id": "s41TjCGlsyEF" }, "source": [ "有关支持的模型格式的更多详细信息,请参阅[图像分类器模型兼容性要求](https://tensorflow.google.cn/lite/inference_with_metadata/task_library/image_classifier#model_compatibility_requirements)。" ] }, { "cell_type": "markdown", "metadata": { "id": "_KsPKmg8T9-8" }, "source": [ "第 1 步:导入所需的软件包。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "hhgNqEtWrwB3" }, "outputs": [], "source": [ "from tflite_support.metadata_writers import image_classifier\n", "from tflite_support.metadata_writers import writer_utils" ] }, { "cell_type": "markdown", "metadata": { "id": "o9WBgiFdsiIQ" }, "source": [ "第 2 步:下载图像分类器示例,[mobilenet_v2_1.0_224.tflite](https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/mobilenet_v2_1.0_224.tflite) 和[标签文件](https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/labels.txt)。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "6WgSBbNet-Tt" }, "outputs": [], "source": [ "!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/mobilenet_v2_1.0_224.tflite -o mobilenet_v2_1.0_224.tflite\n", "!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/labels.txt -o mobilenet_labels.txt" ] }, { "cell_type": "markdown", "metadata": { "id": "ALtlz7woweHe" }, "source": [ "第 3 步:创建元数据编写器并填充。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": 
"_SMEBBt2r-W6" }, "outputs": [], "source": [ "ImageClassifierWriter = image_classifier.MetadataWriter\n", "_MODEL_PATH = \"mobilenet_v2_1.0_224.tflite\"\n", "# Task Library expects label files that are in the same format as the one below.\n", "_LABEL_FILE = \"mobilenet_labels.txt\"\n", "_SAVE_TO_PATH = \"mobilenet_v2_1.0_224_metadata.tflite\"\n", "# Normalization parameters is required when reprocessing the image. It is\n", "# optional if the image pixel values are in range of [0, 255] and the input\n", "# tensor is quantized to uint8. See the introduction for normalization and\n", "# quantization parameters below for more details.\n", "# https://tensorflow.google.cn/lite/models/convert/metadata#normalization_and_quantization_parameters)\n", "_INPUT_NORM_MEAN = 127.5\n", "_INPUT_NORM_STD = 127.5\n", "\n", "# Create the metadata writer.\n", "writer = ImageClassifierWriter.create_for_inference(\n", " writer_utils.load_file(_MODEL_PATH), [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],\n", " [_LABEL_FILE])\n", "\n", "# Verify the metadata generated by metadata writer.\n", "print(writer.get_metadata_json())\n", "\n", "# Populate the metadata into the model.\n", "writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)" ] }, { "cell_type": "markdown", "metadata": { "id": "GhhTDkr-uf0n" }, "source": [ "\n", "\n", "### 目标检测器" ] }, { "cell_type": "markdown", "metadata": { "id": "EL9GssnTuf0n" }, "source": [ "有关支持的模型格式的更多详细信息,请参阅[目标检测器模型兼容性要求](https://tensorflow.google.cn/lite/inference_with_metadata/task_library/object_detector#model_compatibility_requirements)。" ] }, { "cell_type": "markdown", "metadata": { "id": "r-HUTEtHuf0n" }, "source": [ "第 1 步:导入所需的软件包。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2_NIROeouf0o" }, "outputs": [], "source": [ "from tflite_support.metadata_writers import object_detector\n", "from tflite_support.metadata_writers import writer_utils" ] }, { "cell_type": "markdown", "metadata": { "id": "UM6jijiUuf0o" }, "source": [ "第 2 步:下载示例目标检测器 [ssd_mobilenet_v1.tflite](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/python/tests/testdata/object_detector/ssd_mobilenet_v1.tflite) 和[标签文件](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/python/tests/testdata/object_detector/labelmap.txt)。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "4i_BBfGzuf0o" }, "outputs": [], "source": [ "!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/object_detector/ssd_mobilenet_v1.tflite -o ssd_mobilenet_v1.tflite\n", "!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/object_detector/labelmap.txt -o ssd_mobilenet_labels.txt" ] }, { "cell_type": "markdown", "metadata": { "id": "DG9T3eSDwsnd" }, "source": [ "第 3 步:创建元数据编写器并填充。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "vMGGeJfCuf0p" }, "outputs": [], "source": [ "ObjectDetectorWriter = object_detector.MetadataWriter\n", "_MODEL_PATH = \"ssd_mobilenet_v1.tflite\"\n", "# Task Library expects label files that are in the same format as the one below.\n", "_LABEL_FILE = \"ssd_mobilenet_labels.txt\"\n", "_SAVE_TO_PATH = \"ssd_mobilenet_v1_metadata.tflite\"\n", "# Normalization parameters is required when reprocessing the image. It is\n", "# optional if the image pixel values are in range of [0, 255] and the input\n", "# tensor is quantized to uint8. 
{ "cell_type": "markdown", "metadata": { "id": "QT0Oa0SU6uGS" }, "source": [ "<a name=\"image_segmenters\"></a>\n", "\n", "### Image segmenters" ] },
{ "cell_type": "markdown", "metadata": { "id": "XaFQmg-S6uGW" }, "source": [ "See the [image segmenter model compatibility requirements](https://tensorflow.google.cn/lite/inference_with_metadata/task_library/image_segmenter#model_compatibility_requirements) for more details about the supported model format." ] },
{ "cell_type": "markdown", "metadata": { "id": "DiktANhj6uGX" }, "source": [ "Step 1: Import the required packages." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "H6Lrw3op6uGX" }, "outputs": [], "source": [ "from tflite_support.metadata_writers import image_segmenter\n", "from tflite_support.metadata_writers import writer_utils" ] },
{ "cell_type": "markdown", "metadata": { "id": "9EFs8Oyi6uGX" }, "source": [ "Step 2: Download the example image segmenter, [deeplabv3.tflite](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/python/tests/testdata/image_segmenter/deeplabv3.tflite), and the [label file](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/python/tests/testdata/image_segmenter/labelmap.txt)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "feQDH0bN6uGY" }, "outputs": [], "source": [ "!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_segmenter/deeplabv3.tflite -o deeplabv3.tflite\n", "!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_segmenter/labelmap.txt -o deeplabv3_labels.txt" ] },
{ "cell_type": "markdown", "metadata": { "id": "8LhiAbJM6uGY" }, "source": [ "Step 3: Create the metadata writer and populate." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "yot8xLI46uGY" }, "outputs": [], "source": [ "ImageSegmenterWriter = image_segmenter.MetadataWriter\n", "_MODEL_PATH = \"deeplabv3.tflite\"\n", "# Task Library expects label files that are in the same format as the one below.\n", "_LABEL_FILE = \"deeplabv3_labels.txt\"\n", "_SAVE_TO_PATH = \"deeplabv3_metadata.tflite\"\n", "# Normalization parameters are required when preprocessing the image. They are\n", "# optional if the image pixel values are in the range of [0, 255] and the input\n", "# tensor is quantized to uint8. See the introduction to normalization and\n", "# quantization parameters below for more details.\n", "# https://tensorflow.google.cn/lite/models/convert/metadata#normalization_and_quantization_parameters\n", "_INPUT_NORM_MEAN = 127.5\n", "_INPUT_NORM_STD = 127.5\n", "\n", "# Create the metadata writer.\n", "writer = ImageSegmenterWriter.create_for_inference(\n", "    writer_utils.load_file(_MODEL_PATH), [_INPUT_NORM_MEAN], [_INPUT_NORM_STD],\n", "    [_LABEL_FILE])\n", "\n", "# Verify the metadata generated by the metadata writer.\n", "print(writer.get_metadata_json())\n", "\n", "# Populate the metadata into the model.\n", "writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)" ] },
{ "cell_type": "markdown", "metadata": { "id": "NnvM80e7AG-h" }, "source": [ "<a name=\"nl_classifiers\"></a>\n", "\n", "### Natural language classifiers" ] },
{ "cell_type": "markdown", "metadata": { "id": "dfOPhFwOAG-k" }, "source": [ "See the [natural language classifier model compatibility requirements](https://tensorflow.google.cn/lite/inference_with_metadata/task_library/nl_classifier#model_compatibility_requirements) for more details about the supported model format." ] },
{ "cell_type": "markdown", "metadata": { "id": "WMJ7tvuwAG-k" }, "source": [ "Step 1: Import the required packages." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "_FGVyb2iAG-k" }, "outputs": [], "source": [ "from tflite_support.metadata_writers import nl_classifier\n", "from tflite_support.metadata_writers import metadata_info\n", "from tflite_support.metadata_writers import writer_utils" ] },
{ "cell_type": "markdown", "metadata": { "id": "iIg7rATpAG-l" }, "source": [ "Step 2: Download the example natural language classifier, [movie_review.tflite](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/python/tests/testdata/nl_classifier/movie_review.tflite), the [label file](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/python/tests/testdata/nl_classifier/labels.txt), and the [vocab file](https://storage.googleapis.com/download.tensorflow.org/models/tflite_support/nl_classifier/vocab.txt)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "TzuQcti2AG-l" }, "outputs": [], "source": [ "!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/nl_classifier/movie_review.tflite -o movie_review.tflite\n", "!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/nl_classifier/labels.txt -o movie_review_labels.txt\n", "!curl -L https://storage.googleapis.com/download.tensorflow.org/models/tflite_support/nl_classifier/vocab.txt -o movie_review_vocab.txt" ] },
{ "cell_type": "markdown", "metadata": { "id": "BWxUtHdeAG-m" }, "source": [ "Step 3: Create the metadata writer and populate." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "NGPWzRuHAG-m" }, "outputs": [], "source": [ "NLClassifierWriter = nl_classifier.MetadataWriter\n", "_MODEL_PATH = \"movie_review.tflite\"\n", "# Task Library expects label files and vocab files that are in the same formats\n", "# as the ones below.\n", "_LABEL_FILE = \"movie_review_labels.txt\"\n", "_VOCAB_FILE = \"movie_review_vocab.txt\"\n", "# NLClassifier supports tokenizing the input string with a regex tokenizer. See\n", "# more details about how to set up the RegexTokenizer below:\n", "# https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/python/metadata_writers/metadata_info.py#L130\n", "_DELIM_REGEX_PATTERN = r\"[^\\w\\']+\"\n", "_SAVE_TO_PATH = \"movie_review_metadata.tflite\"\n", "\n", "# Create the metadata writer.\n", "writer = NLClassifierWriter.create_for_inference(\n", "    writer_utils.load_file(_MODEL_PATH),\n", "    metadata_info.RegexTokenizerMd(_DELIM_REGEX_PATTERN, _VOCAB_FILE),\n", "    [_LABEL_FILE])\n", "\n", "# Verify the metadata generated by the metadata writer.\n", "print(writer.get_metadata_json())\n", "\n", "# Populate the metadata into the model.\n", "writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)" ] },
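{ "cell_type": "markdown", "metadata": {}, "source": [ "Note (added for illustration, not part of the original tutorial): `_DELIM_REGEX_PATTERN` is a *delimiter* pattern, i.e. the input string is split on runs of characters that are neither word characters nor apostrophes. The rough sketch below approximates that splitting with Python's built-in `re` module; the actual RegexTokenizer in the Task Library additionally maps each resulting token to an id through the vocab file." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sketch only: approximate how the delimiter regex above segments text.\n", "import re\n", "\n", "_DELIM_REGEX_PATTERN = r\"[^\\w\\']+\"\n", "print(re.split(_DELIM_REGEX_PATTERN, \"This is not a good movie\"))\n", "# ['This', 'is', 'not', 'a', 'good', 'movie']" ] },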
See\n", "# more details about how to set up RegexTokenizer below:\n", "# https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/python/metadata_writers/metadata_info.py#L130\n", "_DELIM_REGEX_PATTERN = r\"[^\\w\\']+\"\n", "_SAVE_TO_PATH = \"moview_review_metadata.tflite\"\n", "\n", "# Create the metadata writer.\n", "writer = nl_classifier.MetadataWriter.create_for_inference(\n", " writer_utils.load_file(_MODEL_PATH),\n", " metadata_info.RegexTokenizerMd(_DELIM_REGEX_PATTERN, _VOCAB_FILE),\n", " [_LABEL_FILE])\n", "\n", "# Verify the metadata generated by metadata writer.\n", "print(writer.get_metadata_json())\n", "\n", "# Populate the metadata into the model.\n", "writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)" ] }, { "cell_type": "markdown", "metadata": { "id": "qv0WDnzW711f" }, "source": [ "\n", "\n", "### 音频分类器" ] }, { "cell_type": "markdown", "metadata": { "id": "xqP7X8jww8pL" }, "source": [ "有关支持的模型格式的更多详细信息,请参阅[音频分类器模型兼容性要求](https://tensorflow.google.cn/lite/inference_with_metadata/task_library/audio_classifier#model_compatibility_requirements)。" ] }, { "cell_type": "markdown", "metadata": { "id": "7RToKepxw8pL" }, "source": [ "第 1 步:导入所需的软件包。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "JjddvTXKw8pL" }, "outputs": [], "source": [ "from tflite_support.metadata_writers import audio_classifier\n", "from tflite_support.metadata_writers import metadata_info\n", "from tflite_support.metadata_writers import writer_utils" ] }, { "cell_type": "markdown", "metadata": { "id": "ar418rH6w8pL" }, "source": [ "第 2 步:下载示例音频分类器 [yamnet.tflite](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/python/tests/testdata/audio_classifier/yamnet_wavin_quantized_mel_relu6.tflite) 和[标签文件](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/metadata/python/tests/testdata/audio_classifier/yamnet_521_labels.txt)。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "5eQY6znmw8pM" }, "outputs": [], "source": [ "!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/audio_classifier/yamnet_wavin_quantized_mel_relu6.tflite -o yamnet.tflite\n", "!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/audio_classifier/yamnet_521_labels.txt -o yamnet_labels.txt\n" ] }, { "cell_type": "markdown", "metadata": { "id": "1TYP5w0Ew8pM" }, "source": [ "第 3 步:创建元数据编写器并填充。" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "MDlSczBQw8pM" }, "outputs": [], "source": [ "AudioClassifierWriter = audio_classifier.MetadataWriter\n", "_MODEL_PATH = \"yamnet.tflite\"\n", "# Task Library expects label files that are in the same format as the one below.\n", "_LABEL_FILE = \"yamnet_labels.txt\"\n", "# Expected sampling rate of the input audio buffer.\n", "_SAMPLE_RATE = 16000\n", "# Expected number of channels of the input audio buffer. 
{ "cell_type": "markdown", "metadata": { "id": "YoRLs84yNAJR" }, "source": [ "## Create model metadata with semantic information" ] },
{ "cell_type": "markdown", "metadata": { "id": "cxXsOBknOGJ2" }, "source": [ "Using the Metadata Writer API, you can fill in more descriptive information about the model and each tensor to help improve model understanding. This is done through the `create_from_metadata_info` method in each metadata writer. In general, you fill in the data through the parameters of `create_from_metadata_info`, i.e. `general_md`, `input_md`, and `output_md`. See the example below, which creates rich model metadata for an image classifier." ] },
{ "cell_type": "markdown", "metadata": { "id": "Q-LW6nrcQ9lv" }, "source": [ "Step 1: Import the required packages." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "KsL_egYcRGw3" }, "outputs": [], "source": [ "from tflite_support.metadata_writers import image_classifier\n", "from tflite_support.metadata_writers import metadata_info\n", "from tflite_support.metadata_writers import writer_utils\n", "from tflite_support import metadata_schema_py_generated as _metadata_fb" ] },
{ "cell_type": "markdown", "metadata": { "id": "0UWck_8uRboF" }, "source": [ "Step 2: Download the example image classifier, [mobilenet_v2_1.0_224.tflite](https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/mobilenet_v2_1.0_224.tflite), and the [label file](https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/labels.txt)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "TqJ-jh-PRVdk" }, "outputs": [], "source": [ "!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/mobilenet_v2_1.0_224.tflite -o mobilenet_v2_1.0_224.tflite\n", "!curl -L https://github.com/tensorflow/tflite-support/raw/master/tensorflow_lite_support/metadata/python/tests/testdata/image_classifier/labels.txt -o mobilenet_labels.txt" ] },
{ "cell_type": "markdown", "metadata": { "id": "r4I5wJMQRxzb" }, "source": [ "Step 3: Create model and tensor information." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "urd7HDuaR_HC" }, "outputs": [], "source": [ "model_buffer = writer_utils.load_file(\"mobilenet_v2_1.0_224.tflite\")\n", "\n", "# Create general model information.\n", "general_md = metadata_info.GeneralMd(\n", "    name=\"ImageClassifier\",\n", "    version=\"v1\",\n", "    description=(\"Identify the most prominent object in the image from a \"\n", "                 \"known set of categories.\"),\n", "    author=\"TensorFlow Lite\",\n", "    licenses=\"Apache License. Version 2.0\")\n", "\n", "# Create input tensor information.\n", "input_md = metadata_info.InputImageTensorMd(\n", "    name=\"input image\",\n", "    description=(\"Input image to be classified. The expected image is \"\n", "                 \"224 x 224, with three channels (red, green, and blue) per \"\n", "                 \"pixel. Each element in the tensor is a value between min and \"\n", "                 \"max, where (per-channel) min is [0] and max is [255].\"),\n", "    norm_mean=[127.5],\n", "    norm_std=[127.5],\n", "    color_space_type=_metadata_fb.ColorSpaceType.RGB,\n", "    tensor_type=writer_utils.get_input_tensor_types(model_buffer)[0])\n", "\n", "# Create output tensor information.\n", "output_md = metadata_info.ClassificationTensorMd(\n", "    name=\"probability\",\n", "    description=\"Probabilities of the 1001 labels respectively.\",\n", "    label_files=[\n", "        metadata_info.LabelFileMd(file_path=\"mobilenet_labels.txt\",\n", "                                  locale=\"en\")\n", "    ],\n", "    tensor_type=writer_utils.get_output_tensor_types(model_buffer)[0])" ] },
{ "cell_type": "markdown", "metadata": { "id": "N5aL5Uxkf4aO" }, "source": [ "Step 4: Create the metadata writer and populate." ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "_iWIwdqEf_mr" }, "outputs": [], "source": [ "ImageClassifierWriter = image_classifier.MetadataWriter\n", "_SAVE_TO_PATH = \"mobilenet_v2_1.0_224_metadata.tflite\"\n", "\n", "# Create the metadata writer.\n", "writer = ImageClassifierWriter.create_from_metadata_info(\n", "    model_buffer, general_md, input_md, output_md)\n", "\n", "# Verify the metadata generated by the metadata writer.\n", "print(writer.get_metadata_json())\n", "\n", "# Populate the metadata into the model.\n", "writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)" ] },
{ "cell_type": "markdown", "metadata": { "id": "z78vuu6np5sb" }, "source": [ "## Read the metadata populated into the model" ] },
{ "cell_type": "markdown", "metadata": { "id": "DnWt-4oOselo" }, "source": [ "You can display the metadata and associated files in a TFLite model using the following code:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": { "id": "5D13YPUsp5VT" }, "outputs": [], "source": [ "from tflite_support import metadata\n", "\n", "displayer = metadata.MetadataDisplayer.with_model_file(\"mobilenet_v2_1.0_224_metadata.tflite\")\n", "print(\"Metadata populated:\")\n", "print(displayer.get_metadata_json())\n", "\n", "print(\"Associated file(s) populated:\")\n", "for file_name in displayer.get_packed_associated_file_list():\n", "  print(\"file name: \", file_name)\n", "  print(\"file content:\")\n", "  print(displayer.get_associated_file_buffer(file_name))" ] }
], "metadata": { "colab": { "collapsed_sections": [ "Mq-riZs-TJGt" ], "name": "metadata_writer_tutorial.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }