{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "g_nWetWWd_ns" }, "source": [ "##### Copyright 2021 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "cellView": "form", "execution": { "iopub.execute_input": "2022-12-15T01:10:23.850554Z", "iopub.status.busy": "2022-12-15T01:10:23.849903Z", "iopub.status.idle": "2022-12-15T01:10:23.854110Z", "shell.execute_reply": "2022-12-15T01:10:23.853579Z" }, "id": "2pHVBk_seED1" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "M7vSdG6sAIQn" }, "source": [ "# TensorFlow Lite 모델 분석기" ] }, { "cell_type": "markdown", "metadata": { "id": "fwc5GKHBASdc" }, "source": [ "
![]() | \n",
" ![]() | \n",
" ![]() | \n",
" ![]() | \n",
"
gpu_compatibility=True
옵션을 제공하여 주어진 모델의 GPU 대리자 호환성을 확인하는 방법을 제공합니다.\n"
]
},
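{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before walking through the two cases, here is a minimal sketch of calling the analyzer on an in-memory TensorFlow Lite flatbuffer. The tiny Keras model used here is only an illustration and is not part of the original guide; the `model_content` and `gpu_compatibility` arguments are the same ones used in the case cells below.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"\n",
"# Illustrative model only: convert a trivial Keras model to TFLite and\n",
"# run the analyzer on the resulting flatbuffer bytes.\n",
"simple_model = tf.keras.Sequential([\n",
"    tf.keras.layers.Dense(4, input_shape=(8,))\n",
"])\n",
"simple_fb = tf.lite.TFLiteConverter.from_keras_model(simple_model).convert()\n",
"\n",
"# gpu_compatibility=True adds GPU delegate compatibility checks to the report.\n",
"tf.lite.experimental.Analyzer.analyze(model_content=simple_fb,\n",
"                                      gpu_compatibility=True)"
]
},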
{
"cell_type": "markdown",
"metadata": {
"id": "sVGC1oX33RkV"
},
"source": [
"### 사례 1: 모델이 호환되지 않는 경우\n",
"\n",
"다음 코드는 GPU 대리자와 호환되지 않는 2D 텐서 및 `tf.slice`와 함께 `tf.cosh`를 사용하는 간단한 tf.function에 대해 `gpu_compatibility=True` 옵션을 사용하는 방법을 보여줍니다.\n",
"\n",
"호환성 문제가 있는 모든 노드마다 `GPU COMPATIBILITY WARNING`가 표시됩니다."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"execution": {
"iopub.execute_input": "2022-12-15T01:11:03.814635Z",
"iopub.status.busy": "2022-12-15T01:11:03.813976Z",
"iopub.status.idle": "2022-12-15T01:11:03.912886Z",
"shell.execute_reply": "2022-12-15T01:11:03.912213Z"
},
"id": "9GEg5plIzD-3"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"=== TFLite ModelAnalyzer ===\n",
"\n",
"Your TFLite model has '1' subgraph(s). In the subgraph description below,\n",
"T# represents the Tensor numbers. For example, in Subgraph#0, the FlexCosh op takes\n",
"tensor #0 as input and produces tensor #2 as output.\n",
"\n",
"Subgraph#0 main(T#0) -> [T#4]\n",
" Op#0 FlexCosh(T#0) -> [T#2]\n",
"GPU COMPATIBILITY WARNING: Not supported custom op FlexCosh\n",
" Op#1 SLICE(T#0, T#1[1, 1], T#1[1, 1]) -> [T#3]\n",
"GPU COMPATIBILITY WARNING: SLICE supports for 3 or 4 dimensional tensors only, but node has 2 dimensional tensors.\n",
" Op#2 ADD(T#2, T#3) -> [T#4]\n",
"\n",
"GPU COMPATIBILITY WARNING: Subgraph#0 has GPU delegate compatibility issues at nodes 0, 1 with TFLite runtime version 2.11.0\n",
"\n",
"Tensors of Subgraph#0\n",
" T#0(x) shape:[4, 4], type:FLOAT32\n",
" T#1(Slice/begin) shape:[2], type:INT32 RO 8 bytes, buffer: 2, data:[1, 1]\n",
" T#2(Cosh) shape:[4, 4], type:FLOAT32\n",
" T#3(Slice) shape:[1, 1], type:FLOAT32\n",
" T#4(Identity) shape:[4, 4], type:FLOAT32\n",
"\n",
"---------------------------------------------------------------\n",
" Model size: 1124 bytes\n",
" Non-data buffer size: 1008 bytes (89.68 %)\n",
" Total data buffer size: 116 bytes (10.32 %)\n",
" (Zero value buffers): 0 bytes (00.00 %)\n",
"\n",
"* Buffers of TFLite model are mostly used for constant tensors.\n",
" And zero value buffers are buffers filled with zeros.\n",
" Non-data buffers area are used to store operators, subgraphs and etc.\n",
" You can find more details from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs\n",
"\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"2022-12-15 01:11:03.880058: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format.\n",
"2022-12-15 01:11:03.880096: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency.\n",
"2022-12-15 01:11:03.896718: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:2046] TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Select TFop(s):\n",
"Flex ops: FlexCosh\n",
"Details:\n",
"\ttf.Cosh(tensor<4x4xf32>) -> (tensor<4x4xf32>) : {device = \"\"}\n",
"See instructions: https://www.tensorflow.org/lite/guide/ops_select\n"
]
}
],
"source": [
"import tensorflow as tf\n",
"\n",
"@tf.function(input_signature=[\n",
" tf.TensorSpec(shape=[4, 4], dtype=tf.float32)\n",
"])\n",
"def func(x):\n",
" return tf.cosh(x) + tf.slice(x, [1, 1], [1, 1])\n",
"\n",
"converter = tf.lite.TFLiteConverter.from_concrete_functions(\n",
" [func.get_concrete_function()], func)\n",
"converter.target_spec.supported_ops = [\n",
" tf.lite.OpsSet.TFLITE_BUILTINS,\n",
" tf.lite.OpsSet.SELECT_TF_OPS,\n",
"]\n",
"fb_model = converter.convert()\n",
"\n",
"tf.lite.experimental.Analyzer.analyze(model_content=fb_model, gpu_compatibility=True)"
]
},
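{
"cell_type": "markdown",
"metadata": {},
"source": [
"The warnings above point at the two problems: `FlexCosh` is a select TF op rather than a TFLite builtin, and `SLICE` only supports 3D or 4D tensors. As a hypothetical follow-up that is not part of the original guide, the function can be rewritten with GPU-friendly builtins, for example expressing `cosh` through `tf.exp` and slicing a 4D tensor, and the analyzer can be re-run to check whether the warnings disappear.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical GPU-friendly rewrite: cosh(x) = (exp(x) + exp(-x)) / 2 uses\n",
"# only EXP/ADD/DIV builtins, and the input is kept 4D so SLICE is supported.\n",
"@tf.function(input_signature=[\n",
"    tf.TensorSpec(shape=[1, 4, 4, 1], dtype=tf.float32)\n",
"])\n",
"def gpu_friendly_func(x):\n",
"  cosh_x = (tf.exp(x) + tf.exp(-x)) / 2.0\n",
"  return cosh_x + tf.slice(x, [0, 1, 1, 0], [1, 1, 1, 1])\n",
"\n",
"converter = tf.lite.TFLiteConverter.from_concrete_functions(\n",
"    [gpu_friendly_func.get_concrete_function()], gpu_friendly_func)\n",
"fb_model_rewritten = converter.convert()\n",
"\n",
"tf.lite.experimental.Analyzer.analyze(model_content=fb_model_rewritten,\n",
"                                      gpu_compatibility=True)"
]
},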
{
"cell_type": "markdown",
"metadata": {
"id": "BFU7HYb_2a8M"
},
"source": [
"### 사례 2: 모델이 호환되는 경우\n",
"\n",
"이 예에서 주어진 모델은 GPU 대리자와 호환됩니다.\n",
"\n",
"**참고:** 도구가 호환성 문제를 찾지 못하더라도 모델이 모든 장치에서 GPU 대리자와 잘 작동한다는 보장은 없습니다. 대상 OpenGL 백엔드에서 `CL_DEVICE_IMAGE_SUPPORT` 요소 누락과 같은 런타임 비호환성이 발생할 수 있습니다.\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"execution": {
"iopub.execute_input": "2022-12-15T01:11:03.917150Z",
"iopub.status.busy": "2022-12-15T01:11:03.916488Z",
"iopub.status.idle": "2022-12-15T01:11:04.954932Z",
"shell.execute_reply": "2022-12-15T01:11:04.954083Z"
},
"id": "85RgG6tQ3ABT"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"INFO:tensorflow:Assets written to: /tmpfs/tmp/tmp72m3ikyw/assets\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:tensorflow:Assets written to: /tmpfs/tmp/tmp72m3ikyw/assets\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"=== TFLite ModelAnalyzer ===\n",
"\n",
"Your TFLite model has '1' subgraph(s). In the subgraph description below,\n",
"T# represents the Tensor numbers. For example, in Subgraph#0, the RESHAPE op takes\n",
"tensor #0 and tensor #1 as input and produces tensor #4 as output.\n",
"\n",
"Subgraph#0 main(T#0) -> [T#6]\n",
" Op#0 RESHAPE(T#0, T#1[-1, 16384]) -> [T#4]\n",
" Op#1 FULLY_CONNECTED(T#4, T#2, T#-1) -> [T#5]\n",
" Op#2 FULLY_CONNECTED(T#5, T#3, T#-1) -> [T#6]\n",
"\n",
"Tensors of Subgraph#0\n",
" T#0(serving_default_flatten_2_input:0) shape_signature:[-1, 128, 128], type:FLOAT32\n",
" T#1(sequential_1/flatten_2/Const) shape:[2], type:INT32 RO 8 bytes, buffer: 2, data:[-1, 16384]\n",
" T#2(sequential_1/dense_2/MatMul1) shape:[256, 16384], type:FLOAT32 RO 16777216 bytes, buffer: 3, data:[-0.00593336, -0.0180754, 0.00914702, 0.00351369, -0.015456, ...]\n",
" T#3(sequential_1/dense_3/MatMul) shape:[10, 256], type:FLOAT32 RO 10240 bytes, buffer: 4, data:[-0.0970062, -0.057773, 0.1411, 0.119214, -0.0340087, ...]\n",
" T#4(sequential_1/flatten_2/Reshape) shape_signature:[-1, 16384], type:FLOAT32\n",
" T#5(sequential_1/dense_2/MatMul;sequential_1/dense_2/Relu;sequential_1/dense_2/BiasAdd) shape_signature:[-1, 256], type:FLOAT32\n",
" T#6(StatefulPartitionedCall:0) shape_signature:[-1, 10], type:FLOAT32\n",
"\n",
"\n",
"Your model looks compatible with GPU delegate with TFLite runtime version 2.11.0.\n",
"But it doesn't guarantee that your model works well with GPU delegate.\n",
"There could be some runtime incompatibililty happen.\n",
"---------------------------------------------------------------\n",
"Your TFLite model has '1' signature_def(s).\n",
"\n",
"Signature#0 key: 'serving_default'\n",
"- Subgraph: Subgraph#0\n",
"- Inputs: \n",
" 'flatten_2_input' : T#0\n",
"- Outputs: \n",
" 'dense_3' : T#6\n",
"\n",
"---------------------------------------------------------------\n",
" Model size: 16789068 bytes\n",
" Non-data buffer size: 1504 bytes (00.01 %)\n",
" Total data buffer size: 16787564 bytes (99.99 %)\n",
" (Zero value buffers): 0 bytes (00.00 %)\n",
"\n",
"* Buffers of TFLite model are mostly used for constant tensors.\n",
" And zero value buffers are buffers filled with zeros.\n",
" Non-data buffers area are used to store operators, subgraphs and etc.\n",
" You can find more details from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs\n",
"\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"2022-12-15 01:11:04.612193: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format.\n",
"2022-12-15 01:11:04.612233: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency.\n"
]
}
],
"source": [
"model = tf.keras.models.Sequential([\n",
" tf.keras.layers.Flatten(input_shape=(128, 128)),\n",
" tf.keras.layers.Dense(256, activation='relu'),\n",
" tf.keras.layers.Dropout(0.2),\n",
" tf.keras.layers.Dense(10)\n",
"])\n",
"\n",
"fb_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()\n",
"\n",
"tf.lite.experimental.Analyzer.analyze(model_content=fb_model, gpu_compatibility=True)"
]
}
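,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cases above pass the converted flatbuffer directly via `model_content`. As a closing sketch (assumption: the `model_path` argument of `tf.lite.experimental.Analyzer.analyze` accepts a `.tflite` file on disk), the same analysis can also be run on a saved model file.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pathlib\n",
"import tempfile\n",
"\n",
"# Write the flatbuffer from the previous cell to a temporary .tflite file\n",
"# and analyze it by path instead of by content (model_path is an assumption\n",
"# about the Analyzer API, not shown in the original guide).\n",
"tflite_file = pathlib.Path(tempfile.mkdtemp()) / 'model.tflite'\n",
"tflite_file.write_bytes(fb_model)\n",
"\n",
"tf.lite.experimental.Analyzer.analyze(model_path=str(tflite_file),\n",
"                                      gpu_compatibility=True)"
]
}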
],
"metadata": {
"colab": {
"collapsed_sections": [],
"name": "model_analyzer.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 0
}