{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "6bYaCABobL5q" }, "source": [ "##### Copyright 2018 The TensorFlow Authors." ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "cellView": "form", "execution": { "iopub.execute_input": "2022-12-14T22:30:39.015934Z", "iopub.status.busy": "2022-12-14T22:30:39.015584Z", "iopub.status.idle": "2022-12-14T22:30:39.018994Z", "shell.execute_reply": "2022-12-14T22:30:39.018465Z" }, "id": "FlUw7tSKbtg4" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "08OTcmxgqkc2" }, "source": [ "# TF 1.x および compat.v1 API シンボルを自動的に書き換える\n", "\n", "\n", " \n", " \n", " \n", " \n", "
View on TensorFlow.org\n", "    Run in Google Colab\n", "    View source on GitHubDownload notebook
\n" ] }, { "cell_type": "markdown", "metadata": { "id": "hZSaRPoybOp5" }, "source": [ "TensorFlow 2.x には引数の並べ替え、シンボル名の変更、パラメータのデフォルト値の変更など、多くの API の変更が含まれています。これらの変更をすべて手動で実行するのは退屈で、エラーが発生しやすくなります。変更を合理化し、可能な限りシームレスに TF 2.x に移行できるよう、TensorFlow チームはレガシーコードの新しい API への移行を支援する `tf_upgrade_v2` ユーティリティを作成しています。\n", "\n", "注意: tf_upgrade_v2 は TensorFlow 1.13 以降(すべての TF 2.0 ビルドを含む)に自動的にインストールされています。\n", "\n", "一般的な使用方法は以下のとおりです。\n", "\n", "
\n",
    "tf_upgrade_v2 \\\n",
    "  --intree my_project/ \\\n",
    "  --outtree my_project_v2/ \\\n",
    "  --reportfile report.txt\n",
    "
\n", "\n", "これにより、既存の TensorFlow 1.x Python スクリプトが TensorFlow 2.0 に変換され、アップグレード処理が高速化します。\n", "\n", "多くの API は自動的に移行できませんが、変換スクリプトは多くの機械的な API 変換を自動化します。また、TF2 の動作や API と完全に互換性のあるコードを作成することもできません。したがって、これは移行プロセスの一部にすぎません。" ] }, { "cell_type": "markdown", "metadata": { "id": "gP9v2vgptdfi" }, "source": [ "## 互換性モジュール\n", "\n", "一部の API シンボルは、文字列置換を使用するだけではアップグレードできません。自動的にアップグレードできないものは、`compat.v1` モジュール内の場所にマッピングされます。このモジュールは、`tf.foo` のような TF 1.x シンボルを同等の `tf.compat.v1.foo` リファレンスに置き換えます。`import tensorflow.compat.v1 as tf` 経由で TF をインポートして `compat.v1` API を既に使用している場合、`tf_upgrade_v2` スクリプトはこれらの使用法を非互換 API に変換するように試みます。一部の `compat.v1` API は TF2.x の動作と互換性がありますが、多くは互換性がないことに注意してください。したがって、置換を手動で確認し、できるだけ早く `tf.compat.v1` 名前空間ではなく `tf.*` 名前空間の新しい API に移行することを推薦します。\n", "\n", "TensorFlow 2.x で廃止されているモジュール(tf.flags や tf.contrib など)があるため、一部の変更は compat.v1 に切り替えても対応できません。このようなコードをアップグレードするには、追加のライブラリ(absl.flags など)を使用するか、tensorflow/addons にあるパッケージに切り替える必要があるかもしれません。\n" ] }, { "cell_type": "markdown", "metadata": { "id": "s78bbfjkXYb7" }, "source": [ "## 推奨アップグレード手順\n", "\n", "このガイドの残りの部分では、シンボル書き換えスクリプトの使用方法を示します。スクリプトは簡単に使用できますが、次のプロセスの一部としてスクリプトを使用することを強くお勧めします。\n", "\n", "1. 単体テスト: アップグレード対象のコードにカバレッジ率が適度な単体テストスイートを確実に用意します。このコードは Python で記述されているため、さまざまなミスから保護されることはありません。また、すべての依存物が TensorFlow 2.0 との互換性を確保できるようにアップグレード済みであることを確認してください。\n", "\n", "2. **TensorFlow 1.15** のインストール: TensorFlow を最新の TensorFlow 1.x バージョン(1.15 以上)にアップグレードします。このバージョンには `tf.compat.v2` に最終的な TensorFlow 2.0 API が含まれています。\n", "\n", "3. 1.14 でのテスト: この時点で単体テストに合格することを確認します。単体テストはアップグレード中に何度も実行することになるため、安全な状態で開始することが重要です。\n", "\n", "4. アップグレードスクリプトの実行: テストを含むソースツリー全体で tf_upgrade_v2 を実行します。これにより、TensorFlow 2.0 で利用できるシンボルのみを使用する形式にコードがアップグレードされます。廃止されたシンボルは tf.compat.v1 でアクセスできます。このようなシンボルは最終的には手動での対応が必要ですが、すぐに対応する必要はありません。\n", "\n", "5. 変換後のテストを TensorFlow 1.14 で実行: コードは引き続き TensorFlow 1.14 で正常に動作するはずです。もう一度単体テストを実行してください。テストで何らかのエラーが発生する場合は、アップグレードスクリプトにバグがあります。その場合はお知らせください。\n", "\n", "6. 
**Read the upgrade report's warnings and errors**: The script writes out a report file that explains any conversions you should double-check and any manual actions you need to take. For example, any remaining instances of contrib will require manual action to remove. Please consult the RFC for more instructions.\n", "\n", "7. **Install TensorFlow 2.x**: At this point it should be safe to switch to TensorFlow 2.x binaries, even if you are running with the legacy behaviors.\n", "\n", "8. **Test with `v1.disable_v2_behavior`**: Re-running your tests with `v1.disable_v2_behavior()` in the tests' main function should give the same results as running under 1.15.\n", "\n", "9. **Enable V2 behavior**: Now that your tests work using the TF2 binaries, you can begin migrating your code to avoid `tf.estimator` and to use only supported TF2 behaviors (with no TF2 behavior disabling). See the [migration guides](https://tensorflow.org/guide/migrate) for details." ] }, { "cell_type": "markdown", "metadata": { "id": "6pwSAQEwvscP" }, "source": [ "## Using the symbol-rewriting `tf_upgrade_v2` script\n" ] }, { "cell_type": "markdown", "metadata": { "id": "I9NCvDt5GwX4" }, "source": [ "### Setup\n", "\n", "Before getting started, ensure that TensorFlow 2.x is installed." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:30:39.022596Z", "iopub.status.busy": "2022-12-14T22:30:39.022081Z", "iopub.status.idle": "2022-12-14T22:30:40.927416Z", "shell.execute_reply": "2022-12-14T22:30:40.926751Z" }, "id": "DWVYbvi1WCeY" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2022-12-14 22:30:39.957564: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory\n", "2022-12-14 22:30:39.957654: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory\n", "2022-12-14 22:30:39.957663: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. 
If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "2.11.0\n" ] } ], "source": [ "import tensorflow as tf\n", "\n", "print(tf.__version__)" ] }, { "cell_type": "markdown", "metadata": { "id": "Ycy3B5PNGutU" }, "source": [ "Clone the tensorflow/models git repository so you have some code to test on:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:30:40.930752Z", "iopub.status.busy": "2022-12-14T22:30:40.930350Z", "iopub.status.idle": "2022-12-14T22:30:52.352416Z", "shell.execute_reply": "2022-12-14T22:30:52.351586Z" }, "id": "jyckoWyAZEhZ" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cloning into 'models'...\r\n", "remote: Enumerating objects: 2927, done.\u001b[K\r\n", "remote: Counting objects: 100% (2927/2927), done.\u001b[K\r\n", "remote: Compressing objects: 100% (2428/2428), done.\u001b[K\r\n", "remote: Total 2927 (delta 503), reused 2114 (delta 424), pack-reused 0\u001b[K\r\n", "Receiving objects: 100% (2927/2927), 369.04 MiB | 58.95 MiB/s, done.\r\n", "Resolving deltas: 100% (503/503), done.\r\n", "Updating files: 100% (2768/2768), done.\r\n" ] } ], "source": [ "!git clone --branch r1.13.0 --depth 1 https://github.com/tensorflow/models" ] }, { "cell_type": "markdown", "metadata": { "id": "wfHOhbkgvrKr" }, "source": [ "### Read the help\n", "\n", "The script is installed as part of TensorFlow. Here is its built-in help:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:30:52.356205Z", "iopub.status.busy": "2022-12-14T22:30:52.355922Z", "iopub.status.idle": "2022-12-14T22:30:54.778496Z", "shell.execute_reply": "2022-12-14T22:30:54.777380Z" }, "id": "m2GF-tlntqTQ" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2022-12-14 22:30:53.431530: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory\r\n", "2022-12-14 22:30:53.431619: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory\r\n", "2022-12-14 22:30:53.431630: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. 
If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "usage: tf_upgrade_v2 [-h] [--infile INPUT_FILE] [--outfile OUTPUT_FILE]\r\n", " [--intree INPUT_TREE] [--outtree OUTPUT_TREE]\r\n", " [--copyotherfiles COPY_OTHER_FILES] [--inplace]\r\n", " [--no_import_rename] [--no_upgrade_compat_v1_import]\r\n", " [--reportfile REPORT_FILENAME] [--mode {DEFAULT,SAFETY}]\r\n", " [--print_all]\r\n", "\r\n", "Convert a TensorFlow Python file from 1.x to 2.0\r\n", "\r\n", "Simple usage:\r\n", " tf_upgrade_v2.py --infile foo.py --outfile bar.py\r\n", " tf_upgrade_v2.py --infile foo.ipynb --outfile bar.ipynb\r\n", " tf_upgrade_v2.py --intree ~/code/old --outtree ~/code/new\r\n", "\r\n", "optional arguments:\r\n", " -h, --help show this help message and exit\r\n", " --infile INPUT_FILE If converting a single file, the name of the file to\r\n", " convert\r\n", " --outfile OUTPUT_FILE\r\n", " If converting a single file, the output filename.\r\n", " --intree INPUT_TREE If converting a whole tree of files, the directory to\r\n", " read from (relative or absolute).\r\n", " --outtree OUTPUT_TREE\r\n", " If converting a whole tree of files, the output\r\n", " directory (relative or absolute).\r\n", " --copyotherfiles COPY_OTHER_FILES\r\n", " If converting a whole tree of files, whether to copy\r\n", " the other files.\r\n", " --inplace If converting a set of files, whether to allow the\r\n", " conversion to be performed on the input files.\r\n", " --no_import_rename Not to rename import to compat.v2 explicitly.\r\n", " --no_upgrade_compat_v1_import\r\n", " If specified, don't upgrade explicit imports of\r\n", " `tensorflow.compat.v1 as tf` to the v2 APIs.\r\n", " Otherwise, explicit imports of the form\r\n", " `tensorflow.compat.v1 as tf` will be upgraded.\r\n", " --reportfile REPORT_FILENAME\r\n", " The name of the file where the report log 
is\r\n", " stored.(default: report.txt)\r\n", " --mode {DEFAULT,SAFETY}\r\n", " Upgrade script mode. Supported modes: DEFAULT: Perform\r\n", " only straightforward conversions to upgrade to 2.0. In\r\n", " more difficult cases, switch to use compat.v1. SAFETY:\r\n", " Keep 1.* code intact and import compat.v1 module.\r\n", " --print_all Print full log to stdout instead of just printing\r\n", " errors\r\n" ] } ], "source": [ "!tf_upgrade_v2 -h" ] }, { "cell_type": "markdown", "metadata": { "id": "se9Leqjm1CZR" }, "source": [ "### Example TF1 code" ] }, { "cell_type": "markdown", "metadata": { "id": "whD5i36s1SuM" }, "source": [ "Here is a simple TensorFlow 1.0 script:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:30:54.783151Z", "iopub.status.busy": "2022-12-14T22:30:54.782453Z", "iopub.status.idle": "2022-12-14T22:30:54.906270Z", "shell.execute_reply": "2022-12-14T22:30:54.905451Z" }, "id": "mhGbYQ9HwbeU" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " # Calculate loss using mean squared error\r\n", " average_loss = tf.losses.mean_squared_error(labels, predictions)\r\n", "\r\n", " # Pre-made estimators use the total_loss instead of the average,\r\n", " # so report total_loss for compatibility.\r\n", " batch_size = tf.shape(labels)[0]\r\n", " total_loss = tf.to_float(batch_size) * average_loss\r\n", "\r\n", " if mode == tf.estimator.ModeKeys.TRAIN:\r\n", " optimizer = params.get(\"optimizer\", tf.train.AdamOptimizer)\r\n" ] } ], "source": [ "!head -n 65 models/samples/cookbook/regression/custom_regression.py | tail -n 10" ] }, { "cell_type": "markdown", "metadata": { "id": "UGO7xSyL89wX" }, "source": [ "It doesn't run with TensorFlow 2.x installed:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:30:54.910331Z", "iopub.status.busy": "2022-12-14T22:30:54.909694Z", "iopub.status.idle": "2022-12-14T22:30:57.359268Z", 
"shell.execute_reply": "2022-12-14T22:30:57.358342Z" }, "id": "TD7fFphX8_qE" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2022-12-14 22:30:55.991300: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory\r\n", "2022-12-14 22:30:55.991402: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory\r\n", "2022-12-14 22:30:55.991421: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Traceback (most recent call last):\r\n", " File \"/tmpfs/src/temp/site/ja/guide/migrate/models/samples/cookbook/regression/custom_regression.py\", line 162, in <module>\r\n", " tf.logging.set_verbosity(tf.logging.INFO)\r\n", "AttributeError: module 'tensorflow' has no attribute 'logging'\r\n" ] } ], "source": [ "!(cd models/samples/cookbook/regression && python custom_regression.py)" ] }, { "cell_type": "markdown", "metadata": { "id": "iZZHu0H0wLRJ" }, "source": [ "### Single file\n", "\n", "The script can be run on a single Python file:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:30:57.363764Z", "iopub.status.busy": "2022-12-14T22:30:57.363146Z", "iopub.status.idle": "2022-12-14T22:30:59.818262Z", "shell.execute_reply": "2022-12-14T22:30:59.817179Z" }, "id": "xIBZVEjkqkc5" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2022-12-14 22:30:58.444740: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] 
Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory\r\n", "2022-12-14 22:30:58.444827: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory\r\n", "2022-12-14 22:30:58.444838: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO line 38:8: Renamed 'tf.feature_column.input_layer' to 'tf.compat.v1.feature_column.input_layer'\r\n", "INFO line 43:10: Renamed 'tf.layers.dense' to 'tf.compat.v1.layers.dense'\r\n", "INFO line 46:17: Renamed 'tf.layers.dense' to 'tf.compat.v1.layers.dense'\r\n", "INFO line 57:17: tf.losses.mean_squared_error requires manual check. tf.losses have been replaced with object oriented versions in TF 2.0 and after. The loss function calls have been converted to compat.v1 for backward compatibility. Please update these calls to the TF 2.0 versions.\r\n", "INFO line 57:17: Renamed 'tf.losses.mean_squared_error' to 'tf.compat.v1.losses.mean_squared_error'\r\n", "INFO line 61:15: Added keywords to args of function 'tf.shape'\r\n", "INFO line 62:15: Changed tf.to_float call to tf.cast(..., dtype=tf.float32).\r\n", "INFO line 65:40: Renamed 'tf.train.AdamOptimizer' to 'tf.compat.v1.train.AdamOptimizer'\r\n", "INFO line 68:39: Renamed 'tf.train.get_global_step' to 'tf.compat.v1.train.get_global_step'\r\n", "INFO line 83:9: tf.metrics.root_mean_squared_error requires manual check. tf.metrics have been replaced with object oriented versions in TF 2.0 and after. The metric function calls have been converted to compat.v1 for backward compatibility. 
Please update these calls to the TF 2.0 versions.\r\n", "INFO line 83:9: Renamed 'tf.metrics.root_mean_squared_error' to 'tf.compat.v1.metrics.root_mean_squared_error'\r\n", "INFO line 142:23: Renamed 'tf.train.AdamOptimizer' to 'tf.compat.v1.train.AdamOptimizer'\r\n", "INFO line 162:2: Renamed 'tf.logging.set_verbosity' to 'tf.compat.v1.logging.set_verbosity'\r\n", "INFO line 162:27: Renamed 'tf.logging.INFO' to 'tf.compat.v1.logging.INFO'\r\n", "INFO line 163:2: Renamed 'tf.app.run' to 'tf.compat.v1.app.run'\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "TensorFlow 2.0 Upgrade Script\r\n", "-----------------------------\r\n", "Converted 1 files\r\n", "Detected 0 issues that require attention\r\n", "--------------------------------------------------------------------------------\r\n", "\r\n", "\r\n", "Make sure to read the detailed log 'report.txt'\r\n", "\r\n" ] } ], "source": [ "!tf_upgrade_v2 \\\n", " --infile models/samples/cookbook/regression/custom_regression.py \\\n", " --outfile /tmp/custom_regression_v2.py" ] }, { "cell_type": "markdown", "metadata": { "id": "L9X2lxzqqkc9" }, "source": [ "The script will print errors if it cannot find a fix for the code." ] }, { "cell_type": "markdown", "metadata": { "id": "r7zpuE1vWSlL" }, "source": [ "### Directory tree" ] }, { "cell_type": "markdown", "metadata": { "id": "2q7Gtuu8SdIC" }, "source": [ "Typical projects, including this simple example, use more than one file. Since you usually upgrade a whole package, the script can also be run on a directory tree:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:30:59.822863Z", "iopub.status.busy": "2022-12-14T22:30:59.822575Z", "iopub.status.idle": "2022-12-14T22:31:02.354756Z", "shell.execute_reply": "2022-12-14T22:31:02.353887Z" }, "id": "XGqcdkAPqkc-" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2022-12-14 22:31:00.895991: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: 
libnvinfer.so.7: cannot open shared object file: No such file or directory\r\n", "2022-12-14 22:31:00.896084: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory\r\n", "2022-12-14 22:31:00.896094: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "WARNING line 125:15: Changing dataset.make_one_shot_iterator() to tf.compat.v1.data.make_one_shot_iterator(dataset). Please check this transformation.\r\n", "\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO line 58:10: tf.estimator.LinearRegressor: Default value of loss_reduction has been changed to SUM_OVER_BATCH_SIZE; inserting old default value tf.keras.losses.Reduction.SUM.\r\n", "\r\n", "INFO line 101:2: Renamed 'tf.logging.set_verbosity' to 'tf.compat.v1.logging.set_verbosity'\r\n", "INFO line 101:27: Renamed 'tf.logging.INFO' to 'tf.compat.v1.logging.INFO'\r\n", "INFO line 102:2: Renamed 'tf.app.run' to 'tf.compat.v1.app.run'\r\n", "INFO line 82:10: tf.estimator.LinearRegressor: Default value of loss_reduction has been changed to SUM_OVER_BATCH_SIZE; inserting old default value tf.keras.losses.Reduction.SUM.\r\n", "\r\n", "INFO line 105:2: Renamed 'tf.logging.set_verbosity' to 'tf.compat.v1.logging.set_verbosity'\r\n", "INFO line 105:27: Renamed 'tf.logging.INFO' to 'tf.compat.v1.logging.INFO'\r\n", "INFO line 106:2: Renamed 'tf.app.run' to 'tf.compat.v1.app.run'\r\n", "INFO line 72:10: tf.estimator.DNNRegressor: Default value of loss_reduction has been changed to SUM_OVER_BATCH_SIZE; inserting old default value tf.keras.losses.Reduction.SUM.\r\n", "\r\n", "INFO line 
96:2: Renamed 'tf.logging.set_verbosity' to 'tf.compat.v1.logging.set_verbosity'\r\n", "INFO line 96:27: Renamed 'tf.logging.INFO' to 'tf.compat.v1.logging.INFO'\r\n", "INFO line 97:2: Renamed 'tf.app.run' to 'tf.compat.v1.app.run'\r\n", "INFO line 40:7: Renamed 'tf.test.mock' to 'tf.compat.v1.test.mock'\r\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "INFO line 38:8: Renamed 'tf.feature_column.input_layer' to 'tf.compat.v1.feature_column.input_layer'\r\n", "INFO line 43:10: Renamed 'tf.layers.dense' to 'tf.compat.v1.layers.dense'\r\n", "INFO line 46:17: Renamed 'tf.layers.dense' to 'tf.compat.v1.layers.dense'\r\n", "INFO line 57:17: tf.losses.mean_squared_error requires manual check. tf.losses have been replaced with object oriented versions in TF 2.0 and after. The loss function calls have been converted to compat.v1 for backward compatibility. Please update these calls to the TF 2.0 versions.\r\n", "INFO line 57:17: Renamed 'tf.losses.mean_squared_error' to 'tf.compat.v1.losses.mean_squared_error'\r\n", "INFO line 61:15: Added keywords to args of function 'tf.shape'\r\n", "INFO line 62:15: Changed tf.to_float call to tf.cast(..., dtype=tf.float32).\r\n", "INFO line 65:40: Renamed 'tf.train.AdamOptimizer' to 'tf.compat.v1.train.AdamOptimizer'\r\n", "INFO line 68:39: Renamed 'tf.train.get_global_step' to 'tf.compat.v1.train.get_global_step'\r\n", "INFO line 83:9: tf.metrics.root_mean_squared_error requires manual check. tf.metrics have been replaced with object oriented versions in TF 2.0 and after. The metric function calls have been converted to compat.v1 for backward compatibility. 
Please update these calls to the TF 2.0 versions.\r\n", "INFO line 83:9: Renamed 'tf.metrics.root_mean_squared_error' to 'tf.compat.v1.metrics.root_mean_squared_error'\r\n", "INFO line 142:23: Renamed 'tf.train.AdamOptimizer' to 'tf.compat.v1.train.AdamOptimizer'\r\n", "INFO line 162:2: Renamed 'tf.logging.set_verbosity' to 'tf.compat.v1.logging.set_verbosity'\r\n", "INFO line 162:27: Renamed 'tf.logging.INFO' to 'tf.compat.v1.logging.INFO'\r\n", "INFO line 163:2: Renamed 'tf.app.run' to 'tf.compat.v1.app.run'\r\n", "TensorFlow 2.0 Upgrade Script\r\n", "-----------------------------\r\n", "Converted 7 files\r\n", "Detected 1 issues that require attention\r\n", "--------------------------------------------------------------------------------\r\n", "--------------------------------------------------------------------------------\r\n", "File: models/samples/cookbook/regression/automobile_data.py\r\n", "--------------------------------------------------------------------------------\r\n", "models/samples/cookbook/regression/automobile_data.py:125:15: WARNING: Changing dataset.make_one_shot_iterator() to tf.compat.v1.data.make_one_shot_iterator(dataset). 
Please check this transformation.\r\n", "\r\n", "\r\n", "\r\n", "Make sure to read the detailed log 'tree_report.txt'\r\n", "\r\n" ] } ], "source": [ "# update the .py files and copy all the other files to the outtree\n", "!tf_upgrade_v2 \\\n", " --intree models/samples/cookbook/regression/ \\\n", " --outtree regression_v2/ \\\n", " --reportfile tree_report.txt" ] }, { "cell_type": "markdown", "metadata": { "id": "2S4j7sqbSowC" }, "source": [ "Note the one warning about the dataset.make_one_shot_iterator function.\n", "\n", "Now the script works with TensorFlow 2.x:\n", "\n", "Note that because the `tf.compat.v1` module is included in TF 1.15, the converted script will also run in TensorFlow 1.15." ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:31:02.359441Z", "iopub.status.busy": "2022-12-14T22:31:02.358882Z", "iopub.status.idle": "2022-12-14T22:31:13.113265Z", "shell.execute_reply": "2022-12-14T22:31:13.112441Z" }, "id": "vh0cmW3y1tX9" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "I1214 22:31:12.248600 140710217746240 estimator.py:2083] Saving dict for global step 1000: global_step = 1000, loss = 334.75287, rmse = 2.6408365\r\n", "INFO:tensorflow:Saving 'checkpoint_path' summary for global step 1000: /tmpfs/tmp/tmpdkgyl49e/model.ckpt-1000\r\n", "I1214 22:31:12.289374 140710217746240 estimator.py:2143] Saving 'checkpoint_path' summary for global step 1000: /tmpfs/tmp/tmpdkgyl49e/model.ckpt-1000\r\n", "Tensor(\"IteratorGetNext:25\", shape=(None,), dtype=float64, device=/device:CPU:0)\r\n", "Tensor(\"Squeeze:0\", shape=(None,), dtype=float32)\r\n", "\r\n", "********************************************************************************\r\n", "\r\n", "RMS error for the test set: $2641\r\n", "\r\n" ] } ], "source": [ "!(cd regression_v2 && python custom_regression.py 2>&1) | tail" ] }, { "cell_type": "markdown", "metadata": { "id": "4EgZGGkdqkdC" }, "source": [ "## Detailed report\n", "\n", "The script also reports a list of detailed changes. In this example it found one possibly unsafe transformation and included a warning at the top of the file:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:31:13.117580Z", "iopub.status.busy": "2022-12-14T22:31:13.116823Z", "iopub.status.idle": "2022-12-14T22:31:13.239280Z", "shell.execute_reply": "2022-12-14T22:31:13.238480Z" }, "id": "CtHaZbVaNMGV" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "TensorFlow 2.0 Upgrade Script\r\n", "-----------------------------\r\n", "Converted 7 files\r\n", "Detected 1 issues that require attention\r\n", "--------------------------------------------------------------------------------\r\n", "--------------------------------------------------------------------------------\r\n", "File: models/samples/cookbook/regression/automobile_data.py\r\n", "--------------------------------------------------------------------------------\r\n", "models/samples/cookbook/regression/automobile_data.py:125:15: WARNING: Changing dataset.make_one_shot_iterator() to tf.compat.v1.data.make_one_shot_iterator(dataset). 
Please check this transformation.\r\n", "\r\n", "================================================================================\r\n", "Detailed log follows:\r\n", "\r\n", "================================================================================\r\n", "================================================================================\r\n", "Input tree: 'models/samples/cookbook/regression/'\r\n", "================================================================================\r\n", "--------------------------------------------------------------------------------\r\n", "Processing file 'models/samples/cookbook/regression/automobile_data.py'\r\n", " outputting to 'regression_v2/automobile_data.py'\r\n" ] } ], "source": [ "!head -n 20 tree_report.txt" ] }, { "cell_type": "markdown", "metadata": { "id": "1-UIFXP3cFSa" }, "source": [ "Note, once again, the single warning about the Dataset.make_one_shot_iterator function." ] }, { "cell_type": "markdown", "metadata": { "id": "oxQeYS1TN-jv" }, "source": [ "In other cases, the output will explain the reasoning behind non-trivial changes:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:31:13.243105Z", "iopub.status.busy": "2022-12-14T22:31:13.242472Z", "iopub.status.idle": "2022-12-14T22:31:13.247818Z", "shell.execute_reply": "2022-12-14T22:31:13.247185Z" }, "id": "WQs9kEvVN9th" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Writing dropout.py\n" ] } ], "source": [ "%%writefile dropout.py\n", "import tensorflow as tf\n", "\n", "d = tf.nn.dropout(tf.range(10), 0.2)\n", "z = tf.zeros_like(d, optimize=False)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:31:13.250914Z", "iopub.status.busy": "2022-12-14T22:31:13.250330Z", "iopub.status.idle": "2022-12-14T22:31:15.662660Z", "shell.execute_reply": "2022-12-14T22:31:15.661805Z" }, "id": "7uOkacZsO3XX" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ 
"2022-12-14 22:31:14.325887: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory\r\n", "2022-12-14 22:31:14.325974: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory\r\n", "2022-12-14 22:31:14.325985: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\r\n" ] } ], "source": [ "!tf_upgrade_v2 \\\n", " --infile dropout.py \\\n", " --outfile dropout_v2.py \\\n", " --reportfile dropout_report.txt > /dev/null" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:31:15.666698Z", "iopub.status.busy": "2022-12-14T22:31:15.666433Z", "iopub.status.idle": "2022-12-14T22:31:15.788765Z", "shell.execute_reply": "2022-12-14T22:31:15.787969Z" }, "id": "m-J82-scPMGl" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "TensorFlow 2.0 Upgrade Script\r\n", "-----------------------------\r\n", "Converted 1 files\r\n", "Detected 0 issues that require attention\r\n", "--------------------------------------------------------------------------------\r\n", "================================================================================\r\n", "Detailed log follows:\r\n", "\r\n", "================================================================================\r\n", "--------------------------------------------------------------------------------\r\n", "Processing file 'dropout.py'\r\n", " outputting to 'dropout_v2.py'\r\n", 
"--------------------------------------------------------------------------------\r\n", "\r\n", "3:4: INFO: Changing keep_prob arg of tf.nn.dropout to rate, and recomputing value.\r\n", "\r\n", "4:4: INFO: Renaming tf.zeros_like to tf.compat.v1.zeros_like because argument optimize is present. tf.zeros_like no longer takes an optimize argument, and behaves as if optimize=True. This call site specifies something other than optimize=True, so it was converted to compat.v1.\r\n", "--------------------------------------------------------------------------------\r\n", "\r\n" ] } ], "source": [ "!cat dropout_report.txt" ] }, { "cell_type": "markdown", "metadata": { "id": "DOOLN21nTGSS" }, "source": [ "Here are the modified file contents. Note how the script adds argument names and handles moved and renamed arguments:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:31:15.792534Z", "iopub.status.busy": "2022-12-14T22:31:15.791973Z", "iopub.status.idle": "2022-12-14T22:31:15.912340Z", "shell.execute_reply": "2022-12-14T22:31:15.911636Z" }, "id": "SrYcJk9-TFlU" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "import tensorflow as tf\r\n", "\r\n", "d = tf.nn.dropout(tf.range(10), rate=1 - (0.2))\r\n", "z = tf.compat.v1.zeros_like(d, optimize=False)\r\n" ] } ], "source": [ "!cat dropout_v2.py" ] }, { "cell_type": "markdown", "metadata": { "id": "wI_sVNp_b4C4" }, "source": [ "Larger projects may produce some errors. For example, converting the deeplab model:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:31:15.916198Z", "iopub.status.busy": "2022-12-14T22:31:15.915672Z", "iopub.status.idle": "2022-12-14T22:31:19.832609Z", "shell.execute_reply": "2022-12-14T22:31:19.831696Z" }, "id": "uzuY-bOvYBS7" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2022-12-14 22:31:16.994849: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load 
dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory\r\n", "2022-12-14 22:31:16.994943: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory\r\n", "2022-12-14 22:31:16.994954: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\r\n" ] } ], "source": [ "!tf_upgrade_v2 \\\n", " --intree models/research/deeplab \\\n", " --outtree deeplab_v2 \\\n", " --reportfile deeplab_report.txt > /dev/null" ] }, { "cell_type": "markdown", "metadata": { "id": "FLhw3fm8drae" }, "source": [ "It produced the following output files:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:31:19.837054Z", "iopub.status.busy": "2022-12-14T22:31:19.836402Z", "iopub.status.idle": "2022-12-14T22:31:19.958434Z", "shell.execute_reply": "2022-12-14T22:31:19.957671Z" }, "id": "4YYLRxWJdSvQ" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "README.md\tdatasets\t input_preprocess.py train.py\r\n", "__init__.py\tdeeplab_demo.ipynb local_test.sh\t utils\r\n", "common.py\teval.py\t\t local_test_mobilenetv2.sh vis.py\r\n", "common_test.py\texport_model.py model.py\r\n", "core\t\tg3doc\t\t model_test.py\r\n" ] } ], "source": [ "!ls deeplab_v2" ] }, { "cell_type": "markdown", "metadata": { "id": "qtTC-cAZdEBy" }, "source": [ "But there were errors. The report helps you pinpoint exactly what you need to fix before the code will run. Here are the first three errors:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:31:19.962128Z", "iopub.status.busy": "2022-12-14T22:31:19.961580Z", "iopub.status.idle": 
"2022-12-14T22:31:20.083023Z", "shell.execute_reply": "2022-12-14T22:31:20.082296Z" }, "id": "UVTNOohlcyVZ" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "models/research/deeplab/vis.py:31:7: ERROR: Using member tf.contrib.slim in deprecated module tf.contrib. tf.contrib.slim cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.\r\n", "models/research/deeplab/train.py:29:7: ERROR: Using member tf.contrib.slim in deprecated module tf.contrib. tf.contrib.slim cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.\r\n", "models/research/deeplab/model.py:60:7: ERROR: Using member tf.contrib.slim in deprecated module tf.contrib. tf.contrib.slim cannot be converted automatically. 
tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.\r\n" ] } ], "source": [ "!cat deeplab_report.txt | grep -i models/research/deeplab | grep -i error | head -n 3" ] }, { "cell_type": "markdown", "metadata": { "id": "gGBeDaFVRJ5l" }, "source": [ "## \"Safety\" mode" ] }, { "cell_type": "markdown", "metadata": { "id": "BnfCxB7SVtTO" }, "source": [ "The conversion script also has a less invasive SAFETY mode that simply changes the imports to use the tensorflow.compat.v1 module:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:31:20.086896Z", "iopub.status.busy": "2022-12-14T22:31:20.086251Z", "iopub.status.idle": "2022-12-14T22:31:20.206006Z", "shell.execute_reply": "2022-12-14T22:31:20.205295Z" }, "id": "XdaVXCPWQCC5" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "import tensorflow as tf\r\n", "\r\n", "d = tf.nn.dropout(tf.range(10), 0.2)\r\n", "z = tf.zeros_like(d, optimize=False)\r\n" ] } ], "source": [ "!cat dropout.py" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:31:20.209570Z", "iopub.status.busy": "2022-12-14T22:31:20.209097Z", "iopub.status.idle": "2022-12-14T22:31:22.639906Z", "shell.execute_reply": "2022-12-14T22:31:22.638804Z" }, "id": "c0tvRJLGRYEb" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2022-12-14 22:31:21.280478: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory\r\n", "2022-12-14 22:31:21.280568: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such 
file or directory\r\n", "2022-12-14 22:31:21.280578: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\r\n" ] } ], "source": [ "!tf_upgrade_v2 --mode SAFETY --infile dropout.py --outfile dropout_v2_safe.py > /dev/null" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "execution": { "iopub.execute_input": "2022-12-14T22:31:22.643872Z", "iopub.status.busy": "2022-12-14T22:31:22.643604Z", "iopub.status.idle": "2022-12-14T22:31:22.765892Z", "shell.execute_reply": "2022-12-14T22:31:22.765157Z" }, "id": "91suN2RaRfIV" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "import tensorflow.compat.v1 as tf\r\n", "\r\n", "d = tf.nn.dropout(tf.range(10), 0.2)\r\n", "z = tf.zeros_like(d, optimize=False)\r\n" ] } ], "source": [ "!cat dropout_v2_safe.py" ] }, { "cell_type": "markdown", "metadata": { "id": "EOzTF7xbZqqW" }, "source": [ "As you can see, this doesn't upgrade your code, but it does allow TensorFlow 1 code to run against TensorFlow 2 binaries. Note that this does not mean your code is running supported TF 2.x behaviors!" ] }, { "cell_type": "markdown", "metadata": { "id": "jGfXVApkqkdG" }, "source": [ "## Caveats\n", "\n", "- Do not update parts of your code manually before running this script. In particular, functions that have had their arguments reordered, such as `tf.math.argmax` or `tf.batch_to_space`, would otherwise cause the script to incorrectly add keyword arguments that mismap your existing code.\n", "\n", "- The script assumes that `tensorflow` is imported using `import tensorflow as tf` or `import tensorflow.compat.v1 as tf`.\n", "\n", "- This script does not reorder arguments. Instead, it adds keyword arguments to functions whose arguments have been reordered.\n", "\n", "- Check out [tf2up.ml](https://github.com/lc0/tf2up) for a convenient tool to upgrade Jupyter notebooks and Python files in a GitHub repository.\n", "\n", "To report upgrade script bugs or make feature requests, please file an issue on [GitHub](https://github.com/tensorflow/tensorflow/issues)." ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "upgrade.ipynb", "toc_visible": true 
}, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.16" } }, "nbformat": 4, "nbformat_minor": 0 }