{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "CGyzr0tfeUTQ" }, "source": [ "**Copyright 2021 The TensorFlow Hub Authors.**\n", "\n", "Licensed under the Apache License, Version 2.0 (the \"License\");" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zV1OQAGReaGQ" }, "outputs": [], "source": [ "# Copyright 2021 The TensorFlow Hub Authors. All Rights Reserved.\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# http://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License.\n", "# ==============================================================================" ] }, { "cell_type": "markdown", "metadata": { "id": "L5bsDhkRfTpq" }, "source": [ "\n", " \n", " \n", " \n", " \n", " \n", "
\n", " View on TensorFlow.org\n", " \n", " Run in Google Colab\n", " \n", " View on GitHub\n", " \n", " Download notebook\n", " \n", " See TF Hub model\n", "
" ] }, { "cell_type": "markdown", "metadata": { "id": "owWqOcw1e-RZ" }, "source": [ "# Universal Sentence Encoder SentEval demo\n", "This colab demostrates the [Universal Sentence Encoder CMLM model](https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1) using the [SentEval](https://github.com/facebookresearch/SentEval) toolkit, which is a library for measuring the quality of sentence embeddings. The SentEval toolkit includes a diverse set of downstream tasks that are able to evaluate the generalization power of an embedding model and to evaluate the linguistic properties encoded.\n", "\n", "Run the first two code blocks to setup the environment, in the third code block you can pick a SentEval task to evaluate the model. A GPU runtime is recommended to run this Colab.\n", "\n", "To learn more about the Universal Sentence Encoder CMLM model, see https://openreview.net/forum?id=WDVD4lUCTzU." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "-CerULCLsjzV" }, "outputs": [], "source": [ "#@title Install dependencies\n", "!pip install --quiet \"tensorflow-text==2.11.*\"\n", "!pip install --quiet torch==1.8.1" ] }, { "cell_type": "markdown", "metadata": { "id": "LjqkqD6aiZGU" }, "source": [ "## Download SentEval and task data\n", "This step download SentEval from github and execute the data script to download the task data. It may take up to 5 minutes to complete." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "3UwhHQiKJmSc" }, "outputs": [], "source": [ "#@title Install SentEval and download task data\n", "!rm -rf ./SentEval\n", "!git clone https://github.com/facebookresearch/SentEval.git\n", "!cd $PWD/SentEval/data/downstream && bash get_transfer_data.bash > /dev/null 2>&1" ] }, { "cell_type": "markdown", "metadata": { "id": "7a2ohPn8vMe2" }, "source": [ "#Execute a SentEval evaluation task\n", "The following code block executes a SentEval task and output the results, choose one of the following tasks to evaluate the USE CMLM model:\n", "\n", "```\n", "MR\tCR\tSUBJ\tMPQA\tSST\tTREC\tMRPC\tSICK-E\n", "```\n", "\n", "Select a model, params and task to run. 
"It typically takes 5-15 minutes to complete a task with the **'rapid prototyping'** params and up to an hour with the **'slower, best performance'** params.\n", "\n", "```\n", "params = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 5}\n", "params['classifier'] = {'nhid': 0, 'optim': 'rmsprop', 'batch_size': 128,\n", "                        'tenacity': 3, 'epoch_size': 2}\n", "```\n", "\n", "For better results, use the **'slower, best performance'** params; computation may take up to 1 hour:\n", "\n", "```\n", "params = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 10}\n", "params['classifier'] = {'nhid': 0, 'optim': 'adam', 'batch_size': 16,\n", "                        'tenacity': 5, 'epoch_size': 6}\n", "```\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "nenCcawjwowt" }, "outputs": [], "source": [ "import os\n", "os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'\n", "\n", "import sys\n", "sys.path.append(f'{os.getcwd()}/SentEval')\n", "\n", "import tensorflow as tf\n", "\n", "# Prevent TF from claiming all GPU memory so there is some left for pytorch.\n", "gpus = tf.config.list_physical_devices('GPU')\n", "if gpus:\n", "  # Memory growth needs to be the same across GPUs.\n", "  for gpu in gpus:\n", "    tf.config.experimental.set_memory_growth(gpu, True)\n", "\n", "import tensorflow_hub as hub\n", "import tensorflow_text\n", "import senteval\n", "import time\n", "\n", "PATH_TO_DATA = f'{os.getcwd()}/SentEval/data'\n", "MODEL = 'https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1' #@param ['https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-base/1', 'https://tfhub.dev/google/universal-sentence-encoder-cmlm/en-large/1']\n", "PARAMS = 'rapid prototyping' #@param ['slower, best performance', 'rapid prototyping']\n", "TASK = 'CR' #@param ['CR','MR', 'MPQA', 'MRPC', 'SICKEntailment', 'SNLI', 'SST2', 'SUBJ', 'TREC']\n", "\n", "params_prototyping = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 5}\n", "params_prototyping['classifier'] = {'nhid': 0, 'optim': 'rmsprop', 'batch_size': 128,\n", "                                    'tenacity': 3, 'epoch_size': 2}\n", "\n", "params_best = {'task_path': PATH_TO_DATA, 'usepytorch': True, 'kfold': 10}\n", "params_best['classifier'] = {'nhid': 0, 'optim': 'adam', 'batch_size': 16,\n", "                             'tenacity': 5, 'epoch_size': 6}\n", "\n", "params = params_best if PARAMS == 'slower, best performance' else params_prototyping\n", "\n", "preprocessor = hub.KerasLayer(\n", "    \"https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3\")\n", "# Use the encoder selected in the MODEL dropdown above.\n", "encoder = hub.KerasLayer(MODEL)\n", "\n", "inputs = tf.keras.Input(shape=tf.shape(''), dtype=tf.string)\n", "outputs = encoder(preprocessor(inputs))\n", "\n", "model = tf.keras.Model(inputs=inputs, outputs=outputs)\n", "\n", "def prepare(params, samples):\n", "  # No task-specific preparation is needed for this model.\n", "  return\n", "\n", "def batcher(_, batch):\n", "  # SentEval passes each sentence as a list of tokens; join them back into a string.\n", "  batch = [' '.join(sent) if sent else '.' for sent in batch]\n", "  return model.predict(tf.constant(batch))[\"default\"]\n", "\n", "\n", "se = senteval.engine.SE(params, batcher, prepare)\n", "print(\"Evaluating task %s with %s parameters\" % (TASK, PARAMS))\n", "start = time.time()\n", "results = se.eval(TASK)\n", "end = time.time()\n", "print('Time taken on task %s: %.1f seconds' % (TASK, end - start))\n", 
"print(results)\n" ] }, { "cell_type": "markdown", "metadata": { "id": "SNvsY6Hsvs0_" }, "source": [ "# Learn More\n", "\n", "* Find more text embedding models on [TensorFlow Hub](https://tfhub.dev)\n", "* See also the [Multilingual Universal Sentence Encoder CMLM model](https://tfhub.dev/google/universal-sentence-encoder-cmlm/multilingual-base-br/1)\n", "* Check out other [Universal Sentence Encoder models](https://tfhub.dev/google/collections/universal-sentence-encoder/1)\n", "\n", "## Reference\n", "\n", "* Ziyi Yang, Yinfei Yang, Daniel Cer, Jax Law, Eric Darve. [Universal Sentence Representations Learning with Conditional Masked Language Model. November 2020](https://openreview.net/forum?id=WDVD4lUCTzU)\n" ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [], "name": "senteval_for_universal_sentence_encoder_cmlm.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }