Multilingual Universal Sentence Encoder Q&A Retrieval


This is a demo for using the Universal Sentence Encoder Multilingual Q&A model for question-answer retrieval of text, illustrating the use of the model's question_encoder and response_encoder. We use sentences from SQuAD paragraphs as the demo dataset: each sentence and its context (the text surrounding the sentence) is encoded into a high-dimensional embedding with the response_encoder. These embeddings are stored in an index built with the simpleneighbors library for question-answer retrieval.

At retrieval time, a random question is selected from the SQuAD dataset and encoded into a high-dimensional embedding with the question_encoder. This embedding is then used to query the simpleneighbors index, which returns a list of approximate nearest neighbors in semantic space.

More models

You can find all currently hosted text embedding models here, and all models that have been trained on SQuAD here.


Setup Environment

# Install dependencies: TensorFlow Text, simpleneighbors (with annoy), nltk, and tqdm.
!pip install -q tensorflow_text
!pip install -q simpleneighbors[annoy]
!pip install -q nltk
!pip install -q tqdm

Setup common imports and functions
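The setup cell itself is hidden here; a minimal sketch of the imports and the download helper used by the cells below (the helper names match the later calls, but the bodies are assumptions) could look like:

```python
import json
import random
import urllib.request
from pprint import pprint

import nltk

# Sentence tokenizer data used to split paragraphs into sentences.
nltk.download('punkt')

def download_squad(url):
    # Fetch the SQuAD JSON file and parse it into a Python dict.
    return json.load(urllib.request.urlopen(url))
```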

[nltk_data] Downloading package punkt to /home/kbuilder/nltk_data...
[nltk_data]   Unzipping tokenizers/

Run the following code block to download and extract the SQuAD dataset into:

  • sentences is a list of (text, context) tuples - each paragraph from the SQuAD dataset is split into sentences using the nltk library, and the sentence and paragraph text form the (text, context) tuple.
  • questions is a list of (question, answer) tuples.
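The extraction helpers called in the next cell are defined in the hidden setup cell. A sketch of what they do, using a naive regex splitter in place of nltk's sent_tokenize for brevity (the real code uses nltk), might be:

```python
import re

def extract_sentences_from_squad_json(squad):
    # Build (sentence, paragraph context) tuples from every paragraph.
    all_sentences = []
    for data in squad['data']:
        for paragraph in data['paragraphs']:
            context = paragraph['context']
            # Stand-in for nltk.tokenize.sent_tokenize(context).
            sentences = re.split(r'(?<=[.!?])\s+', context)
            all_sentences.extend((sentence, context) for sentence in sentences)
    return list(set(all_sentences))  # deduplicate

def extract_questions_from_squad_json(squad):
    # Build (question, first answer text) tuples.
    questions = []
    for data in squad['data']:
        for paragraph in data['paragraphs']:
            for qas in paragraph['qas']:
                if qas['answers']:
                    questions.append((qas['question'], qas['answers'][0]['text']))
    return list(set(questions))
```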

Download and extract SQuAD data

squad_url = ''

squad_json = download_squad(squad_url)
sentences = extract_sentences_from_squad_json(squad_json)
questions = extract_questions_from_squad_json(squad_json)
print("%s sentences, %s questions extracted from SQuAD %s" % (len(sentences), len(questions), squad_url))

print("\nExample sentence and context:\n")
sentence = random.choice(sentences)
pprint.pprint(sentence[0])  # the sentence
pprint.pprint(sentence[1])  # its surrounding paragraph (context)
10455 sentences, 10552 questions extracted from SQuAD

Example sentence and context:


('The Reverend Alexander Dyce was another benefactor of the library, leaving '
 'over 14,000 books to the museum in 1869.')


('One of the great treasures in the library is the Codex Forster, some of '
 "Leonardo da Vinci's note books. The Codex consists of three parchment-bound "
 'manuscripts, Forster I, Forster II, and Forster III, quite small in size, '
 'dated between 1490 and 1505. Their contents include a large collection of '
 'sketches and references to the equestrian sculpture commissioned by the Duke '
 'of Milan Ludovico Sforza to commemorate his father Francesco Sforza. These '
 'were bequeathed with over 18,000 books to the museum in 1876 by John '
 'Forster. The Reverend Alexander Dyce was another benefactor of the library, '
 'leaving over 14,000 books to the museum in 1869. Amongst the books he '
 'collected are early editions in Greek and Latin of the poets and playwrights '
 'Aeschylus, Aristotle, Homer, Livy, Ovid, Pindar, Sophocles and Virgil. More '
 'recent authors include Giovanni Boccaccio, Dante, Racine, Rabelais and '

The following code block sets up the TensorFlow graph g and session with the Universal Sentence Encoder Multilingual Q&A model's question_encoder and response_encoder signatures.

Load model from tensorflow hub

The following code block computes the embeddings for all the (text, context) tuples with the response_encoder and stores them in a simpleneighbors index.

Compute embeddings and build simpleneighbors index

Computing embeddings for 10455 sentences


simpleneighbors index for 10455 sentences built.

On retrieval, the question is encoded using the question_encoder and the question embedding is used to query the simpleneighbors index.
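Conceptually, this query is just a nearest-neighbor search under the angular (cosine) metric. A toy numpy version with made-up vectors, standing in for the approximate annoy-backed index, shows the mechanics:

```python
import numpy as np

def nearest(query_embedding, response_embeddings, n=3):
    # Cosine similarity against every indexed response embedding;
    # the real index approximates this search with annoy for speed.
    q = query_embedding / np.linalg.norm(query_embedding)
    r = response_embeddings / np.linalg.norm(response_embeddings, axis=1, keepdims=True)
    scores = r @ q
    return np.argsort(-scores)[:n]

rng = np.random.default_rng(0)
responses = rng.normal(size=(100, 16))
query = responses[42] + 0.01 * rng.normal(size=16)  # a query close to item 42
top = nearest(query, responses)
```

Since the query vector is a slightly perturbed copy of item 42, that item comes back as the top result.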

Retrieve nearest neighbors for a random question from SQuAD

num_results = 25

query = random.choice(questions)
display_nearest_neighbors(query[0], query[1])