Cross-Lingual Similarity and Semantic Search Engine with Multilingual Universal Sentence Encoder


This notebook illustrates how to access the Multilingual Universal Sentence Encoder module and use it for sentence similarity across multiple languages. This module is an extension of the original Universal Sentence Encoder module.

The notebook is divided as follows:

  • The first section visualizes the similarity of sentences between pairs of languages. This is a more academic exercise.
  • In the second section, we show how to build a semantic search engine from a sample of a multilingual corpus.


Research papers that make use of the models explored in this colab should cite:

Multilingual Universal Sentence Encoder for Semantic Retrieval

Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2019. arXiv preprint arXiv:1907.04307


This section sets up the environment for access to the Multilingual Universal Sentence Encoder Module and also prepares a set of English sentences and their translations. In the following sections, the multilingual module will be used to compute similarity across languages.

Setup Environment

# Install TensorFlow Text and the other dependencies used in this notebook.
!pip install tensorflow_text
!pip install bokeh
!pip install simpleneighbors[annoy]
!pip install tqdm

Setup common imports and functions

import bokeh
import bokeh.models
import bokeh.plotting
import numpy as np
import os
import pandas as pd
import tensorflow.compat.v2 as tf
import tensorflow_hub as hub
from tensorflow_text import SentencepieceTokenizer
import sklearn.metrics.pairwise

from simpleneighbors import SimpleNeighbors
from tqdm import tqdm
from tqdm import trange

def visualize_similarity(embeddings_1, embeddings_2, labels_1, labels_2,
                         plot_title,
                         plot_width=1200, plot_height=600,
                         xaxis_font_size='12pt', yaxis_font_size='12pt'):

  assert len(embeddings_1) == len(labels_1)
  assert len(embeddings_2) == len(labels_2)

  # arccos based text similarity (Yang et al. 2019; Cer et al. 2019)
  sim = 1 - np.arccos(
      sklearn.metrics.pairwise.cosine_similarity(embeddings_1,
                                                 embeddings_2)) / np.pi

  # Flatten the similarity matrix into one (label_1, label_2, sim) row per pair.
  embeddings_1_col, embeddings_2_col, sim_col = [], [], []
  for i in range(len(embeddings_1)):
    for j in range(len(embeddings_2)):
      embeddings_1_col.append(labels_1[i])
      embeddings_2_col.append(labels_2[j])
      sim_col.append(sim[i][j])
  df = pd.DataFrame(zip(embeddings_1_col, embeddings_2_col, sim_col),
                    columns=['embeddings_1', 'embeddings_2', 'sim'])

  mapper = bokeh.models.LinearColorMapper(
      palette=[*reversed(bokeh.palettes.YlOrRd[9])], low=df.sim.min(),
      high=df.sim.max())

  p = bokeh.plotting.figure(title=plot_title, x_range=labels_1,
                            y_range=labels_2,
                            plot_width=plot_width, plot_height=plot_height,
                            tools="save", toolbar_location='below', tooltips=[
                                ('pair', '@embeddings_1 ||| @embeddings_2'),
                                ('sim', '@sim')])
  p.rect(x="embeddings_1", y="embeddings_2", width=1, height=1, source=df,
         fill_color={'field': 'sim', 'transform': mapper}, line_color=None)

  p.title.text_font_size = '12pt'
  p.axis.axis_line_color = None
  p.axis.major_tick_line_color = None
  p.axis.major_label_standoff = 16
  p.xaxis.major_label_text_font_size = xaxis_font_size
  p.xaxis.major_label_orientation = 0.25 * np.pi
  p.yaxis.major_label_text_font_size = yaxis_font_size
  p.min_border_right = 300

  bokeh.plotting.show(p)

This is additional boilerplate code where we load the pre-trained ML model we will use to encode text throughout this notebook.

# The 16-language multilingual module is the default but feel free
# to pick others from the list and compare the results.
module_url = ''

model = hub.load(module_url)

def embed_text(input_text):
  return model(input_text)

Visualize Text Similarity Between Languages

With the sentence embeddings now in hand, we can visualize semantic similarity across different languages.

Computing Text Embeddings

We first define a set of sentences translated to various languages in parallel. Then, we precompute the embeddings for all of our sentences.

# Some texts of different lengths in different languages.
arabic_sentences = ['كلب', 'الجراء لطيفة.', 'أستمتع بالمشي لمسافات طويلة على طول الشاطئ مع كلبي.']
chinese_sentences = ['狗', '小狗很好。', '我喜欢和我的狗一起沿着海滩散步。']
english_sentences = ['dog', 'Puppies are nice.', 'I enjoy taking long walks along the beach with my dog.']
french_sentences = ['chien', 'Les chiots sont gentils.', 'J\'aime faire de longues promenades sur la plage avec mon chien.']
german_sentences = ['Hund', 'Welpen sind nett.', 'Ich genieße lange Spaziergänge am Strand entlang mit meinem Hund.']
italian_sentences = ['cane', 'I cuccioli sono carini.', 'Mi piace fare lunghe passeggiate lungo la spiaggia con il mio cane.']
japanese_sentences = ['犬', '子犬はいいです', '私は犬と一緒にビーチを散歩するのが好きです']
korean_sentences = ['개', '강아지가 좋다.', '나는 나의 산책을 해변을 따라 길게 산책하는 것을 즐긴다.']
russian_sentences = ['собака', 'Милые щенки.', 'Мне нравится подолгу гулять по пляжу со своей собакой.']
spanish_sentences = ['perro', 'Los cachorros son agradables.', 'Disfruto de dar largos paseos por la playa con mi perro.']

# Multilingual example
multilingual_example = ["Willkommen zu einfachen, aber", "verrassend krachtige", "multilingüe", "compréhension du langage naturel", "модели.", "大家是什么意思" , "보다 중요한", ".اللغة التي يتحدثونها"]
multilingual_example_in_en =  ["Welcome to simple yet", "surprisingly powerful", "multilingual", "natural language understanding", "models.", "What people mean", "matters more than", "the language they speak."]
# Compute embeddings.
ar_result = embed_text(arabic_sentences)
en_result = embed_text(english_sentences)
es_result = embed_text(spanish_sentences)
de_result = embed_text(german_sentences)
fr_result = embed_text(french_sentences)
it_result = embed_text(italian_sentences)
ja_result = embed_text(japanese_sentences)
ko_result = embed_text(korean_sentences)
ru_result = embed_text(russian_sentences)
zh_result = embed_text(chinese_sentences)

multilingual_result = embed_text(multilingual_example)
multilingual_in_en_result = embed_text(multilingual_example_in_en)

Visualizing Similarity

With text embeddings in hand, we can compare them with the arccos-based similarity defined above to visualize how similar sentences are between languages. A darker color indicates that the embeddings are more semantically similar.
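The arccos-based similarity used in `visualize_similarity` can be checked on toy vectors. Below is a minimal NumPy sketch, with two-dimensional vectors standing in for the model's 512-dimensional embeddings; the helper name is illustrative:

```python
import numpy as np

def angular_similarity(a, b):
    # Map cosine similarity into [0, 1] via 1 - arccos(cos_sim) / pi.
    # Identical directions score 1.0, opposite directions score 0.0.
    cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1 - np.arccos(np.clip(cos_sim, -1.0, 1.0)) / np.pi

v = np.array([1.0, 0.0])
print(angular_similarity(v, v))                      # identical -> 1.0
print(angular_similarity(v, np.array([0.0, 1.0])))   # orthogonal -> 0.5
print(angular_similarity(v, np.array([-1.0, 0.0])))  # opposite -> 0.0
```

Unlike raw cosine similarity, this angular form distinguishes nearly-parallel vectors more evenly, which is why the papers cited above prefer it for text similarity.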

Multilingual Similarity

visualize_similarity(multilingual_in_en_result, multilingual_result,
                     multilingual_example_in_en, multilingual_example,  "Multilingual Universal Sentence Encoder for Semantic Retrieval (Yang et al., 2019)")

English-Arabic Similarity

visualize_similarity(en_result, ar_result, english_sentences, arabic_sentences, 'English-Arabic Similarity')

English-Russian Similarity

visualize_similarity(en_result, ru_result, english_sentences, russian_sentences, 'English-Russian Similarity')

English-Spanish Similarity

visualize_similarity(en_result, es_result, english_sentences, spanish_sentences, 'English-Spanish Similarity')

English-Italian Similarity

visualize_similarity(en_result, it_result, english_sentences, italian_sentences, 'English-Italian Similarity')

Italian-Spanish Similarity

visualize_similarity(it_result, es_result, italian_sentences, spanish_sentences, 'Italian-Spanish Similarity')

English-Chinese Similarity

visualize_similarity(en_result, zh_result, english_sentences, chinese_sentences, 'English-Chinese Similarity')

English-Korean Similarity

visualize_similarity(en_result, ko_result, english_sentences, korean_sentences, 'English-Korean Similarity')

Chinese-Korean Similarity

visualize_similarity(zh_result, ko_result, chinese_sentences, korean_sentences, 'Chinese-Korean Similarity')

And more...

The above examples can be extended to any language pair from English, Arabic, Chinese, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Thai and Turkish. Happy coding!

Creating a Multilingual Semantic-Similarity Search Engine

Whereas in the previous example we visualized a handful of sentences, in this section we will build a semantic-search index over news sentences in five languages (Arabic, Chinese, English, Russian, and Spanish) to demonstrate the multilingual capabilities of the Universal Sentence Encoder.

Download Data to Index

First, we will download news sentences in multiple languages from the News Commentary Corpus [1]. Without loss of generality, this approach should also work for indexing the rest of the supported languages.

To speed up the demo, we limit to 1000 sentences per language.

corpus_metadata = [
    ('ar', '', '', 'Arabic'),
    ('zh', '', 'News-Commentary.en-zh.zh', 'Chinese'),
    ('en', '', 'News-Commentary.en-es.en', 'English'),
    ('ru', '', '', 'Russian'),
    ('es', '', '', 'Spanish'),
]

language_to_sentences = {}
language_to_news_path = {}
for language_code, zip_file, news_file, language_name in corpus_metadata:
  zip_path = tf.keras.utils.get_file(
      origin='' + zip_file,
      extract=True)
  news_path = os.path.join(os.path.dirname(zip_path), news_file)
  language_to_sentences[language_code] = pd.read_csv(news_path, sep='\t', header=None)[0][:1000]
  language_to_news_path[language_code] = news_path

  print('{:,} {} sentences'.format(len(language_to_sentences[language_code]), language_name))
Downloading data from
24715264/24714354 [==============================] - 2s 0us/step
1,000 Arabic sentences
Downloading data from
18104320/18101984 [==============================] - 2s 0us/step
1,000 Chinese sentences
Downloading data from
28106752/28106064 [==============================] - 2s 0us/step
1,000 English sentences
Downloading data from
24854528/24849511 [==============================] - 2s 0us/step
1,000 Russian sentences
1,000 Spanish sentences

Using a pre-trained model to transform sentences into vectors

We compute embeddings in batches so that they fit in the GPU's RAM.

# Takes about 3 minutes

batch_size = 2048
language_to_embeddings = {}
for language_code, zip_file, news_file, language_name in corpus_metadata:
  print('\nComputing {} embeddings'.format(language_name))
  with tqdm(total=len(language_to_sentences[language_code])) as pbar:
    for batch in pd.read_csv(language_to_news_path[language_code], sep='\t',
                             header=None, chunksize=batch_size):
      language_to_embeddings.setdefault(language_code, []).extend(embed_text(batch[0]))
      pbar.update(len(batch))
Computing Arabic embeddings
83178it [00:30, 2768.60it/s]

Computing Chinese embeddings
69206it [00:18, 3664.60it/s]

Computing English embeddings
238853it [00:37, 6319.00it/s]

Computing Russian embeddings
190092it [00:34, 5589.16it/s]

Computing Spanish embeddings
238819it [00:41, 5754.02it/s]
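The chunked read above streams the corpus through the encoder one batch at a time. The same batching idea can be sketched independently of pandas; the helper name and batch size below are illustrative:

```python
def batched(items, batch_size):
    # Yield successive fixed-size slices; the last batch may be shorter.
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

sentences = ['s{}'.format(i) for i in range(5)]
for batch in batched(sentences, 2):
    print(batch)  # ['s0', 's1'], then ['s2', 's3'], then ['s4']
```

Keeping each batch small enough to fit in accelerator memory is the only constraint; larger batches simply amortize the per-call overhead of the model.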

Building an index of semantic vectors

We use the SimpleNeighbors library---which is a wrapper for the Annoy library---to efficiently look up results from the corpus.


# Takes about 8 minutes

num_index_trees = 40
language_name_to_index = {}
embedding_dimensions = len(list(language_to_embeddings.values())[0][0])
for language_code, zip_file, news_file, language_name in corpus_metadata:
  print('\nAdding {} embeddings to index'.format(language_name))
  index = SimpleNeighbors(embedding_dimensions, metric='dot')

  for i in trange(len(language_to_sentences[language_code])):
    index.add_one(language_to_sentences[language_code][i], language_to_embeddings[language_code][i])

  print('Building {} index with {} trees...'.format(language_name, num_index_trees))
  index.build(n=num_index_trees)
  language_name_to_index[language_name] = index
Adding Arabic embeddings to index
100%|██████████| 1000/1000 [02:06<00:00,  7.90it/s]
Building Arabic index with 40 trees...

Adding Chinese embeddings to index
100%|██████████| 1000/1000 [02:05<00:00,  7.99it/s]
Building Chinese index with 40 trees...

Adding English embeddings to index
100%|██████████| 1000/1000 [02:07<00:00,  7.86it/s]
Building English index with 40 trees...

Adding Russian embeddings to index
100%|██████████| 1000/1000 [02:06<00:00,  7.91it/s]
Building Russian index with 40 trees...

Adding Spanish embeddings to index
100%|██████████| 1000/1000 [02:07<00:00,  7.84it/s]
Building Spanish index with 40 trees...
CPU times: user 11min 21s, sys: 2min 14s, total: 13min 35s
Wall time: 10min 33s
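The Annoy-backed index returns approximate nearest neighbors; on a small corpus the exact answer can be computed by brute force, which is what the index approximates. A minimal sketch with random unit vectors standing in for real embeddings (the names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "corpus": 100 unit-normalized embedding vectors of dimension 16.
corpus = rng.normal(size=(100, 16))
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

def nearest(query, embeddings, n=3):
    # Exact dot-product search over all rows: the result the
    # approximate index tries to reproduce in sub-linear time.
    scores = embeddings @ query
    return np.argsort(-scores)[:n]

query = corpus[7]  # query identical to corpus item 7
top = nearest(query, corpus, n=3)
print(top)  # item 7 ranks first, since it matches the query exactly
```

Brute force is O(corpus size) per query; the tree-based index trades a little accuracy for much faster lookups, with more trees giving better recall at the cost of build time.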


# Takes about 13 minutes

num_index_trees = 60
print('Computing mixed-language index')
combined_index = SimpleNeighbors(embedding_dimensions, metric='dot')
for language_code, zip_file, news_file, language_name in corpus_metadata:
  print('Adding {} embeddings to mixed-language index'.format(language_name))
  for i in trange(len(language_to_sentences[language_code])):
    annotated_sentence = '({}) {}'.format(language_name, language_to_sentences[language_code][i])
    combined_index.add_one(annotated_sentence, language_to_embeddings[language_code][i])

print('Building mixed-language index with {} trees...'.format(num_index_trees))
combined_index.build(n=num_index_trees)
Computing mixed-language index
Adding Arabic embeddings to mixed-language index
100%|██████████| 1000/1000 [02:06<00:00,  7.92it/s]
Adding Chinese embeddings to mixed-language index
100%|██████████| 1000/1000 [02:05<00:00,  7.95it/s]
Adding English embeddings to mixed-language index
100%|██████████| 1000/1000 [02:06<00:00,  7.88it/s]
Adding Russian embeddings to mixed-language index
100%|██████████| 1000/1000 [02:04<00:00,  8.03it/s]
Adding Spanish embeddings to mixed-language index
100%|██████████| 1000/1000 [02:06<00:00,  7.90it/s]

Building mixed-language index with 60 trees...
CPU times: user 11min 18s, sys: 2min 13s, total: 13min 32s
Wall time: 10min 30s

Verify that the semantic-similarity search engine works

In this section we will demonstrate:

  1. Semantic-search capabilities: retrieving sentences from the corpus that are semantically similar to a given query.
  2. Multilingual capabilities: doing so in multiple languages when the query language and the index language match.
  3. Cross-lingual capabilities: issuing queries in a language distinct from that of the indexed corpus.
  4. Mixed-language corpus: all of the above on a single index containing entries from all languages.

Semantic-search cross-lingual capabilities

In this section we show how to retrieve sentences related to a set of sample English sentences. Things to try:

  • Try a few different sample sentences
  • Try changing the number of returned results (they are returned in order of similarity)
  • Try the cross-lingual capabilities by returning results in different languages (you may want to run some results through Google Translate into your native language as a sanity check)

sample_query = 'The stock market fell four points.' 
index_language = 'English' 
num_results = 10 

query_embedding = embed_text(sample_query)[0]
search_results = language_name_to_index[index_language].nearest(query_embedding, n=num_results)

print('{} sentences similar to: "{}"\n'.format(index_language, sample_query))