Semantic search with approximate nearest neighbors and text embeddings

This tutorial shows how to generate embeddings from a TensorFlow Hub (TF-Hub) module given input data, and how to build an approximate nearest neighbors (ANN) index from the extracted embeddings. The index can then be used for real-time similarity matching and retrieval.

With a large corpus of data, it is not efficient to perform exact matching by scanning the whole repository in real time to find the items most similar to a given query. Instead, we use an approximate similarity matching algorithm, trading a little accuracy in finding the exact nearest-neighbor matches for a significant boost in speed.

In this tutorial, we show an example of real-time text search over a corpus of news headlines to find the headlines most similar to a query. Unlike keyword search, this captures the semantic similarity encoded in the text embeddings.

The steps of this tutorial are:

  1. Download sample data.
  2. Generate embeddings for the data using a TF-Hub model.
  3. Build an ANN index for the embeddings.
  4. Use the index for similarity matching.

We use Apache Beam to generate the embeddings from the TF-Hub model, and Spotify's ANNOY library to build the approximate nearest neighbors index.

More models

For models that have the same architecture but were trained on different languages, refer to this collection. Here you can find all text embeddings currently hosted on tfhub.dev.

Setup

Install the required libraries.

pip install apache_beam
pip install 'scikit_learn~=0.23.0'  # For gaussian_random_matrix.
pip install annoy

Import the required libraries

import os
import sys
import pickle
from collections import namedtuple
from datetime import datetime
import numpy as np
import apache_beam as beam
from apache_beam.transforms import util
import tensorflow as tf
import tensorflow_hub as hub
import annoy
from sklearn.random_projection import gaussian_random_matrix
2022-12-14 21:52:16.005809: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2022-12-14 21:52:16.005909: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2022-12-14 21:52:16.005919: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
print('TF version: {}'.format(tf.__version__))
print('TF-Hub version: {}'.format(hub.__version__))
print('Apache Beam version: {}'.format(beam.__version__))
TF version: 2.11.0
TF-Hub version: 0.12.0
Apache Beam version: 2.43.0

1. Download sample data

The A Million News Headlines dataset contains news headlines published over a period of 15 years by the reputable Australian Broadcasting Corporation (ABC). This news dataset provides a summarized historical record of noteworthy events around the globe from early 2003 to late 2017, with a more granular focus on Australia.

Format: two tab-separated columns: 1) publish date and 2) headline text. We are only interested in the headline text.

wget 'https://dataverse.harvard.edu/api/access/datafile/3450625?format=tab&gbrecs=true' -O raw.tsv
wc -l raw.tsv
head raw.tsv
--2022-12-14 21:52:17--  https://dataverse.harvard.edu/api/access/datafile/3450625?format=tab&gbrecs=true
Resolving dataverse.harvard.edu (dataverse.harvard.edu)... 52.54.15.150, 44.213.44.146, 52.23.87.139
Connecting to dataverse.harvard.edu (dataverse.harvard.edu)|52.54.15.150|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 57600231 (55M) [text/tab-separated-values]
Saving to: ‘raw.tsv’

raw.tsv             100%[===================>]  54.93M  68.6MB/s    in 0.8s    

2022-12-14 21:52:18 (68.6 MB/s) - ‘raw.tsv’ saved [57600231/57600231]

1103664 raw.tsv
publish_date    headline_text
20030219    "aba decides against community broadcasting licence"
20030219    "act fire witnesses must be aware of defamation"
20030219    "a g calls for infrastructure protection summit"
20030219    "air nz staff in aust strike for pay rise"
20030219    "air nz strike to affect australian travellers"
20030219    "ambitious olsson wins triple jump"
20030219    "antic delighted with record breaking barca"
20030219    "aussie qualifier stosur wastes four memphis match"
20030219    "aust addresses un security council over iraq"

For simplicity, we keep only the headline text and remove the publish date.

!rm -r corpus
!mkdir corpus

with open('corpus/text.txt', 'w') as out_file:
  with open('raw.tsv', 'r') as in_file:
    for line in in_file:
      headline = line.split('\t')[1].strip().strip('"')
      out_file.write(headline+"\n")
rm: cannot remove 'corpus': No such file or directory
tail corpus/text.txt
severe storms forecast for nye in south east queensland
snake catcher pleads for people not to kill reptiles
south australia prepares for party to welcome new year
strikers cool off the heat with big win in adelaide
stunning images from the sydney to hobart yacht
the ashes smiths warners near miss liven up boxing day test
timelapse: brisbanes new year fireworks
what 2017 meant to the kids of australia
what the papodopoulos meeting may mean for ausus
who is george papadopoulos the former trump campaign aide

2. Generate embeddings for the data

In this tutorial, we use the Neural Network Language Model (NNLM) to generate embeddings for the headline data. The sentence embeddings can then easily be used to compute sentence-level meaning similarity. We run the embedding generation process using Apache Beam.
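
As a quick standalone illustration (separate from the Beam pipeline below), sentence-level similarity can be computed directly from the NNLM embeddings. This is a sketch using the same nnlm-en-dim128 model; the two example headlines are made up:

import numpy as np
import tensorflow_hub as hub

# Load the NNLM sentence embedding model from TF-Hub.
nnlm = hub.load('https://tfhub.dev/google/nnlm-en-dim128/2')

# Embed two headlines and compare them; a higher cosine means closer meaning.
a, b = nnlm(['world leaders meet over economic crisis',
             'global markets react to financial turmoil']).numpy()
print('cosine similarity: {:.3f}'.format(
    np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))))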

Embedding extraction method

embed_fn = None

def generate_embeddings(text, model_url, random_projection_matrix=None):
  # Beam will run this function in different processes that need to
  # import hub and load embed_fn (if not previously loaded)
  global embed_fn
  if embed_fn is None:
    embed_fn = hub.load(model_url)
  embedding = embed_fn(text).numpy()
  if random_projection_matrix is not None:
    embedding = embedding.dot(random_projection_matrix)
  return text, embedding
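
Called directly (outside Beam), the method can be smoke-tested on a small batch. A sketch, with made-up sample headlines:

# The first call loads the model; without a projection matrix the
# embeddings keep the model's native dimensionality.
texts, embeddings = generate_embeddings(
    ['hello machine learning', 'annoy builds ann indexes'],
    'https://tfhub.dev/google/nnlm-en-dim128/2')
print(embeddings.shape)  # (2, 128) for nnlm-en-dim128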

Convert to tf.Example method

def to_tf_example(entries):
  examples = []

  text_list, embedding_list = entries
  for i in range(len(text_list)):
    text = text_list[i]
    embedding = embedding_list[i]

    features = {
        'text': tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[text.encode('utf-8')])),
        'embedding': tf.train.Feature(
            float_list=tf.train.FloatList(value=embedding.tolist()))
    }

    example = tf.train.Example(
        features=tf.train.Features(
            feature=features)).SerializeToString(deterministic=True)

    examples.append(example)

  return examples
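
A quick round-trip check of the serialization, using a made-up batch of one text and a 3-dimensional embedding:

# Serialize a toy batch, then parse the proto back and inspect the features.
serialized = to_tf_example((['hello'], np.array([[0.1, 0.2, 0.3]])))[0]
parsed = tf.train.Example.FromString(serialized)
print(parsed.features.feature['text'].bytes_list.value)       # [b'hello']
print(parsed.features.feature['embedding'].float_list.value)  # ~[0.1, 0.2, 0.3] as float32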

Beam pipeline

def run_hub2emb(args):
  '''Runs the embedding generation pipeline'''

  options = beam.options.pipeline_options.PipelineOptions(**args)
  args = namedtuple("options", args.keys())(*args.values())

  with beam.Pipeline(args.runner, options=options) as pipeline:
    (
        pipeline
        | 'Read sentences from files' >> beam.io.ReadFromText(
            file_pattern=args.data_dir)
        | 'Batch elements' >> util.BatchElements(
            min_batch_size=args.batch_size, max_batch_size=args.batch_size)
        | 'Generate embeddings' >> beam.Map(
            generate_embeddings, args.model_url, args.random_projection_matrix)
        | 'Encode to tf example' >> beam.FlatMap(to_tf_example)
        | 'Write to TFRecords files' >> beam.io.WriteToTFRecord(
            file_path_prefix='{}/emb'.format(args.output_dir),
            file_name_suffix='.tfrecords')
    )

Generate random projection weight matrix

Random projection is a simple yet powerful technique for reducing the dimensionality of a set of points that lie in Euclidean space. For the theoretical background, see the Johnson-Lindenstrauss lemma.

Reducing the dimensionality of the embeddings with random projection means less time is needed to build and query the ANN index.

In this tutorial we use Gaussian random projection from the Scikit-learn library.
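
As a small sanity check of the lemma (not part of the pipeline), the sketch below projects random 128-dimensional points down to 64 dimensions and verifies that pairwise distances are roughly preserved; all numbers are illustrative:

rng = np.random.default_rng(0)
points = rng.normal(size=(100, 128))
# Gaussian entries scaled by 1/sqrt(k) approximately preserve norms.
projection = rng.normal(size=(128, 64)) / np.sqrt(64)
projected = points.dot(projection)

def pairwise_distances(x):
  return np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)

# Compare distances before and after projection (upper triangle only,
# excluding the zero diagonal). Ratios should concentrate around 1.0.
iu = np.triu_indices(len(points), k=1)
ratios = pairwise_distances(projected)[iu] / pairwise_distances(points)[iu]
print('mean ratio: {:.3f}, std: {:.3f}'.format(ratios.mean(), ratios.std()))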

def generate_random_projection_weights(original_dim, projected_dim):
  random_projection_matrix = None
  random_projection_matrix = gaussian_random_matrix(
      n_components=projected_dim, n_features=original_dim).T
  print("A Gaussian random weight matrix was creates with shape of {}".format(random_projection_matrix.shape))
  print('Storing random projection matrix to disk...')
  with open('random_projection_matrix', 'wb') as handle:
    pickle.dump(random_projection_matrix, 
                handle, protocol=pickle.HIGHEST_PROTOCOL)

  return random_projection_matrix

Set parameters

If you want to build an index using the original embedding space without random projection, set the projected_dim parameter to None. Note that this will slow down the indexing step for high-dimensional embeddings.
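
The parameter cell is reconstructed here from the pipeline output further down: model_url appears in the printed args, and projected_dim follows from the (128, 64) projection matrix.

model_url = 'https://tfhub.dev/google/nnlm-en-dim128/2'
projected_dim = 64  # Set to None to index the original 128-dim embeddings.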

Run the pipeline

import tempfile

output_dir = tempfile.mkdtemp()
original_dim = hub.load(model_url)(['']).shape[1]
random_projection_matrix = None

if projected_dim:
  random_projection_matrix = generate_random_projection_weights(
      original_dim, projected_dim)

args = {
    'job_name': 'hub2emb-{}'.format(datetime.utcnow().strftime('%y%m%d-%H%M%S')),
    'runner': 'DirectRunner',
    'batch_size': 1024,
    'data_dir': 'corpus/*.txt',
    'output_dir': output_dir,
    'model_url': model_url,
    'random_projection_matrix': random_projection_matrix,
}

print("Pipeline args are set.")
args
A Gaussian random weight matrix was created with shape of (128, 64)
Storing random projection matrix to disk...
Pipeline args are set.
/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/sklearn/utils/deprecation.py:86: FutureWarning: Function gaussian_random_matrix is deprecated; gaussian_random_matrix is deprecated in 0.22 and will be removed in version 0.24.
  warnings.warn(msg, category=FutureWarning)
{'job_name': 'hub2emb-221214-215229',
 'runner': 'DirectRunner',
 'batch_size': 1024,
 'data_dir': 'corpus/*.txt',
 'output_dir': '/tmpfs/tmp/tmp8mwf4nm1',
 'model_url': 'https://tfhub.dev/google/nnlm-en-dim128/2',
 'random_projection_matrix': array([[ 0.04652396,  0.08763902,  0.23433171, ..., -0.27297541,
          0.03003381,  0.03201392],
        [ 0.03699847,  0.13023345,  0.21451349, ..., -0.00677001,
         -0.14845663,  0.04364772],
        [-0.21373535, -0.05965295,  0.08734345, ...,  0.1977847 ,
         -0.19067334, -0.07191868],
        ...,
        [ 0.16077312,  0.11088097, -0.11435093, ..., -0.27603424,
          0.01509658, -0.01286358],
        [-0.06534427,  0.31156683,  0.05260638, ...,  0.15128775,
          0.04908765,  0.06651652],
        [ 0.10027883,  0.18285818,  0.00615748, ..., -0.08596848,
          0.18062248, -0.03001226]])}
print("Running pipeline...")
%time run_hub2emb(args)
print("Pipeline is done.")
WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.
Running pipeline...
WARNING:apache_beam.options.pipeline_options:Discarding unparseable args: ['-f', '/tmpfs/tmp/tmpy7whh1_l.json', '--HistoryManager.hist_file=:memory:']
WARNING:apache_beam.options.pipeline_options:Discarding invalid overrides: {'batch_size': 1024, 'data_dir': 'corpus/*.txt', 'output_dir': '/tmpfs/tmp/tmp8mwf4nm1', 'model_url': 'https://tfhub.dev/google/nnlm-en-dim128/2', 'random_projection_matrix': array([[ 0.04652396,  0.08763902,  0.23433171, ..., -0.27297541,
         0.03003381,  0.03201392],
       [ 0.03699847,  0.13023345,  0.21451349, ..., -0.00677001,
        -0.14845663,  0.04364772],
       [-0.21373535, -0.05965295,  0.08734345, ...,  0.1977847 ,
        -0.19067334, -0.07191868],
       ...,
       [ 0.16077312,  0.11088097, -0.11435093, ..., -0.27603424,
         0.01509658, -0.01286358],
       [-0.06534427,  0.31156683,  0.05260638, ...,  0.15128775,
         0.04908765,  0.06651652],
       [ 0.10027883,  0.18285818,  0.00615748, ..., -0.08596848,
         0.18062248, -0.03001226]])}
WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.
CPU times: user 32min 6s, sys: 35min 44s, total: 1h 7min 50s
Wall time: 2min 37s
Pipeline is done.
ls {output_dir}
emb-00000-of-00001.tfrecords

Read some of the generated embeddings...

embed_file = os.path.join(output_dir, 'emb-00000-of-00001.tfrecords')
sample = 5

# Create a description of the features.
feature_description = {
    'text': tf.io.FixedLenFeature([], tf.string),
    'embedding': tf.io.FixedLenFeature([projected_dim], tf.float32)
}

def _parse_example(example):
  # Parse the input `tf.Example` proto using the dictionary above.
  return tf.io.parse_single_example(example, feature_description)

dataset = tf.data.TFRecordDataset(embed_file)
for record in dataset.take(sample).map(_parse_example):
  print("{}: {}".format(record['text'].numpy().decode('utf-8'), record['embedding'].numpy()[:10]))
headline_text: [ 0.2082475  -0.0758548  -0.31624907 -0.03358706  0.17254627  0.01766025
  0.12841697  0.04604644  0.09919017 -0.33217654]
aba decides against community broadcasting licence: [ 0.09103882  0.15510604  0.04679758 -0.034989   -0.09605023  0.2866751
  0.14425828  0.10986     0.23642978 -0.07701185]
act fire witnesses must be aware of defamation: [-0.0675336  -0.05577297  0.3920365  -0.13575198  0.15832202 -0.08768208
  0.20531057  0.09696919  0.20545112 -0.00981951]
a g calls for infrastructure protection summit: [ 0.02449628  0.11818574  0.06907529 -0.1773398  -0.01997611  0.12673114
  0.01616809 -0.01666613  0.23377973  0.00869564]
air nz staff in aust strike for pay rise: [-0.06299734  0.21203837  0.16526641  0.09273504 -0.04289411  0.24168277
 -0.01821963  0.24795456 -0.11762056 -0.13406506]

3. Build the ANN index for the embeddings

ANNOY (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings for searching for points in space that are close to a given query point. It also creates large, read-only, file-based data structures that are mapped into memory. It is built and used by Spotify for music recommendations. If you are interested, you can experiment with alternatives to ANNOY, such as NGT, FAISS, and others.
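
Before building the full index, here is a minimal, self-contained sketch of the ANNOY API on toy vectors (values chosen only for illustration):

# Index three 4-dimensional vectors with the angular (cosine) metric.
toy_index = annoy.AnnoyIndex(4, metric='angular')
toy_index.add_item(0, [1.0, 0.0, 0.0, 0.0])
toy_index.add_item(1, [0.0, 1.0, 0.0, 0.0])
toy_index.add_item(2, [0.9, 0.1, 0.0, 0.0])
toy_index.build(n_trees=10)

# The two nearest neighbors of the query vector, by angular distance.
print(toy_index.get_nns_by_vector([1.0, 0.05, 0.0, 0.0], 2))  # [0, 2]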

def build_index(embedding_files_pattern, index_filename, vector_length, 
    metric='angular', num_trees=100):
  '''Builds an ANNOY index'''

  annoy_index = annoy.AnnoyIndex(vector_length, metric=metric)
  # Mapping between the item and its identifier in the index
  mapping = {}

  embed_files = tf.io.gfile.glob(embedding_files_pattern)
  num_files = len(embed_files)
  print('Found {} embedding file(s).'.format(num_files))

  item_counter = 0
  for i, embed_file in enumerate(embed_files):
    print('Loading embeddings in file {} of {}...'.format(i+1, num_files))
    dataset = tf.data.TFRecordDataset(embed_file)
    for record in dataset.map(_parse_example):
      text = record['text'].numpy().decode("utf-8")
      embedding = record['embedding'].numpy()
      mapping[item_counter] = text
      annoy_index.add_item(item_counter, embedding)
      item_counter += 1
      if item_counter % 100000 == 0:
        print('{} items loaded to the index'.format(item_counter))

  print('A total of {} items added to the index'.format(item_counter))

  print('Building the index with {} trees...'.format(num_trees))
  annoy_index.build(n_trees=num_trees)
  print('Index is successfully built.')

  print('Saving index to disk...')
  annoy_index.save(index_filename)
  print('Index is saved to disk.')
  print("Index file size: {} GB".format(
    round(os.path.getsize(index_filename) / float(1024 ** 3), 2)))
  annoy_index.unload()

  print('Saving mapping to disk...')
  with open(index_filename + '.mapping', 'wb') as handle:
    pickle.dump(mapping, handle, protocol=pickle.HIGHEST_PROTOCOL)
  print('Mapping is saved to disk.')
  print("Mapping file size: {} MB".format(
    round(os.path.getsize(index_filename + '.mapping') / float(1024 ** 2), 2)))
embedding_files = "{}/emb-*.tfrecords".format(output_dir)
embedding_dimension = projected_dim
index_filename = "index"

!rm {index_filename}
!rm {index_filename}.mapping

%time build_index(embedding_files, index_filename, embedding_dimension)
rm: cannot remove 'index': No such file or directory
rm: cannot remove 'index.mapping': No such file or directory
Found 1 embedding file(s).
Loading embeddings in file 1 of 1...
100000 items loaded to the index
200000 items loaded to the index
300000 items loaded to the index
400000 items loaded to the index
500000 items loaded to the index
600000 items loaded to the index
700000 items loaded to the index
800000 items loaded to the index
900000 items loaded to the index
1000000 items loaded to the index
1100000 items loaded to the index
A total of 1103664 items added to the index
Building the index with 100 trees...
Index is successfully built.
Saving index to disk...
Index is saved to disk.
Index file size: 1.6 GB
Saving mapping to disk...
Mapping is saved to disk.
Mapping file size: 50.61 MB
CPU times: user 9min 31s, sys: 55.7 s, total: 10min 27s
Wall time: 3min 44s
ls
corpus         random_projection_matrix
index          raw.tsv
index.mapping  tf2_semantic_approximate_nearest_neighbors.ipynb

4. Use the index for similarity matching

Now we can use the ANN index to find news headlines that are semantically close to an input query.

Load the index and the mapping files

index = annoy.AnnoyIndex(embedding_dimension, metric='angular')
index.load(index_filename, prefault=True)
print('Annoy index is loaded.')
with open(index_filename + '.mapping', 'rb') as handle:
  mapping = pickle.load(handle)
print('Mapping file is loaded.')
Annoy index is loaded.
Mapping file is loaded.

Similarity matching method

def find_similar_items(embedding, num_matches=5):
  '''Finds similar items to a given embedding in the ANN index'''
  ids = index.get_nns_by_vector(
      embedding, num_matches, search_k=-1, include_distances=False)
  items = [mapping[i] for i in ids]
  return items

Extract the embedding from a given query

# Load the TF-Hub model
print("Loading the TF-Hub model...")
%time embed_fn = hub.load(model_url)
print("TF-Hub model is loaded.")

random_projection_matrix = None
if os.path.exists('random_projection_matrix'):
  print("Loading random projection matrix...")
  with open('random_projection_matrix', 'rb') as handle:
    random_projection_matrix = pickle.load(handle)
  print('random projection matrix is loaded.')

def extract_embeddings(query):
  '''Generates the embedding for the query'''
  query_embedding =  embed_fn([query])[0].numpy()
  if random_projection_matrix is not None:
    query_embedding = query_embedding.dot(random_projection_matrix)
  return query_embedding
Loading the TF-Hub model...
CPU times: user 546 ms, sys: 329 ms, total: 875 ms
Wall time: 904 ms
TF-Hub model is loaded.
Loading random projection matrix...
random projection matrix is loaded.
extract_embeddings("Hello Machine Learning!")[:10]
array([ 0.035803  , -0.08583559, -0.35316336,  0.14571103, -0.02016695,
        0.07517727,  0.03061715, -0.21651856, -0.26817065, -0.05891859])

Enter a query to find the most similar items
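
This cell is a reconstruction consistent with the output below; the query string, inferred from the top result, is an assumption:

query = "confronting global challenges"

print("Generating embedding for the query...")
%time query_embedding = extract_embeddings(query)

print("")
print("Finding relevant items in the index...")
%time items = find_similar_items(query_embedding, 10)

print("")
print("Results:")
print("=========")
for item in items:
  print(item)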

Generating embedding for the query...
CPU times: user 4.96 ms, sys: 0 ns, total: 4.96 ms
Wall time: 2.27 ms

Finding relevant items in the index...
CPU times: user 475 µs, sys: 0 ns, total: 475 µs
Wall time: 488 µs

Results:
=========
confronting global challenges
european leaders meet over global economic crisis
pacific island nations facing business challenges
world bank issues global warming warning
conference examines challenges facing major cities
climate changing our landscape research
global warming carbon emissions emerging economies
the meat industry is facing a growing global push
global approach urged in tropics development
economic pressures on emerging markets

Learn more

You can learn more about TensorFlow at tensorflow.org and see the TF-Hub API documentation at tensorflow.org/hub. Find available TensorFlow Hub models, including more text embedding models and image feature vector models, at tfhub.dev.

Also, check out the Machine Learning Crash Course, Google's fast-paced, practical introduction to machine learning.