Load text


This tutorial provides an example of how to use tf.data.TextLineDataset to load examples from text files. TextLineDataset is designed to create a dataset from a text file, in which each example is a line of text from the original file. This is potentially useful for any text data that is primarily line-based (for example, poetry or error logs).
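As a quick preview (a minimal sketch using a hypothetical lines.txt file, not one of the tutorial's files), TextLineDataset yields one scalar string tensor per line of the file:

import tensorflow as tf

# Hypothetical file; each element of the dataset is one line of text.
preview_dataset = tf.data.TextLineDataset('lines.txt')
for line in preview_dataset.take(3):
  print(line.numpy())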

In this tutorial, we'll use three different English translations of the same work, Homer's Iliad, and train a model to identify the translator given a single line of text.

Setup

pip install -q tfds-nightly

import tensorflow as tf

import tensorflow_datasets as tfds
import os

The texts of the three translations are by:

  • William Cowper
  • Edward, Earl of Derby
  • Samuel Butler

The text files used in this tutorial have undergone some typical preprocessing tasks, such as removing the document header and footer, line numbers, and chapter titles. Download these lightly munged files locally.

DIRECTORY_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'
FILE_NAMES = ['cowper.txt', 'derby.txt', 'butler.txt']

for name in FILE_NAMES:
  text_dir = tf.keras.utils.get_file(name, origin=DIRECTORY_URL+name)
  
parent_dir = os.path.dirname(text_dir)

parent_dir
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/illiad/cowper.txt
819200/815980 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/illiad/derby.txt
811008/809730 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/illiad/butler.txt
811008/807992 [==============================] - 0s 0us/step

'/home/kbuilder/.keras/datasets'

Load text into datasets

Iterate through the files, loading each one into its own dataset.

Each example needs to be individually labeled, so use tf.data.Dataset.map to apply a labeler function to each one. This will iterate over every example in the dataset, returning (example, label) pairs.

def labeler(example, index):
  return example, tf.cast(index, tf.int64)  

labeled_data_sets = []

for i, file_name in enumerate(FILE_NAMES):
  lines_dataset = tf.data.TextLineDataset(os.path.join(parent_dir, file_name))
  labeled_dataset = lines_dataset.map(lambda ex: labeler(ex, i))
  labeled_data_sets.append(labeled_dataset)

Combine these labeled datasets into a single dataset, and shuffle it.

BUFFER_SIZE = 50000
BATCH_SIZE = 64
TAKE_SIZE = 5000
all_labeled_data = labeled_data_sets[0]
for labeled_dataset in labeled_data_sets[1:]:
  all_labeled_data = all_labeled_data.concatenate(labeled_dataset)
  
all_labeled_data = all_labeled_data.shuffle(
    BUFFER_SIZE, reshuffle_each_iteration=False)

You can use tf.data.Dataset.take and print to see what the (example, label) pairs look like. The numpy property shows each Tensor's value.

for ex in all_labeled_data.take(5):
  print(ex)
(<tf.Tensor: shape=(), dtype=string, numpy=b'Of Troy, seeing the body borne away,'>, <tf.Tensor: shape=(), dtype=int64, numpy=0>)
(<tf.Tensor: shape=(), dtype=string, numpy=b"With her own children, for her husband's sake.">, <tf.Tensor: shape=(), dtype=int64, numpy=1>)
(<tf.Tensor: shape=(), dtype=string, numpy=b'That task refused, but went; yet neither swift'>, <tf.Tensor: shape=(), dtype=int64, numpy=0>)
(<tf.Tensor: shape=(), dtype=string, numpy=b'Shouldst thou from spear or sword receive a wound,'>, <tf.Tensor: shape=(), dtype=int64, numpy=1>)
(<tf.Tensor: shape=(), dtype=string, numpy=b'Then to Idaean Jove, the cloud-girt son'>, <tf.Tensor: shape=(), dtype=int64, numpy=1>)

Encode text lines as numbers

Machine learning models work on numbers, not words, so the string values need to be converted into lists of numbers. To do that, map each unique word to a unique integer.
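Conceptually, this is just a lookup table from word to integer. A minimal illustration in plain Python (the real vocabulary is built from the dataset in the next step; the names here are only for illustration):

# Toy word-to-integer mapping; id 0 is left free for padding, as done later.
toy_words = "of troy seeing the body borne away".split()
toy_vocab = {word: i + 1 for i, word in enumerate(sorted(set(toy_words)))}
print([toy_vocab[word] for word in toy_words])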

Build vocabulary

First, build a vocabulary by tokenizing the text into a collection of individual unique words. There are a few ways to do this in both TensorFlow and Python. For this tutorial:

  1. Iterate over each example's numpy value.
  2. Use tfds.deprecated.text.Tokenizer to split it into tokens.
  3. Collect these tokens into a Python set, to remove duplicates.
  4. Get the size of the vocabulary for later use.
tokenizer = tfds.deprecated.text.Tokenizer()

vocabulary_set = set()
for text_tensor, _ in all_labeled_data:
  some_tokens = tokenizer.tokenize(text_tensor.numpy())
  vocabulary_set.update(some_tokens)

vocab_size = len(vocabulary_set)
vocab_size
17178

Encode examples

Create an encoder by passing the vocabulary_set to tfds.deprecated.text.TokenTextEncoder. The encoder's encode method takes in a string of text and returns a list of integers.

encoder = tfds.deprecated.text.TokenTextEncoder(vocabulary_set)

You can try this on a single line to see what the output looks like.

example_text = next(iter(all_labeled_data))[0].numpy()
print(example_text)
b'Of Troy, seeing the body borne away,'

encoded_example = encoder.encode(example_text)
print(encoded_example)
[16791, 1581, 325, 1885, 16189, 15504, 15638]
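As a quick check, the integers can be mapped back to tokens with the encoder's decode method (punctuation stripped by the tokenizer is not restored):

decoded_example = encoder.decode(encoded_example)
print(decoded_example)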

Now run the encoder on the dataset by wrapping it in tf.py_function and passing that to the dataset's map method.

def encode(text_tensor, label):
  encoded_text = encoder.encode(text_tensor.numpy())
  return encoded_text, label

You want to use Dataset.map to apply this function to each element of the dataset. Dataset.map runs in graph mode.

  • Graph tensors do not have a value.
  • In graph mode you can only use TensorFlow Ops and functions.

So you can't .map this function directly: you need to wrap it in a tf.py_function. The tf.py_function will pass regular tensors (with a value and a .numpy() method to access it) to the wrapped Python function.

def encode_map_fn(text, label):
  # `tf.py_function` doesn't set the shape of the returned tensors.
  encoded_text, label = tf.py_function(encode, 
                                       inp=[text, label], 
                                       Tout=(tf.int64, tf.int64))

  # `tf.data.Datasets` work best if all components have a shape set
  #  so set the shapes manually: 
  encoded_text.set_shape([None])
  label.set_shape([])

  return encoded_text, label


all_encoded_data = all_labeled_data.map(encode_map_fn)

Split the dataset into test and train batches

Use tf.data.Dataset.take and tf.data.Dataset.skip to create a small test dataset and a larger training set.

Before being passed into the model, the datasets need to be batched. Typically, the examples inside of a batch need to be the same size and shape. But the examples in these datasets are not all the same size, since each line of text has a different number of words. So use tf.data.Dataset.padded_batch (instead of batch) to pad the examples to the same size.

train_data = all_encoded_data.skip(TAKE_SIZE).shuffle(BUFFER_SIZE)
train_data = train_data.padded_batch(BATCH_SIZE)

test_data = all_encoded_data.take(TAKE_SIZE)
test_data = test_data.padded_batch(BATCH_SIZE)

Now, test_data and train_data are not collections of (example, label) pairs, but collections of batches. Each batch is a pair of (many examples, many labels) represented as arrays.

To illustrate:

sample_text, sample_labels = next(iter(test_data))

sample_text[0], sample_labels[0]
(<tf.Tensor: shape=(16,), dtype=int64, numpy=
 array([16791,  1581,   325,  1885, 16189, 15504, 15638,     0,     0,
            0,     0,     0,     0,     0,     0,     0])>,
 <tf.Tensor: shape=(), dtype=int64, numpy=0>)

Since we have introduced a new token encoding (the zero used for padding), the vocabulary size has increased by one.

vocab_size += 1

Build the model

model = tf.keras.Sequential()

The first layer converts integer representations to dense vector embeddings. See the word embeddings tutorial for more details.

model.add(tf.keras.layers.Embedding(vocab_size, 64))

The next layer is a Long Short-Term Memory layer, which lets the model understand words in their context with other words. A bidirectional wrapper on the LSTM helps it learn about each datapoint in relation to the datapoints that come before and after it.

model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)))

Finally, we'll have a series of one or more densely connected layers, with the last one being the output layer. The output layer produces a score (a logit) for each label; the label with the highest score is the model's prediction of an example's label.

# One or more dense layers.
# Edit the list in the `for` line to experiment with layer sizes.
for units in [64, 64]:
  model.add(tf.keras.layers.Dense(units, activation='relu'))

# Output layer. The first argument is the number of labels.
model.add(tf.keras.layers.Dense(3))

Finally, compile the model. For a softmax categorization model, use sparse_categorical_crossentropy as the loss function. You can try other optimizers, but adam is very common.

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

Train the model

This model running on this data produces decent results (about 83%).

model.fit(train_data, epochs=3, validation_data=test_data)
Epoch 1/3
697/697 [==============================] - 9s 13ms/step - loss: 0.4998 - accuracy: 0.7583 - val_loss: 0.3916 - val_accuracy: 0.8204
Epoch 2/3
697/697 [==============================] - 9s 12ms/step - loss: 0.2875 - accuracy: 0.8772 - val_loss: 0.3948 - val_accuracy: 0.8234
Epoch 3/3
697/697 [==============================] - 8s 12ms/step - loss: 0.2151 - accuracy: 0.9089 - val_loss: 0.4081 - val_accuracy: 0.8228

<tensorflow.python.keras.callbacks.History at 0x7f54dc018dd8>

eval_loss, eval_acc = model.evaluate(test_data)

print('\nEval loss: {:.3f}, Eval accuracy: {:.3f}'.format(eval_loss, eval_acc))
79/79 [==============================] - 1s 17ms/step - loss: 0.4081 - accuracy: 0.8228

Eval loss: 0.408, Eval accuracy: 0.823
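To see the model make a prediction on a single line, here is a minimal sketch that reuses the first example line shown earlier: encode it with the same encoder, wrap it in a batch of one, and take the argmax over the three logits to pick the predicted translator.

# The first example line printed earlier in this tutorial.
sample_line = 'Of Troy, seeing the body borne away,'
encoded_line = encoder.encode(sample_line)

# The model expects a batch, so add a leading batch dimension of 1.
logits = model.predict(tf.constant([encoded_line], dtype=tf.int64))
predicted_label = tf.argmax(logits, axis=1).numpy()[0]

print('Predicted translator file:', FILE_NAMES[predicted_label])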