
Text generation with an RNN


This tutorial demonstrates how to generate text using a character-based RNN. We will work with a dataset of Shakespeare's writing from Andrej Karpathy's The Unreasonable Effectiveness of Recurrent Neural Networks. Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.

This tutorial includes runnable code implemented using tf.keras and eager execution. The following is sample output when the model in this tutorial was trained for 30 epochs and started with the string "Q":

QUEENE:
I had thought thou hadst a Roman; for the oracle,
Thus by All bids the man against the word,
Which are so weak of care, by old care done;
Your children were in your holy love,
And the precipitation through the bleeding throne.

BISHOP OF ELY:
Marry, and will, my lord, to weep in such a one were prettiest;
Yet now I was adopted heir
Of the world's lamentable day,
To watch the next way with his father with his face?

ESCALUS:
The cause why then we are all resolved more sons.

VOLUMNIA:
O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,
And love and pale as any will to that word.

QUEEN ELIZABETH:
But how long have I heard the soul for this world,
And show his hands of life be proved to stand.

PETRUCHIO:
I say he look'd on, if I must be content
To stay him from the fatal of our country's bliss.
His lordship pluck'd from this sentence then for prey,
And then let us twain, being the moon,
were she such a case as fills m

While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:

  • The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.

  • The structure of the output resembles a play: blocks of text generally begin with a speaker name in all capital letters, as in the dataset.

  • As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure.

Setup

Import TensorFlow and other libraries

from __future__ import absolute_import, division, print_function, unicode_literals

try:
  # %tensorflow_version only exists in Colab.
  %tensorflow_version 2.x
except Exception:
  pass
import tensorflow as tf

import numpy as np
import os
import time

Download the Shakespeare dataset

Change the following line to run this code on your own data.

path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt
1122304/1115394 [==============================] - 0s 0us/step

Read the data

First, look at the text:

# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# length of text is the number of characters in it
print ('Length of text: {} characters'.format(len(text)))
Length of text: 1115394 characters
# Take a look at the first 250 characters in text
print(text[:250])
First Citizen:
Before we proceed any further, hear me speak.

All:
Speak, speak.

First Citizen:
You are all resolved rather to die than to famish?

All:
Resolved. resolved.

First Citizen:
First, you know Caius Marcius is chief enemy to the people.

# The unique characters in the file
vocab = sorted(set(text))
print ('{} unique characters'.format(len(vocab)))
65 unique characters

Process the text

Vectorize the text

Before training, we need to map strings to a numerical representation. Create two lookup tables: one mapping characters to numbers, and another for numbers to characters.

# Creating a mapping from unique characters to indices
char2idx = {u:i for i, u in enumerate(vocab)}
idx2char = np.array(vocab)

text_as_int = np.array([char2idx[c] for c in text])

Now we have an integer representation for each character. Notice that each character is mapped to an index from 0 to len(unique).

print('{')
for char,_ in zip(char2idx, range(20)):
    print('  {:4s}: {:3d},'.format(repr(char), char2idx[char]))
print('  ...\n}')
{
  'h' :  46,
  'k' :  49,
  '.' :   8,
  'm' :  51,
  'H' :  20,
  'K' :  23,
  'A' :  13,
  'V' :  34,
  'X' :  36,
  'u' :  59,
  '?' :  12,
  'r' :  56,
  'L' :  24,
  'T' :  32,
  'g' :  45,
  "'" :   5,
  'R' :  30,
  's' :  57,
  'e' :  43,
  'U' :  33,
  ...
}
# Show how the first 13 characters from the text are mapped to integers
print ('{} ---- characters mapped to int ---- > {}'.format(repr(text[:13]), text_as_int[:13]))
'First Citizen' ---- characters mapped to int ---- > [18 47 56 57 58  1 15 47 58 47 64 43 52]

The prediction task

Given a character, or a sequence of characters, what is the most probable next character? This is the task we're training the model to perform. The input to the model will be a sequence of characters, and we train the model to predict the output—the following character at each time step.

Since RNNs maintain an internal state that depends on the previously seen elements, the question becomes: given all the characters seen so far, what is the next character?

Create training examples and targets

Next divide the text into example sequences. Each input sequence will contain seq_length characters from the text.

For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.

So break the text into chunks of seq_length+1. For example, say seq_length is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".
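
Here is a minimal sketch of that "Hello" split in plain Python (illustrative only; the actual pipeline below uses tf.data):

chunk = "Hello"                        # seq_length + 1 = 5 characters
input_text, target_text = chunk[:-1], chunk[1:]
print(input_text, target_text)         # Hell ello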

To do this first use the tf.data.Dataset.from_tensor_slices function to convert the text vector into a stream of character indices.

# The maximum length sequence we want for a single input in characters
seq_length = 100
examples_per_epoch = len(text)//seq_length

# Create training examples / targets
char_dataset = tf.data.Dataset.from_tensor_slices(text_as_int)

for i in char_dataset.take(5):
  print(idx2char[i.numpy()])
F
i
r
s
t

The batch method lets us easily convert these individual characters to sequences of the desired size.

sequences = char_dataset.batch(seq_length+1, drop_remainder=True)

for item in sequences.take(5):
  print(repr(''.join(idx2char[item.numpy()])))
'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou '
'are all resolved rather to die than to famish?\n\nAll:\nResolved. resolved.\n\nFirst Citizen:\nFirst, you k'
"now Caius Marcius is chief enemy to the people.\n\nAll:\nWe know't, we know't.\n\nFirst Citizen:\nLet us ki"
"ll him, and we'll have corn at our own price.\nIs't a verdict?\n\nAll:\nNo more talking on't; let it be d"
'one: away, away!\n\nSecond Citizen:\nOne word, good citizens.\n\nFirst Citizen:\nWe are accounted poor citi'

For each sequence, duplicate and shift it to form the input and target text by using the map method to apply a simple function to each batch:

def split_input_target(chunk):
    input_text = chunk[:-1]
    target_text = chunk[1:]
    return input_text, target_text

dataset = sequences.map(split_input_target)

Print the first example's input and target values:

for input_example, target_example in  dataset.take(1):
  print ('Input data: ', repr(''.join(idx2char[input_example.numpy()])))
  print ('Target data:', repr(''.join(idx2char[target_example.numpy()])))
Input data:  'First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou'
Target data: 'irst Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou '

Each index of these vectors is processed as one time step. For the input at time step 0, the model receives the index for "F" and tries to predict the index for "i" as the next character. At the next time step, it does the same thing, but the RNN considers the previous step's context in addition to the current input character.

for i, (input_idx, target_idx) in enumerate(zip(input_example[:5], target_example[:5])):
    print("Step {:4d}".format(i))
    print("  input: {} ({:s})".format(input_idx, repr(idx2char[input_idx])))
    print("  expected output: {} ({:s})".format(target_idx, repr(idx2char[target_idx])))
Step    0
  input: 18 ('F')
  expected output: 47 ('i')
Step    1
  input: 47 ('i')
  expected output: 56 ('r')
Step    2
  input: 56 ('r')
  expected output: 57 ('s')
Step    3
  input: 57 ('s')
  expected output: 58 ('t')
Step    4
  input: 58 ('t')
  expected output: 1 (' ')

Create training batches

We used tf.data to split the text into manageable sequences. But before feeding this data into the model, we need to shuffle the data and pack it into batches.

# Batch size
BATCH_SIZE = 64

# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000

dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)

dataset
<BatchDataset shapes: ((64, 100), (64, 100)), types: (tf.int64, tf.int64)>

Build The Model

Use tf.keras.Sequential to define the model. For this simple example, three layers are used: tf.keras.layers.Embedding (the input layer, a trainable lookup table that maps each character index to a vector with embedding_dim dimensions), tf.keras.layers.LSTM (the RNN itself, with rnn_units units), and tf.keras.layers.Dense (the output layer, with vocab_size logits):

# Length of the vocabulary in chars
vocab_size = len(vocab)

# The embedding dimension
embedding_dim = 256

# Number of RNN units
rnn_units = 1024
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
  model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim,
                              batch_input_shape=[batch_size, None]),
    tf.keras.layers.LSTM(rnn_units,
                        return_sequences=True,
                        stateful=True,
                        recurrent_initializer='glorot_uniform'),
    tf.keras.layers.Dense(vocab_size)
  ])
  return model
model = build_model(
  vocab_size = len(vocab),
  embedding_dim=embedding_dim,
  rnn_units=rnn_units,
  batch_size=BATCH_SIZE)

For each character the model looks up the embedding, runs the LSTM one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:

A drawing of the data passing through the model
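
As a quick sanity check, the following sketch (not part of the tutorial's pipeline; the dummy batch of zeros is purely illustrative) pushes one character per sequence through the three layers and prints the shape after each stage:

# A dummy batch: one character index (0) per sequence, BATCH_SIZE sequences.
x = tf.zeros([BATCH_SIZE, 1], dtype=tf.int32)
for layer in model.layers:
  x = layer(x)
  print(layer.name, x.shape)
# Expected shapes: embedding (64, 1, 256) -> lstm (64, 1, 1024) -> dense (64, 1, 65)
model.reset_states()  # discard the LSTM state this dummy pass created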

Try the model

Now run the model to see that it behaves as expected.

First check the shape of the output:

for input_example_batch, target_example_batch in dataset.take(1):
  example_batch_predictions = model(input_example_batch)
  print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
(64, 100, 65) # (batch_size, sequence_length, vocab_size)

In the above example, the sequence length of the input is 100, but the model can be run on inputs of any length:

model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (64, None, 256)           16640     
_________________________________________________________________
lstm (LSTM)                  (64, None, 1024)          5246976   
_________________________________________________________________
dense (Dense)                (64, None, 65)            66625     
=================================================================
Total params: 5,330,241
Trainable params: 5,330,241
Non-trainable params: 0
_________________________________________________________________

To get actual predictions from the model, we need to sample from the output distribution to obtain concrete character indices. This distribution is defined by the logits over the character vocabulary.

Try it for the first example in the batch:

sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()

This gives us, at each timestep, a prediction of the next character index:

sampled_indices
array([34,  2,  1, 48, 15, 46, 54, 12, 50, 19, 22, 60, 50,  6, 61, 49, 28,
        2, 58, 13, 15, 16, 22, 45, 28, 58, 49, 26, 37, 60, 51,  0, 22, 27,
       39, 45, 56,  2, 29,  4,  3, 29, 37, 42, 63, 63, 31, 26, 14, 13, 30,
       27, 22, 49, 25,  3, 42, 27, 50, 16, 39, 35, 22, 51, 44, 64, 62,  9,
       43,  2, 18,  2, 57, 57, 25, 23, 20, 56, 52, 59, 61, 64, 34, 60, 16,
       28, 48,  1, 48, 32, 61, 49, 17,  8, 52, 52, 54, 36,  3, 52])

Decode these to see the text predicted by this untrained model:

print("Input: \n", repr("".join(idx2char[input_example_batch[0]])))
print()
print("Next Char Predictions: \n", repr("".join(idx2char[sampled_indices ])))
Input: 
 " be with you, sir: here comes my man.\n\nMERCUTIO:\nBut I'll be hanged, sir, if he wear your livery:\nMa"

Next Char Predictions: 
 'V! jChp?lGJvl,wkP!tACDJgPtkNYvm\nJOagr!Q&$QYdyySNBAROJkM$dOlDaWJmfzx3e!F!ssMKHrnuwzVvDPj jTwkE.nnpX$n'
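
As a point of comparison, a greedy alternative (a sketch, not part of the tutorial) would take the argmax of the logits instead of sampling; with a trained model this tends to get stuck in repetitive loops, which is one reason to sample from the categorical distribution instead:

greedy_indices = tf.argmax(example_batch_predictions[0], axis=-1).numpy()
print("Greedy predictions: \n", repr("".join(idx2char[greedy_indices])))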

Train the model

At this point the problem can be treated as a standard classification problem. Given the previous RNN state and the input at this time step, predict the class of the next character.

Attach an optimizer and a loss function

The standard tf.keras.losses.sparse_categorical_crossentropy loss function works in this case because it is applied across the last dimension of the predictions.

Because our model returns logits, we need to set the from_logits flag.

def loss(labels, logits):
  return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True)

example_batch_loss  = loss(target_example_batch, example_batch_predictions)
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("scalar_loss:      ", example_batch_loss.numpy().mean())
Prediction shape:  (64, 100, 65)  # (batch_size, sequence_length, vocab_size)
scalar_loss:       4.175206

Configure the training procedure using the tf.keras.Model.compile method. We'll use tf.keras.optimizers.Adam with default arguments and the loss function.

model.compile(optimizer='adam', loss=loss)

Configure checkpoints

Use a tf.keras.callbacks.ModelCheckpoint to ensure that checkpoints are saved during training:

# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")

checkpoint_callback=tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_prefix,
    save_weights_only=True)

Execute the training

To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training.

EPOCHS=10
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])
Epoch 1/10
172/172 [==============================] - 8s 47ms/step - loss: 2.5480
Epoch 2/10
172/172 [==============================] - 7s 41ms/step - loss: 1.8576
Epoch 3/10
172/172 [==============================] - 7s 40ms/step - loss: 1.6153
Epoch 4/10
172/172 [==============================] - 7s 40ms/step - loss: 1.4852
Epoch 5/10
172/172 [==============================] - 7s 40ms/step - loss: 1.4045
Epoch 6/10
172/172 [==============================] - 7s 40ms/step - loss: 1.3438
Epoch 7/10
172/172 [==============================] - 7s 41ms/step - loss: 1.2924
Epoch 8/10
172/172 [==============================] - 7s 40ms/step - loss: 1.2455
Epoch 9/10
172/172 [==============================] - 7s 41ms/step - loss: 1.2004
Epoch 10/10
172/172 [==============================] - 7s 41ms/step - loss: 1.1561

Generate text

Restore the latest checkpoint

To keep this prediction step simple, use a batch size of 1.

Because of the way the RNN state is passed from timestep to timestep, the model only accepts a fixed batch size once built.

To run the model with a different batch_size, we need to rebuild the model and restore the weights from the checkpoint.

tf.train.latest_checkpoint(checkpoint_dir)
'./training_checkpoints/ckpt_10'
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size=1)

model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))

model.build(tf.TensorShape([1, None]))
model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_1 (Embedding)      (1, None, 256)            16640     
_________________________________________________________________
lstm_1 (LSTM)                (1, None, 1024)           5246976   
_________________________________________________________________
dense_1 (Dense)              (1, None, 65)             66625     
=================================================================
Total params: 5,330,241
Trainable params: 5,330,241
Non-trainable params: 0
_________________________________________________________________

The prediction loop

The following code block generates the text:

  • It starts by choosing a start string, initializing the RNN state, and setting the number of characters to generate.

  • Get the prediction distribution of the next character using the start string and the RNN state.

  • Then, use a categorical distribution to calculate the index of the predicted character. Use this predicted character as our next input to the model.

  • The RNN state returned by the model is fed back into the model so that it now has more context, instead of only one character. After predicting the next character, the modified RNN states are again fed back into the model, which is how it builds up context from the previously predicted characters.

To generate text the model's output is fed back to the input

Looking at the generated text, you'll see that the model knows when to capitalize, makes paragraphs, and imitates a Shakespeare-like vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.

def generate_text(model, start_string):
  # Evaluation step (generating text using the learned model)

  # Number of characters to generate
  num_generate = 1000

  # Converting our start string to numbers (vectorizing)
  input_eval = [char2idx[s] for s in start_string]
  input_eval = tf.expand_dims(input_eval, 0)

  # Empty string to store our results
  text_generated = []

  # Low temperature results in more predictable text.
  # Higher temperature results in more surprising text.
  # Experiment to find the best setting.
  temperature = 1.0

  # Here batch size == 1
  model.reset_states()
  for i in range(num_generate):
      predictions = model(input_eval)
      # remove the batch dimension
      predictions = tf.squeeze(predictions, 0)

      # using a categorical distribution to predict the character returned by the model
      predictions = predictions / temperature
      predicted_id = tf.random.categorical(predictions, num_samples=1)[-1,0].numpy()

      # We pass the predicted character as the next input to the model
      # along with the previous hidden state
      input_eval = tf.expand_dims([predicted_id], 0)

      text_generated.append(idx2char[predicted_id])

  return (start_string + ''.join(text_generated))
print(generate_text(model, start_string=u"ROMEO: "))
ROMEO: ONTHUQEIZBUESPUJOSLALOVVQIEIKFGDVMJSBHCYxJWDUSCM-MEDYBVORJSSDYCQUDMVIVzOMbHUDSUBJMUKSVXZ$MESSDVSUY&CSSUKExBUMDYVNQIVKXVUSLPVJMSEMENMzSxU&DHQEQAQUKDIMVTIEKMXZZGMUQENV3RY$UMFQVUKENMBDQHXSHMIUSUNVDSUDNXJKQSXVVVDUKYJJVDKDVCHESUMUYESYISVHVHSSFUGMKGMVUEM$YVUpPSFR:'ZIJQIUHqGVJKDHBSjUNU3SKFpNqKUZYIXDSUDUVzT3gHR$M3EY$zHIVJQ3UBDVVEEMOFNVCIFBEGYCU;UGXEBBQE&CUzMDqYBIQSUKSKSMHPEvIFMBYBVBHEXMMU&xIDF?FZQUUEMNETEX?DBGVWUTASHUIA&SQSKXUESSMPHVPjBTR$UGCBHI3EUQS!UXH&XqKEXzSVAxMAQUVDkxEBHOSVSKJUVYqERA$GIVCHFUS&ZEDMjrXjOKQDAKEKEIEMFcQxOZDXBHBBWIKIEXKBJNRDU$FZVDBKMIH$PB!KEMYQPMOUCKVIExSZHQYVSQH3UKEMAENDZHMZNVQXQUFQUVFEZKQV$GSKHzSXzYSIFHUMJBADMUQUENVSZjOQUEK3VECFYHVQ3FDGVFZ$LEUGDYBVZOFHMIZQDDUNCCLRKVIOSESXDMUSEDBD&UCFQUEDLDBQLU3E$WuZYPOLMAHzWRJQHJHXENUVZVDSXVQHEHSSUAZDEK&$DMBZMHUOYBKSBI3X$EMQHQOESFVHFSMVHMSJUBTVEJUCJZSUjUqFZHEFVEDIZDDW$CKUKCNZFXFXUZzEZGMzXOKF!FSXAMSUESVESjIxXX$DUKEECjMAxjUGSSSPUZMVUZSxxqAQ3ULQKYWSVIECYVO?&VCxQEVSPLVABHEBSSNSKAXMxBXMASAKDKM$VCBHJKJMYADBYMSUEFxVN$ZxDFQUSQKAYMURKESN&ITDGDExNFK&S

The easiest thing you can do to improve the results is to train for longer (try EPOCHS=30).

You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions, as sketched below.
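
For example, here is one way to experiment with the temperature (a sketch of a small variant of generate_text above; the generate_text_with_temperature helper and its temperature argument are additions for illustration, not part of the original code):

def generate_text_with_temperature(model, start_string, temperature=1.0, num_generate=500):
  # Vectorize the start string, as in generate_text above.
  input_eval = tf.expand_dims([char2idx[s] for s in start_string], 0)
  text_generated = []
  model.reset_states()
  for _ in range(num_generate):
    # Divide the logits by the temperature before sampling:
    # values below 1.0 sharpen the distribution, values above 1.0 flatten it.
    predictions = tf.squeeze(model(input_eval), 0) / temperature
    predicted_id = tf.random.categorical(predictions, num_samples=1)[-1, 0].numpy()
    input_eval = tf.expand_dims([predicted_id], 0)
    text_generated.append(idx2char[predicted_id])
  return start_string + ''.join(text_generated)

# Lower temperature gives more predictable text; higher gives more surprising text.
print(generate_text_with_temperature(model, u"ROMEO: ", temperature=0.5))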

Advanced: Customized Training

The above training procedure is simple, but does not give you much control.

So now that you've seen how to run the model manually, let's unpack the training loop and implement it ourselves. This gives a starting point if, for example, you want to implement curriculum learning to help stabilize the model's open-loop output.

We will use tf.GradientTape to track the gradients. You can learn more about this approach by reading the eager execution guide.

The procedure works as follows:

  • First, initialize the RNN state. We do this by calling the tf.keras.Model.reset_states method.

  • Next, iterate over the dataset (batch by batch) and calculate the predictions associated with each.

  • Open a tf.GradientTape, and calculate the predictions and loss in that context.

  • Calculate the gradients of the loss with respect to the model variables using the tf.GradientTape.gradient method.

  • Finally, take a step downwards by using the optimizer's tf.keras.optimizers.Optimizer.apply_gradients method.

model = build_model(
  vocab_size = len(vocab),
  embedding_dim=embedding_dim,
  rnn_units=rnn_units,
  batch_size=BATCH_SIZE)
optimizer = tf.keras.optimizers.Adam()
@tf.function
def train_step(inp, target):
  with tf.GradientTape() as tape:
    predictions = model(inp)
    loss = tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(
            target, predictions, from_logits=True))
  grads = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))

  return loss
# Training loop
EPOCHS = 10

for epoch in range(EPOCHS):
  start = time.time()

  # initializing the hidden state at the start of every epoch
  # initially hidden is None
  hidden = model.reset_states()

  for (batch_n, (inp, target)) in enumerate(dataset):
    loss = train_step(inp, target)

    if batch_n % 100 == 0:
      template = 'Epoch {} Batch {} Loss {}'
      print(template.format(epoch+1, batch_n, loss))

  # saving (checkpoint) the model every 5 epochs
  if (epoch + 1) % 5 == 0:
    model.save_weights(checkpoint_prefix.format(epoch=epoch))

  print ('Epoch {} Loss {:.4f}'.format(epoch+1, loss))
  print ('Time taken for 1 epoch {} sec\n'.format(time.time() - start))

model.save_weights(checkpoint_prefix.format(epoch=epoch))
WARNING: Logging before flag parsing goes to stderr.
W0813 07:57:36.443305 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer
W0813 07:57:36.444629 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer.iter
W0813 07:57:36.445312 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer.beta_1
W0813 07:57:36.446024 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer.beta_2
W0813 07:57:36.446608 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer.decay
W0813 07:57:36.447133 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer.learning_rate
W0813 07:57:36.447662 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-0.embeddings
W0813 07:57:36.449571 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-2.kernel
W0813 07:57:36.450513 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-2.bias
W0813 07:57:36.451382 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.cell.kernel
W0813 07:57:36.451889 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.cell.recurrent_kernel
W0813 07:57:36.452367 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer's state 'm' for (root).layer_with_weights-1.cell.bias
W0813 07:57:36.452934 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-0.embeddings
W0813 07:57:36.453603 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-2.kernel
W0813 07:57:36.455451 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-2.bias
W0813 07:57:36.456562 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-1.cell.kernel
W0813 07:57:36.457501 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-1.cell.recurrent_kernel
W0813 07:57:36.458117 140696170657536 util.py:244] Unresolved object in checkpoint: (root).optimizer's state 'v' for (root).layer_with_weights-1.cell.bias
W0813 07:57:36.458719 140696170657536 util.py:252] A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/alpha/guide/checkpoints#loading_mechanics for details.

Epoch 1 Batch 0 Loss 4.174019813537598
Epoch 1 Batch 100 Loss 2.3775124549865723
Epoch 1 Loss 2.1027
Time taken for 1 epoch 9.158066034317017 sec

Epoch 2 Batch 0 Loss 2.12690806388855
Epoch 2 Batch 100 Loss 1.909305453300476
Epoch 2 Loss 1.7603
Time taken for 1 epoch 6.484481334686279 sec

Epoch 3 Batch 0 Loss 1.7526904344558716
Epoch 3 Batch 100 Loss 1.6817200183868408
Epoch 3 Loss 1.5793
Time taken for 1 epoch 6.36880898475647 sec

Epoch 4 Batch 0 Loss 1.5597143173217773
Epoch 4 Batch 100 Loss 1.549416184425354
Epoch 4 Loss 1.4773
Time taken for 1 epoch 6.525808572769165 sec

Epoch 5 Batch 0 Loss 1.4518569707870483
Epoch 5 Batch 100 Loss 1.4662234783172607
Epoch 5 Loss 1.4067
Time taken for 1 epoch 6.352639436721802 sec

Epoch 6 Batch 0 Loss 1.377824068069458
Epoch 6 Batch 100 Loss 1.400776982307434
Epoch 6 Loss 1.3490
Time taken for 1 epoch 6.326950550079346 sec

Epoch 7 Batch 0 Loss 1.3217206001281738
Epoch 7 Batch 100 Loss 1.3434251546859741
Epoch 7 Loss 1.2985
Time taken for 1 epoch 6.268807649612427 sec

Epoch 8 Batch 0 Loss 1.2727553844451904
Epoch 8 Batch 100 Loss 1.2905683517456055
Epoch 8 Loss 1.2490
Time taken for 1 epoch 6.359203577041626 sec

Epoch 9 Batch 0 Loss 1.2264424562454224
Epoch 9 Batch 100 Loss 1.2426477670669556
Epoch 9 Loss 1.2024
Time taken for 1 epoch 6.265423774719238 sec

Epoch 10 Batch 0 Loss 1.1840736865997314
Epoch 10 Batch 100 Loss 1.2026817798614502
Epoch 10 Loss 1.1639
Time taken for 1 epoch 6.480224132537842 sec