TensorFlow 2 quickstart for beginners


This short introduction uses Keras to:

  1. Build a neural network that classifies images.
  2. Train this neural network.
  3. Evaluate the accuracy of the model.

This is a Google Colaboratory notebook file. Python programs are run directly in the browser—a great way to learn and use TensorFlow. To follow this tutorial, run the notebook in Google Colab by clicking the button at the top of this page.

  1. In Colab, connect to a Python runtime: At the top-right of the menu bar, select CONNECT.
  2. Run all the notebook code cells: Select Runtime > Run all.

Download and install the TensorFlow 2 package, then import TensorFlow into your program:

import tensorflow as tf
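
TensorFlow comes preinstalled in Colab; to confirm which version you are running, you can print it (an optional check, and the exact output will vary by environment):

print(tf.__version__)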

Load and prepare the MNIST dataset. Convert the samples from integers to floating-point numbers:

mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
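
Dividing by 255.0 scales the pixel values to the range [0, 1]. Note that NumPy's division also promotes the uint8 images to float64, which is why the model later casts its input to float32 (see the warning below). A quick sanity check (optional):

print(x_train.shape, x_train.dtype)  # (60000, 28, 28) float64
print(x_train.min(), x_train.max())  # 0.0 1.0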

Build the tf.keras.Sequential model by stacking layers. Choose an optimizer and loss function for training:

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10)
])
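
To inspect the layer stack and parameter counts, you can call model.summary() (optional). The Flatten layer turns each 28x28 image into a vector of 784 features, so the first Dense layer holds 784 * 128 + 128 = 100,480 weights and the output layer 128 * 10 + 10 = 1,290:

model.summary()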

For each example the model returns a vector of "logits" or "log-odds" scores, one for each class.

predictions = model(x_train[:1]).numpy()
predictions
WARNING:tensorflow:Layer flatten is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.


array([[ 0.3071351 ,  0.5739746 , -0.17401719, -0.3290041 ,  0.23404527,
        -0.13384083, -0.5603041 , -0.39104465, -0.25885695, -0.74251395]],
      dtype=float32)
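
As the warning explains, the normalized images are float64 (a side effect of the division above), and Keras casts them to float32 on the way in. To avoid the cast entirely, you could convert the arrays yourself right after normalizing them (an optional tweak, not required for this tutorial):

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')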

The tf.nn.softmax function converts these logits to "probabilities" for each class:

tf.nn.softmax(predictions).numpy()
array([[0.14574005, 0.19031185, 0.09007766, 0.07714489, 0.13546789,
        0.09377034, 0.06121457, 0.07250422, 0.0827507 , 0.05101784]],
      dtype=float32)
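
Because softmax exponentiates and normalizes the logits, each row of probabilities sums to 1 (a quick check):

tf.reduce_sum(tf.nn.softmax(predictions), axis=1).numpy()  # -> [1.]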

The losses.SparseCategoricalCrossentropy loss takes a vector of logits and an index of the true class, and returns a scalar loss for each example.

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

This loss is equal to the negative log probability of the true class: it is zero if the model is sure of the correct class.

This untrained model gives probabilities close to random (1/10 for each class), so the initial loss should be close to -tf.math.log(1/10) ~= 2.3.

loss_fn(y_train[:1], predictions).numpy()
2.3669066
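
You can reproduce this value by hand from the softmax probabilities above (a sketch using NumPy; y_train[0] is the true class index of the first example):

import numpy as np
probs = tf.nn.softmax(predictions).numpy()
-np.log(probs[0, y_train[0]])  # ~2.3669, matching loss_fn above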
model.compile(optimizer='adam',
              loss=loss_fn,
              metrics=['accuracy'])

The Model.fit method adjusts the model parameters to minimize the loss:

model.fit(x_train, y_train, epochs=5)
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 4s 58us/sample - loss: 0.2994 - accuracy: 0.9121
Epoch 2/5
60000/60000 [==============================] - 3s 54us/sample - loss: 0.1428 - accuracy: 0.9580
Epoch 3/5
60000/60000 [==============================] - 3s 54us/sample - loss: 0.1067 - accuracy: 0.9679
Epoch 4/5
60000/60000 [==============================] - 3s 54us/sample - loss: 0.0889 - accuracy: 0.9721
Epoch 5/5
60000/60000 [==============================] - 3s 54us/sample - loss: 0.0750 - accuracy: 0.9761

<tensorflow.python.keras.callbacks.History at 0x7f0ccf32e630>
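
Model.fit returns the History object shown above; if you assign it, its history attribute holds the per-epoch metrics (a minimal sketch; note that calling fit again continues training for another 5 epochs):

history = model.fit(x_train, y_train, epochs=5)
print(history.history['loss'])      # per-epoch training loss
print(history.history['accuracy'])  # per-epoch training accuracy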

The Model.evaluate method checks the model's performance, usually on a validation set or test set.

model.evaluate(x_test,  y_test, verbose=2)
10000/10000 - 1s - loss: 0.0749 - accuracy: 0.9767

[0.0749218150134664, 0.9767]

The image classifier is now trained to ~98% accuracy on this dataset. To learn more, read the TensorFlow tutorials.

If you want your model to return a probability, wrap the trained model and attach the softmax layer to it:

probability_model = tf.keras.Sequential([
  model,
  tf.keras.layers.Softmax()
])
probability_model(x_test[:5])
<tf.Tensor: shape=(5, 10), dtype=float32, numpy=
array([[7.57983543e-09, 1.50700696e-09, 6.43297608e-06, 1.38727250e-04,
        9.80789343e-13, 1.31976194e-07, 6.88661639e-14, 9.99849916e-01,
        1.18184182e-07, 4.69114502e-06],
       [2.14187157e-08, 1.16113086e-04, 9.99856591e-01, 2.70504352e-05,
        3.64252289e-15, 2.72878253e-07, 1.28652262e-08, 2.96095939e-12,
        4.82463236e-10, 1.07519673e-15],
       [1.52459307e-08, 9.99122679e-01, 1.36599818e-04, 1.54646132e-05,
        1.19527940e-05, 1.94418158e-06, 3.51991412e-06, 3.04778077e-04,
        4.02745878e-04, 2.75298760e-07],
       [9.99851108e-01, 1.13111387e-08, 1.71586307e-05, 6.79041591e-07,
        1.91252229e-06, 1.31301831e-05, 7.38285162e-05, 1.44639880e-05,
        1.07363309e-08, 2.76342562e-05],
       [6.47496709e-05, 2.97747889e-08, 3.50159273e-04, 2.82755309e-06,
        9.78404462e-01, 3.38332438e-06, 4.00425088e-05, 2.76409526e-04,
        1.08745389e-05, 2.08470579e-02]], dtype=float32)>
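
To turn these probabilities into class predictions, take the argmax over the last axis; for the five test images above this recovers the ground-truth labels (a quick check):

import numpy as np
np.argmax(probability_model(x_test[:5]), axis=1)  # array([7, 2, 1, 0, 4])
y_test[:5]                                        # array([7, 2, 1, 0, 4], dtype=uint8)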