
Using text and neural network features


Welcome to the Intermediate Colab for TensorFlow Decision Forests (TF-DF). In this colab, you will learn about some more advanced capabilities of TF-DF, including how to deal with natural language features.

This colab assumes you are familiar with the concepts presented in the Beginner colab, notably the installation of TF-DF.

In this colab, you will:

  1. Train a Random Forest that consumes text features natively as categorical sets.

  2. Train a Random Forest that consumes text features using a TensorFlow Hub module. In this setting (transfer learning), the module is already pre-trained on a large text corpus.

  3. Train a Gradient Boosted Decision Trees (GBDT) model and a Neural Network together. The GBDT will consume the output of the Neural Network.


# Install TensorFlow Decision Forests
pip install tensorflow_decision_forests

Wurlitzer is needed to display the detailed training logs in Colabs (when using verbose=2 in the model constructor).

pip install wurlitzer

Import the necessary libraries.

import tensorflow_decision_forests as tfdf

import os
import numpy as np
import pandas as pd
import tensorflow as tf
import math
WARNING:root:TF Parameter Server distributed training not available (this is expected for the pre-build release).

The hidden code cell limits the output height in colab.

Use raw text as features

TF-DF can consume categorical-set features natively. Categorical-sets represent text features as bags of words (or n-grams).

For example: "The little blue dog" → {"the", "little", "blue", "dog"}
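The mapping from a sentence to a bag of words (or n-grams) can be sketched in plain Python. This is a simplified stand-in for illustration only (the lower-casing and the `bag_of_ngrams` helper are choices made here, not part of TF-DF):

```python
def bag_of_words(sentence):
    # Tokenize on whitespace and keep the unique, lower-cased tokens.
    return set(sentence.lower().split())

def bag_of_ngrams(sentence, n=2):
    # The set of consecutive n-grams of the token sequence.
    tokens = sentence.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

print(bag_of_words("The little blue dog"))   # set order may vary
print(bag_of_ngrams("The little blue dog"))
```

Note that the set representation discards token order and repetition; n-grams recover some local ordering.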

In this example, you will train a Random Forest on the Stanford Sentiment Treebank (SST) dataset. The objective of this dataset is to classify sentences as carrying a positive or negative sentiment. You will use the binary classification version of the dataset curated in TensorFlow Datasets.

# Install the nightly TensorFlow Datasets package
# TODO: Remove when the release package is fixed.
pip install tfds-nightly -U --quiet
# Load the dataset
import tensorflow_datasets as tfds
all_ds = tfds.load("glue/sst2")

# Display the first 3 examples of the test fold.
for example in all_ds["test"].take(3):
  print({attr_name: attr_tensor.numpy() for attr_name, attr_tensor in example.items()})
{'idx': 163, 'label': -1, 'sentence': b'not even the hanson brothers can save it'}
{'idx': 131, 'label': -1, 'sentence': b'strong setup and ambitious goals fade as the film descends into unsophisticated scare tactics and b-film thuggery .'}
{'idx': 1579, 'label': -1, 'sentence': b'too timid to bring a sense of closure to an ugly chapter of the twentieth century .'}
2022-04-19 11:18:39.134353: W tensorflow/core/kernels/data/] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset  will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.

The dataset is modified as follows:

  1. The raw labels are integers in {-1, 1}, but the learning algorithm expects positive integer labels e.g. {0, 1}. Therefore, the labels are transformed as follows: new_labels = (original_labels + 1) / 2.
  2. A batch size of 100 is applied to make reading the dataset more efficient.
  3. The sentence attribute needs to be tokenized, i.e. "hello world" -> ["hello", "world"].
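The label transform in step 1 can be checked directly. Integer division is used below because the raw labels are integers, which gives the same result as the formula above:

```python
def remap_label(original_label):
    # Maps the raw SST labels {-1, 1} to the {0, 1} expected by the learner.
    return (original_label + 1) // 2

print(remap_label(-1))  # 0
print(remap_label(1))   # 1
```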

Details: Some decision forest learning algorithms do not need a validation dataset (e.g. Random Forests) while others do (e.g. Gradient Boosted Trees in some cases). Since each learning algorithm under TF-DF can use validation data differently, TF-DF handles train/validation splits internally. As a result, when you have training and validation sets, they can always be concatenated as input to the learning algorithm.

def prepare_dataset(example):
  label = (example["label"] + 1) // 2
  return {"sentence" : tf.strings.split(example["sentence"])}, label

train_ds = all_ds["train"].batch(100).map(prepare_dataset)
test_ds = all_ds["validation"].batch(100).map(prepare_dataset)

Finally, train and evaluate the model as usual. TF-DF automatically detects multi-valued categorical features as categorical-sets.

%set_cell_height 300

# Specify the model.
model_1 = tfdf.keras.RandomForestModel(num_trees=30)

# Train the model.
model_1.fit(x=train_ds)
Use /tmp/tmpxyrhihug as temporary training directory
Starting reading the dataset
663/674 [============================>.] - ETA: 0s
Dataset read in 0:00:06.104445
Training model
Model trained in 0:03:22.830962
Compiling model
[INFO] Loading model from path
674/674 [==============================] - 209s 305ms/step
[INFO] Engine "RandomForestGeneric" built
[INFO] Use fast generic engine
WARNING:tensorflow:AutoGraph could not transform <function simple_ml_inference_op_with_handle at 0x7f2cc949ed40> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: could not get source code
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:AutoGraph could not transform <function simple_ml_inference_op_with_handle at 0x7f2cc949ed40> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: could not get source code
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <function simple_ml_inference_op_with_handle at 0x7f2cc949ed40> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: could not get source code
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
<keras.callbacks.History at 0x7f2cc95c7610>

In the previous logs, note that sentence is a CATEGORICAL_SET feature.

The model is evaluated as usual:

evaluation = model_1.evaluate(test_ds)

print(f"BinaryCrossentropyloss: {evaluation[0]}")
print(f"Accuracy: {evaluation[1]}")
9/9 [==============================] - 1s 5ms/step - loss: 0.0000e+00 - accuracy: 0.7638
BinaryCrossentropyloss: 0.0
Accuracy: 0.7637614607810974

The training logs look as follows:

import matplotlib.pyplot as plt

logs = model_1.make_inspector().training_logs()
plt.plot([log.num_trees for log in logs], [log.evaluation.accuracy for log in logs])
plt.xlabel("Number of trees")
plt.ylabel("Out-of-bag accuracy")


More trees would probably be beneficial (I am sure of it because I tried :p).

Use a pretrained text embedding

The previous example trained a Random Forest using raw text features. This example will use a pre-trained TF-Hub embedding to convert text features into a dense embedding, and then train a Random Forest on top of it. In this situation, the Random Forest will only "see" the numerical output of the embedding (i.e. it will not see the raw text).

In this experiment, you will use the Universal Sentence Encoder. Different pre-trained embeddings might be suited for different types of text (e.g. different languages, different tasks) but also for other types of features (e.g. images).

The embedding module can be applied in one of two places:

  1. During the dataset preparation.
  2. In the pre-processing stage of the model.

The second option is often preferable: Packaging the embedding in the model makes the model easier to use (and harder to misuse).
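A minimal sketch of option 2, with a hypothetical toy "embedding" (the sentence's character length) standing in for the TF-Hub module that the tutorial actually uses:

```python
import tensorflow as tf

sentence = tf.keras.layers.Input(shape=(), name="sentence", dtype=tf.string)

# Toy stand-in for hub.KerasLayer: each sentence becomes a 1-d numeric vector.
embedded_sentence = tf.keras.layers.Lambda(
    lambda s: tf.cast(tf.strings.length(s), tf.float32)[:, tf.newaxis])(sentence)

# The "embedding" is packaged inside the model itself, so callers pass raw
# strings and cannot accidentally feed differently-preprocessed data.
preprocessor = tf.keras.Model(
    inputs={"sentence": sentence},
    outputs={"embedded_sentence": embedded_sentence})

out = preprocessor({"sentence": tf.constant(["hello world"])})
```

With option 1 the same transformation would instead live in a `dataset.map(...)` call, and every consumer of the saved model would need to reproduce it.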

First install TF-Hub:

pip install --upgrade tensorflow-hub

Unlike before, you don't need to tokenize the text.

def prepare_dataset(example):
  label = (example["label"] + 1) // 2
  return {"sentence" : example["sentence"]}, label

train_ds = all_ds["train"].batch(100).map(prepare_dataset)
test_ds = all_ds["validation"].batch(100).map(prepare_dataset)
%set_cell_height 300

import tensorflow_hub as hub
# NNLM (https://tfhub.dev/google/nnlm-en-dim128/2) is also a good choice.
hub_url = "https://tfhub.dev/google/universal-sentence-encoder/4"
embedding = hub.KerasLayer(hub_url)

sentence = tf.keras.layers.Input(shape=(), name="sentence", dtype=tf.string)
embedded_sentence = embedding(sentence)

raw_inputs = {"sentence": sentence}
processed_inputs = {"embedded_sentence": embedded_sentence}
preprocessor = tf.keras.Model(inputs=raw_inputs, outputs=processed_inputs)

model_2 = tfdf.keras.RandomForestModel(preprocessing=preprocessor)

model_2.fit(x=train_ds)
Use /tmp/tmppflygn3h as temporary training directory
Starting reading the dataset
674/674 [==============================] - ETA: 0s
Dataset read in 0:00:27.292272
Training model
Model trained in 0:00:38.622818
Compiling model
[INFO] Loading model from path
674/674 [==============================] - 68s 91ms/step
[INFO] Engine "RandomForestOptPred" built
[INFO] Use fast generic engine
<keras.callbacks.History at 0x7f2bf428ad10>
evaluation = model_2.evaluate(test_ds)

print(f"BinaryCrossentropyloss: {evaluation[0]}")
print(f"Accuracy: {evaluation[1]}")
9/9 [==============================] - 2s 21ms/step - loss: 0.0000e+00 - accuracy: 0.7982
BinaryCrossentropyloss: 0.0
Accuracy: 0.7981651425361633

Note that categorical sets represent text differently from a dense embedding, so it may be useful to use both strategies jointly.

Train a decision tree and neural network together

The previous example used a pre-trained Neural Network (NN) to process the text features before passing them to the Random Forest. This example will train both the Neural Network and the Random Forest from scratch.

TF-DF's Decision Forests do not back-propagate gradients (although this is the subject of ongoing research). Therefore, the training happens in two stages:

  1. Train the neural network as a standard classification task:
example → [Normalize] → [Neural Network*] → [classification head] → prediction
*: Training.
  2. Replace the Neural Network's head (the last layer and the soft-max) with a Random Forest. Train the Random Forest as usual:
example → [Normalize] → [Neural Network] → [Random Forest*] → prediction
*: Training.
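The two-stage recipe can be sketched with NumPy on toy data. This is an illustration only: a least-squares linear map stands in for the trained Neural Network body, and a one-feature decision stump stands in for the Random Forest:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: the label is the sign of a linear score of two inputs.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(float)

# Stage 1: "train" a linear feature extractor (least squares stands in for
# gradient descent with a classification head).
w, *_ = np.linalg.lstsq(X, y, rcond=None)
hidden = X @ w  # The "last layer" features, now frozen.

# Stage 2: discard the head and fit a decision stump on the frozen features.
thresholds = np.unique(hidden)
best = max(thresholds, key=lambda t: np.mean((hidden > t) == (y == 1)))
accuracy = np.mean((hidden > best) == (y == 1))
print(f"stump accuracy on the frozen features: {accuracy:.2f}")
```

The key point mirrored here is that stage 2 never updates the stage-1 weights; the forest only consumes them.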

Prepare the dataset

This example uses the Palmer's Penguins dataset. See the Beginner colab for details.

First, download the raw data:

wget -q -O /tmp/penguins.csv

Load a dataset into a Pandas Dataframe.

dataset_df = pd.read_csv("/tmp/penguins.csv")

# Display the first 3 examples.
dataset_df.head(3)

Prepare the dataset for training.

label = "species"

# Replaces numerical NaN (representing missing values in Pandas Dataframe) with 0s.
# ...Neural Nets don't work well with numerical NaNs.
for col in dataset_df.columns:
  if dataset_df[col].dtype not in [str, object]:
    dataset_df[col] = dataset_df[col].fillna(0)
# Split the dataset into a training and testing dataset.

def split_dataset(dataset, test_ratio=0.30):
  """Splits a pandas dataframe in two."""
  test_indices = np.random.rand(len(dataset)) < test_ratio
  return dataset[~test_indices], dataset[test_indices]

train_ds_pd, test_ds_pd = split_dataset(dataset_df)
print("{} examples in training, {} examples for testing.".format(
    len(train_ds_pd), len(test_ds_pd)))

# Convert the datasets into tensorflow datasets
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_ds_pd, label=label)
test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_ds_pd, label=label)
237 examples in training, 107 examples for testing.
/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow_decision_forests/keras/ FutureWarning: In a future version of pandas all arguments of DataFrame.drop except for the argument 'labels' will be keyword-only
  features_dataframe = dataframe.drop(label, 1)

Build the models

Next create the neural network model using Keras' functional style.

To keep the example simple this model only uses two inputs.

input_1 = tf.keras.Input(shape=(1,), name="bill_length_mm", dtype="float")
input_2 = tf.keras.Input(shape=(1,), name="island", dtype="string")

nn_raw_inputs = [input_1, input_2]

Use preprocessing layers to convert the raw inputs to inputs appropriate for the neural network.

# Normalization.
Normalization = tf.keras.layers.Normalization
CategoryEncoding = tf.keras.layers.CategoryEncoding
StringLookup = tf.keras.layers.StringLookup

values = train_ds_pd["bill_length_mm"].values[:, tf.newaxis]
input_1_normalizer = Normalization()
input_1_normalizer.adapt(values)

values = train_ds_pd["island"].values
input_2_indexer = StringLookup(max_tokens=32)
input_2_indexer.adapt(values)

input_2_onehot = CategoryEncoding(output_mode="binary", max_tokens=32)

normalized_input_1 = input_1_normalizer(input_1)
normalized_input_2 = input_2_onehot(input_2_indexer(input_2))

nn_processed_inputs = [normalized_input_1, normalized_input_2]
WARNING:tensorflow:max_tokens is deprecated, please use num_tokens instead.
WARNING:tensorflow:max_tokens is deprecated, please use num_tokens instead.

Build the body of the neural network:

y = tf.keras.layers.Concatenate()(nn_processed_inputs)
y = tf.keras.layers.Dense(16, activation=tf.nn.relu6)(y)
last_layer = tf.keras.layers.Dense(8, activation=tf.nn.relu, name="last")(y)

# "3" for the three label classes. If it were a binary classification, the
# output dim would be 1.
classification_output = tf.keras.layers.Dense(3)(y)

nn_model = tf.keras.models.Model(nn_raw_inputs, classification_output)

This nn_model directly produces classification logits.
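Since the model outputs logits rather than probabilities, a softmax is needed to turn its predictions into class probabilities (and the training loss should be configured accordingly, e.g. with `from_logits=True`). A small NumPy check of the softmax:

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability; the result sums to 1.
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs)  # largest logit gets the largest probability
```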

Next create a decision forest model. It will operate on the high-level features that the neural network extracts in the last layer before the classification head.

# To reduce the risk of mistakes, group both the decision forest and the
# neural network in a single keras model.
nn_without_head = tf.keras.models.Model(inputs=nn_model.inputs, outputs=last_layer)
df_and_nn_model = tfdf.keras.RandomForestModel(preprocessing=nn_without_head)
Use /tmp/tmpcuvjgu90 as temporary training directory

Train and evaluate the models

The model will be trained in two stages. First train the neural network with its own classification head:

%set_cell_height 300

nn_model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

nn_model.fit(x=train_ds, validation_data=test_ds, epochs=10)
nn_model.summary()
Epoch 1/10
/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/keras/engine/ UserWarning: Input dict contained keys ['bill_depth_mm', 'flipper_length_mm', 'body_mass_g', 'sex', 'year'] which did not match any model input. They will be ignored by the model.
  inputs = self._flatten_to_reference_inputs(inputs)
1/1 [==============================] - 1s 745ms/step - loss: 0.9676 - accuracy: 0.4641 - val_loss: 0.9688 - val_accuracy: 0.3925
Epoch 2/10
1/1 [==============================] - 0s 22ms/step - loss: 0.9640 - accuracy: 0.4684 - val_loss: 0.9650 - val_accuracy: 0.3925
Epoch 3/10
1/1 [==============================] - 0s 20ms/step - loss: 0.9603 - accuracy: 0.4684 - val_loss: 0.9612 - val_accuracy: 0.3925
Epoch 4/10
1/1 [==============================] - 0s 19ms/step - loss: 0.9566 - accuracy: 0.4684 - val_loss: 0.9574 - val_accuracy: 0.3925
Epoch 5/10
1/1 [==============================] - 0s 20ms/step - loss: 0.9530 - accuracy: 0.4684 - val_loss: 0.9536 - val_accuracy: 0.3925
Epoch 6/10
1/1 [==============================] - 0s 20ms/step - loss: 0.9494 - accuracy: 0.4726 - val_loss: 0.9499 - val_accuracy: 0.3925
Epoch 7/10
1/1 [==============================] - 0s 21ms/step - loss: 0.9457 - accuracy: 0.4852 - val_loss: 0.9461 - val_accuracy: 0.4019
Epoch 8/10
1/1 [==============================] - 0s 21ms/step - loss: 0.9421 - accuracy: 0.4937 - val_loss: 0.9424 - val_accuracy: 0.4299
Epoch 9/10
1/1 [==============================] - 0s 19ms/step - loss: 0.9385 - accuracy: 0.5316 - val_loss: 0.9387 - val_accuracy: 0.4393
Epoch 10/10
1/1 [==============================] - 0s 20ms/step - loss: 0.9349 - accuracy: 0.5359 - val_loss: 0.9350 - val_accuracy: 0.4486
Model: "model_1"
 Layer (type)                   Output Shape         Param #     Connected to                     
 island (InputLayer)            [(None, 1)]          0           []                               
 bill_length_mm (InputLayer)    [(None, 1)]          0           []                               
 string_lookup (StringLookup)   (None, 1)            0           ['island[0][0]']                 
 normalization (Normalization)  (None, 1)            3           ['bill_length_mm[0][0]']         
 category_encoding (CategoryEnc  (None, 32)          0           ['string_lookup[0][0]']          
 concatenate (Concatenate)      (None, 33)           0           ['normalization[0][0]',          
 dense (Dense)                  (None, 16)           544         ['concatenate[0][0]']            
 dense_1 (Dense)                (None, 3)            51          ['dense[0][0]']                  
Total params: 598
Trainable params: 595
Non-trainable params: 3

The neural network layers are shared between the two models. So, now that the neural network is trained, the decision forest model will be fit to the trained output of the neural network layers:

%set_cell_height 300

df_and_nn_model.fit(x=train_ds)
Starting reading the dataset
1/1 [==============================] - ETA: 0s
Dataset read in 0:00:00.178299
Training model
Model trained in 0:00:00.031376
Compiling model
1/1 [==============================] - 0s 227ms/step
[INFO] Loading model from path
[INFO] Use fast generic engine
<keras.callbacks.History at 0x7f2bf6706c50>

Now evaluate the composed model:

print("Evaluation:", df_and_nn_model.evaluate(test_ds))
1/1 [==============================] - 0s 156ms/step - loss: 0.0000e+00 - accuracy: 0.9533
Evaluation: [0.0, 0.9532710313796997]

Compare it to the Neural Network alone:

print("Evaluation :", nn_model.evaluate(test_ds))
1/1 [==============================] - 0s 14ms/step - loss: 0.9350 - accuracy: 0.4486
Evaluation : [0.9350149035453796, 0.44859811663627625]