Text classification with TensorFlow Lite Model Maker


The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.

This notebook shows an end-to-end example that uses the Model Maker library to adapt and convert a commonly used text classification model to classify movie reviews on a mobile device. The text classification model classifies text into predefined categories. The inputs should be preprocessed text and the outputs are the probabilities of the categories. The dataset used in this tutorial consists of positive and negative movie reviews.


Install the required packages

To run this example, install the required packages, including the Model Maker package from the GitHub repo.

pip install tflite-model-maker

Import the required packages.

import numpy as np
import os

import tensorflow as tf
assert tf.__version__.startswith('2')

from tflite_model_maker import configs
from tflite_model_maker import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import text_classifier
from tflite_model_maker import TextClassifierDataLoader

Get the data path

Download the dataset for this tutorial.

data_dir = tf.keras.utils.get_file(
      fname='SST-2.zip',
      origin='https://dl.fbaipublicfiles.com/glue/data/SST-2.zip',
      extract=True)
data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2')
Downloading data from https://dl.fbaipublicfiles.com/glue/data/SST-2.zip
7446528/7439277 [==============================] - 0s 0us/step

You can also upload your own dataset to work through this tutorial. Upload your dataset by using the left sidebar in Colab.


If you prefer not to upload your dataset to the cloud, you can also run the library locally by following the guide.

End-to-End Workflow

This workflow consists of five steps as outlined below:

Step 1. Choose a model specification that represents a text classification model.

This tutorial uses MobileBERT as an example.

spec = model_spec.get('mobilebert_classifier')

Step 2. Load train and test data specific to an on-device ML app and preprocess the data according to a specific model_spec.

train_data = TextClassifierDataLoader.from_csv(
      filename=os.path.join(data_dir, 'train.tsv'),
      text_column='sentence',
      label_column='label',
      model_spec=spec,
      delimiter='\t',
      is_training=True)
test_data = TextClassifierDataLoader.from_csv(
      filename=os.path.join(data_dir, 'dev.tsv'),
      text_column='sentence',
      label_column='label',
      model_spec=spec,
      delimiter='\t',
      is_training=False)

Step 3. Customize the TensorFlow model.

model = text_classifier.create(train_data, model_spec=spec)

Step 4. Evaluate the model.

loss, acc = model.evaluate(test_data)

Step 5. Export as a TensorFlow Lite model with metadata.

Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress it by almost 4x with minimal performance degradation.

config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY])
config.experimental_new_quantizer = True
model.export(export_dir='mobilebert/', quantization_config=config)
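
The roughly 4x figure follows directly from the storage format: dynamic range quantization stores weights as 8-bit integers instead of 32-bit floats. A back-of-the-envelope check (the parameter count below is approximate):

```python
# Dynamic range quantization stores weights as int8 (1 byte each) instead of
# float32 (4 bytes each), so weight storage shrinks by about 4x.
num_weights = 25_000_000              # MobileBERT has roughly 25M parameters
float32_bytes = num_weights * 4
int8_bytes = num_weights * 1
ratio = float32_bytes / int8_bytes    # 4.0
```

Activations are still computed in floating point at inference time, which is why accuracy degrades only minimally.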

You can also download the model using the left sidebar in Colab.

After executing the 5 steps above, you can use the TensorFlow Lite model file in on-device applications with the BertNLClassifier API in the TensorFlow Lite Task Library.

The following sections walk through the example step by step to show more detail.

Choose a model_spec that Represents a Model for the Text Classifier

Each model_spec object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports MobileBERT, averaging word embeddings, and BERT-Base models.

Supported Model          | Name of model_spec       | Model Description
MobileBERT               | 'mobilebert_classifier'  | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results; suitable for on-device applications.
BERT-Base                | 'bert_classifier'        | Standard BERT model that is widely used in NLP tasks.
Averaging word embedding | 'average_word_vec'       | Averaging text word embeddings with ReLU activation.
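
The averaging approach in the last row can be sketched in a few lines of plain Python; the toy vocabulary and embedding values here are made up for illustration:

```python
# Toy sketch of the 'average_word_vec' idea: look up an embedding for each
# token, average them, then apply a ReLU activation. The real model learns
# the embeddings and a classification head on top of this average.
embeddings = {
    "charming": [0.5, -0.2],
    "bleak":    [-0.4, 0.3],
    "journey":  [0.1, 0.1],
}

def average_embedding(tokens):
    vecs = [embeddings[t] for t in tokens]
    return [sum(dims) / len(vecs) for dims in zip(*vecs)]

def relu(xs):
    return [max(0.0, x) for x in xs]

avg = average_embedding(["charming", "journey"])  # one vector per sentence
activated = relu(avg)
```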

This tutorial uses a smaller model, average_word_vec, that you can retrain multiple times to demonstrate the process.

spec = model_spec.get('average_word_vec')

Load Input Data Specific to an On-device ML App

The SST-2 (Stanford Sentiment Treebank) is one of the tasks in the GLUE benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for validation. The dataset has two classes: positive and negative movie reviews.

Download the archived version of the dataset and extract it.

data_dir = tf.keras.utils.get_file(
      fname='SST-2.zip',
      origin='https://dl.fbaipublicfiles.com/glue/data/SST-2.zip',
      extract=True)
data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2')

The SST-2 dataset has train.tsv for training and dev.tsv for validation. The files have the following format:

sentence label
it 's a charming and often affecting journey . 1
unflinchingly bleak and desperate 0

A positive review is labeled 1 and a negative review is labeled 0.
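
A minimal sketch of parsing this format: skip the header row, then split each line on the tab character (the sample string below reuses the two reviews shown above):

```python
# The SST-2 files are tab-separated: a header row, then one
# (sentence, label) pair per line.
sample = (
    "sentence\tlabel\n"
    "it 's a charming and often affecting journey .\t1\n"
    "unflinchingly bleak and desperate\t0\n"
)

rows = [line.split("\t") for line in sample.strip().split("\n")[1:]]
sentences = [text for text, label in rows]
labels = [int(label) for text, label in rows]
```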

Use the TextClassifierDataLoader.from_csv method to load the data.

train_data = TextClassifierDataLoader.from_csv(
      filename=os.path.join(data_dir, 'train.tsv'),
      text_column='sentence',
      label_column='label',
      model_spec=spec,
      delimiter='\t',
      is_training=True)
test_data = TextClassifierDataLoader.from_csv(
      filename=os.path.join(data_dir, 'dev.tsv'),
      text_column='sentence',
      label_column='label',
      model_spec=spec,
      delimiter='\t',
      is_training=False)

The Model Maker library also supports the from_folder() method to load data. It assumes that text files of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter specifies which subfolders to load.
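
For example, a from_folder()-compatible layout might look like this (the directory and file names below are created just for illustration):

```python
import os
import tempfile

# Build a tiny example of the expected layout: one subdirectory per class,
# one review per text file. The subfolder names become the class names.
root = tempfile.mkdtemp()
for label, text in [("pos", "a charming journey"),
                    ("neg", "bleak and desperate")]:
    os.makedirs(os.path.join(root, label), exist_ok=True)
    with open(os.path.join(root, label, "review0.txt"), "w") as f:
        f.write(text)

class_labels = sorted(os.listdir(root))  # ['neg', 'pos']
```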

Customize the TensorFlow Model

Create a custom text classifier model based on the loaded data.

model = text_classifier.create(train_data, model_spec=spec, epochs=10)

Examine the detailed model structure.

model.summary()

Evaluate the Customized Model

Evaluate the model with the test data and get its loss and accuracy.

loss, acc = model.evaluate(test_data)

Export as a TensorFlow Lite Model

Convert the existing model to TensorFlow Lite model format with metadata that you can later use in an on-device ML application. The label file and the vocab file are embedded in metadata. The default TFLite filename is model.tflite.

model.export(export_dir='average_word_vec/')

The TensorFlow Lite model file can be used in the text classification reference app using NLClassifier API in TensorFlow Lite Task Library.

The allowed export formats can be one or a list of the following:

  • ExportFormat.TFLITE
  • ExportFormat.LABEL
  • ExportFormat.VOCAB
  • ExportFormat.SAVED_MODEL

By default, it exports only the TensorFlow Lite model with metadata. You can also selectively export different files. For instance, export only the label file and vocab file as follows:

model.export(export_dir='average_word_vec/', export_format=[ExportFormat.LABEL, ExportFormat.VOCAB])

You can evaluate the TFLite model with the evaluate_tflite method to get its accuracy.

accuracy = model.evaluate_tflite('average_word_vec/model.tflite', test_data)

Advanced Usage

The create function is the driver function that the Model Maker library uses to create models. The model_spec parameter defines the model specification; the AverageWordVecModelSpec and BertClassifierModelSpec classes are currently supported. The create function consists of the following steps:

  1. Creates the model for the text classifier according to model_spec.
  2. Trains the classifier model. The default epochs and the default batch size are set by the default_training_epochs and default_batch_size variables in the model_spec object.
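
Those two steps can be sketched as a small driver function. The class and attribute names below mirror the description above but are illustrative stand-ins, not the actual Model Maker source:

```python
# Hedged sketch of what a create()-style driver does: build the model from
# the spec, then train it with the spec's defaults unless overridden.
class DummySpec:
    default_training_epochs = 2
    default_batch_size = 32
    def create_model(self):
        return DummyModel()

class DummyModel:
    def __init__(self):
        self.epochs_run = 0
    def fit(self, data, epochs, batch_size):
        self.epochs_run = epochs  # stand-in for actual training

def create(train_data, model_spec, epochs=None, batch_size=None):
    model = model_spec.create_model()          # step 1: build per model_spec
    model.fit(train_data,                      # step 2: train the classifier
              epochs=epochs or model_spec.default_training_epochs,
              batch_size=batch_size or model_spec.default_batch_size)
    return model

model = create(["review a", "review b"], DummySpec())
```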

This section covers advanced usage topics like adjusting the model and the training hyperparameters.

Adjust the model

You can adjust the model architecture, such as the wordvec_dim and seq_len variables in the AverageWordVecModelSpec class.

For example, you can train the model with a larger value of wordvec_dim. Note that you must construct a new model_spec if you modify the model.

new_model_spec = model_spec.AverageWordVecModelSpec(wordvec_dim=32)

Get the preprocessed data.

new_train_data = TextClassifierDataLoader.from_csv(
      filename=os.path.join(data_dir, 'train.tsv'),
      text_column='sentence',
      label_column='label',
      model_spec=new_model_spec,
      delimiter='\t',
      is_training=True)

Train the new model.

model = text_classifier.create(new_train_data, model_spec=new_model_spec)

You can also adjust the MobileBERT model.

The model parameters you can adjust are:

  • seq_len: Length of the sequence to feed into the model.
  • initializer_range: The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • trainable: Boolean that specifies whether the pre-trained layer is trainable.

The training pipeline parameters you can adjust are:

  • model_dir: The location of the model checkpoint files. If not set, a temporary directory will be used.
  • dropout_rate: The dropout rate.
  • learning_rate: The initial learning rate for the Adam optimizer.
  • tpu: TPU address to connect to.

For instance, you can set seq_len=256 (default is 128), which allows the model to classify longer text.

new_model_spec = model_spec.get('mobilebert_classifier')
new_model_spec.seq_len = 256
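
To see why seq_len matters, here is a minimal sketch of the fixed-length preprocessing that BERT-style models apply; the pad_or_truncate helper and the token ids are hypothetical:

```python
# Token id sequences are padded or truncated to a fixed length (seq_len)
# before being fed to the model; 0 is used as a stand-in pad id here.
def pad_or_truncate(token_ids, seq_len, pad_id=0):
    if len(token_ids) >= seq_len:
        return token_ids[:seq_len]          # longer text is cut off
    return token_ids + [pad_id] * (seq_len - len(token_ids))

short = pad_or_truncate([101, 2009, 102], 8)    # padded to length 8
long = pad_or_truncate(list(range(300)), 256)   # truncated to length 256
```

A larger seq_len keeps more of each review at the cost of more computation per example.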

Tune the training hyperparameters

You can also tune the training hyperparameters like epochs and batch_size that affect the model accuracy. For instance,

  • epochs: more epochs could achieve better accuracy, but may lead to overfitting.
  • batch_size: the number of samples to use in one training step.
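
As a quick sanity check on how these two interact, the number of optimizer steps per epoch is the training-set size divided by the batch size, rounded up (using the SST-2 example count from earlier and an assumed batch size of 32):

```python
import math

# 67,349 SST-2 training examples split into batches of 32.
num_examples = 67349
batch_size = 32
steps_per_epoch = math.ceil(num_examples / batch_size)
epochs = 20
total_steps = steps_per_epoch * epochs
```

Doubling the batch size halves the steps per epoch, so epochs and batch_size together determine how many gradient updates the model receives.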

For example, you can train with more epochs.

model = text_classifier.create(train_data, model_spec=spec, epochs=20)

Evaluate the newly retrained model with 20 training epochs.

loss, accuracy = model.evaluate(test_data)

Change the Model Architecture

You can change the model by changing the model_spec. The following shows how to change to the BERT-Base model.

Change the model_spec to the BERT-Base model for the text classifier.

spec = model_spec.get('bert_classifier')

The remaining steps are the same.