Integrate object detectors

Object detectors can identify which of a known set of objects might be present and provide information about their positions within a given image or video stream. An object detector is trained to detect the presence and location of multiple classes of objects. For example, a model might be trained with images that contain various pieces of fruit, along with a label that specifies the class of fruit they represent (e.g. an apple, a banana, or a strawberry), and data specifying where each object appears in the image. See the introduction to object detection for more information about object detectors.

Use the Task Library ObjectDetector API to deploy your custom object detectors or pretrained ones into your mobile apps.

Key features of the ObjectDetector API

  • Input image processing, including rotation, resizing, and color space conversion.

  • Label map locale.

  • Score threshold to filter results.

  • Top-k detection results.

  • Label allowlist and denylist.
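These features are configured through ObjectDetectorOptions when the detector is created. As a minimal sketch (the threshold, result count, and allowlist values below are illustrative, not defaults, and the exact builder method names may vary with the library version):

import java.util.Arrays;
import org.tensorflow.lite.task.vision.detector.ObjectDetector.ObjectDetectorOptions;

ObjectDetectorOptions options =
    ObjectDetectorOptions.builder()
        .setScoreThreshold(0.5f)                              // drop results scoring below 0.5
        .setMaxResults(3)                                     // keep only the top-3 detections
        .setLabelAllowList(Arrays.asList("apple", "banana"))  // restrict to these classes
        .setDisplayNamesLocale("en")                          // locale used for display names
        .build();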

Supported object detector models

The following models are guaranteed to be compatible with the ObjectDetector API:

  • The pretrained object detection models on TensorFlow Hub.

  • Models created by TensorFlow Lite Model Maker for object detection.

  • Custom models that meet the model compatibility requirements described below.

Run inference in Java

See the Object Detection reference app for an example of how to use ObjectDetector in an Android app.

Step 1: Import Gradle dependency and other settings

Copy the .tflite model file to the assets directory of the Android module where the model will be run. Specify that the file should not be compressed, and add the TensorFlow Lite library to the module’s build.gradle file:

android {
    // Other settings

    // Specify that the tflite file should not be compressed for the app APK
    aaptOptions {
        noCompress "tflite"
    }
}

dependencies {
    // Other dependencies

    // Import the Task Vision Library dependency
    implementation 'org.tensorflow:tensorflow-lite-task-vision:0.0.0-nightly'
}
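Note: from version 4.1 of the Android Gradle plugin onward, .tflite is added to the noCompress list by default, so the aaptOptions block above is no longer strictly required.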

Step 2: Use the model

import java.util.List;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.task.vision.detector.Detection;
import org.tensorflow.lite.task.vision.detector.ObjectDetector;
import org.tensorflow.lite.task.vision.detector.ObjectDetector.ObjectDetectorOptions;

// Initialization
ObjectDetectorOptions options = ObjectDetectorOptions.builder().setMaxResults(1).build();
ObjectDetector objectDetector = ObjectDetector.createFromFileAndOptions(context, modelFile, options);

// Run inference on a TensorImage, e.g. built with TensorImage.fromBitmap(bitmap)
List<Detection> results = objectDetector.detect(image);

See the source code and javadoc for more options to configure ObjectDetector.
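Each Detection pairs a bounding box (in pixel coordinates of the input image) with one or more scored categories. A minimal, illustrative sketch of consuming the results list from the snippet above:

import android.graphics.RectF;
import android.util.Log;
import org.tensorflow.lite.support.label.Category;

for (Detection detection : results) {
    RectF box = detection.getBoundingBox();           // pixel coordinates in the input image
    Category top = detection.getCategories().get(0);  // highest-scoring class for this box
    Log.d("ObjectDetector", String.format("%s (%.2f) at %s",
        top.getLabel(), top.getScore(), box.toShortString()));
}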

Run inference in C++

// Initialization: point the options at the TFLite model file on disk
ObjectDetectorOptions options;
options.mutable_model_file_with_metadata()->set_file_name(model_file);
// CreateFromOptions returns a StatusOr; production code should check the
// status instead of calling .value() directly
std::unique_ptr<ObjectDetector> object_detector = ObjectDetector::CreateFromOptions(options).value();

// Run inference on a FrameBuffer wrapping the input image
const DetectionResult result = object_detector->Detect(*frame_buffer).value();
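Here, frame_buffer is a FrameBuffer describing the input image. If you are working from raw image bytes, the task library ships helpers to construct one (for example, CreateFromRgbRawBuffer in frame_buffer_common_utils.h); the exact helpers available may depend on your library version.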

See the source code for more options to configure ObjectDetector.

Example results

Here is an example of the detection results of ssd mobilenet v1 from TensorFlow Hub.

[Input image: two dogs]

Results:
 Detection #0 (red):
  Box: (x: 355, y: 133, w: 190, h: 206)
  Top-1 class:
   index       : 17
   score       : 0.73828
   class name  : dog
 Detection #1 (green):
  Box: (x: 103, y: 15, w: 138, h: 369)
  Top-1 class:
   index       : 17
   score       : 0.73047
   class name  : dog

Render the bounding boxes onto the input image:

[Output image: the input photo with the two detected bounding boxes drawn in red and green]
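Rendering is left to the app; the API only returns coordinates. A minimal, illustrative Android sketch (the drawDetections helper is hypothetical, not part of the Task Library):

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import java.util.List;
import org.tensorflow.lite.task.vision.detector.Detection;

// Hypothetical helper: draws each detection's bounding box onto a
// mutable copy of the input bitmap.
static Bitmap drawDetections(Bitmap input, List<Detection> results) {
    Bitmap output = input.copy(Bitmap.Config.ARGB_8888, /* isMutable= */ true);
    Canvas canvas = new Canvas(output);
    Paint paint = new Paint();
    paint.setStyle(Paint.Style.STROKE);
    paint.setStrokeWidth(4f);
    paint.setColor(Color.RED);
    for (Detection detection : results) {
        canvas.drawRect(detection.getBoundingBox(), paint);
    }
    return output;
}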

Try out the simple CLI demo tool for ObjectDetector with your own model and test data.

Model compatibility requirements

The ObjectDetector API expects a TFLite model with mandatory TFLite Model Metadata.

Compatible object detector models must meet the following requirements:

  • Input image tensor: (kTfLiteUInt8/kTfLiteFloat32)

    • image input of size [batch x height x width x channels].
    • batch inference is not supported (batch is required to be 1).
    • only RGB inputs are supported (channels is required to be 3).
    • if type is kTfLiteFloat32, NormalizationOptions are required to be attached to the metadata for input normalization.
  • Output tensors must be the 4 outputs of a DetectionPostProcess op, i.e.:

    • Locations tensor (kTfLiteFloat32)

      • tensor of size [1 x num_results x 4], the inner array representing bounding boxes in the form [top, left, right, bottom].

      • BoundingBoxProperties are required to be attached to the metadata and must specify type=BOUNDARIES and coordinate_type=RATIO.

    • Classes tensor (kTfLiteFloat32)

      • tensor of size [1 x num_results], each value representing the integer index of a class.

      • optional (but recommended) label map(s) can be attached as AssociatedFile-s with type TENSOR_VALUE_LABELS, containing one label per line. The first such AssociatedFile (if any) is used to fill the class_name field of the results. The display_name field is filled from the AssociatedFile (if any) whose locale matches the display_names_locale field of the ObjectDetectorOptions used at creation time ("en" by default, i.e. English). If none of these are available, only the index field of the results will be filled.

    • Scores tensor (kTfLiteFloat32)

      • tensor of size [1 x num_results], each value representing the score of the detected object.

    • Number of detections tensor (kTfLiteFloat32)

      • integer num_results as a tensor of size [1].
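Since the Task Library reads this layout from the model's metadata, you can sanity-check a candidate model programmatically. A minimal, hypothetical sketch using the TensorFlow Lite Support MetadataExtractor (the looksLikeDetector helper is illustrative, not part of the library):

import java.io.IOException;
import java.nio.ByteBuffer;
import org.tensorflow.lite.support.metadata.MetadataExtractor;

// Hypothetical helper: checks for the ObjectDetector layout described
// above (metadata present, 1 input tensor, 4 output tensors).
static boolean looksLikeDetector(ByteBuffer modelBuffer) throws IOException {
    MetadataExtractor extractor = new MetadataExtractor(modelBuffer);
    return extractor.hasMetadata()
        && extractor.getInputTensorCount() == 1
        && extractor.getOutputTensorCount() == 4;
}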