tf.image.stateless_sample_distorted_bounding_box
Generate a randomly distorted bounding box for an image deterministically.
tf.image.stateless_sample_distorted_bounding_box(
    image_size,
    bounding_boxes,
    seed,
    min_object_covered=0.1,
    aspect_ratio_range=None,
    area_range=None,
    max_attempts=None,
    use_image_if_no_bounding_boxes=None,
    name=None
)
Bounding box annotations are often supplied in addition to ground-truth labels in image recognition or object localization tasks. A common technique for training such a system is to randomly distort an image while preserving its content, i.e. data augmentation. This Op, given the same seed, deterministically outputs a randomly distorted localization of an object, i.e. a bounding box, given an image_size, bounding_boxes and a series of constraints.
The output of this Op is a single bounding box that may be used to crop the original image. The output is returned as 3 tensors: begin, size and bboxes. The first 2 tensors can be fed directly into tf.slice to crop the image. The latter may be supplied to tf.image.draw_bounding_boxes to visualize what the bounding box looks like.
Bounding boxes are supplied and returned as [y_min, x_min, y_max, x_max]. The bounding box coordinates are floats in [0.0, 1.0] relative to the width and the height of the underlying image.
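For example, a box given in pixel coordinates can be converted to this normalized layout by dividing by the image dimensions. The helper below is a minimal sketch and not part of the TensorFlow API:

import tensorflow as tf

def to_normalized_box(y_min_px, x_min_px, y_max_px, x_max_px, height, width):
    # Hypothetical helper: scale pixel coordinates into [0.0, 1.0] and pack
    # them as a single [batch, N, 4] box in [y_min, x_min, y_max, x_max] order.
    return tf.constant(
        [[[y_min_px / height, x_min_px / width,
           y_max_px / height, x_max_px / width]]],
        dtype=tf.float32)

# A 20x30-pixel box anchored at the top-left corner of a 100x200 image.
boxes = to_normalized_box(0, 0, 20, 30, height=100, width=200)  # shape [1, 1, 4]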
The output of this Op is guaranteed to be the same given the same seed and is independent of how many times the function is called, and independent of global seed settings (e.g. tf.random.set_seed).
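For instance, two calls with the same seed produce identical outputs even if the global seed is changed in between. A minimal sketch (the image size and box values are illustrative):

image_size = tf.constant([3, 3, 1])
boxes = tf.constant([0.0, 0.0, 1.0, 1.0], dtype=tf.float32, shape=[1, 1, 4])
out_a = tf.image.stateless_sample_distorted_bounding_box(
    image_size, bounding_boxes=boxes, seed=(1, 2))
tf.random.set_seed(42)  # changing the global seed does not affect the result
out_b = tf.image.stateless_sample_distorted_bounding_box(
    image_size, bounding_boxes=boxes, seed=(1, 2))
# out_a and out_b hold element-wise identical begin, size and bboxes tensors.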
Example usage:

image = np.array([[[1], [2], [3]], [[4], [5], [6]], [[7], [8], [9]]])
bbox = tf.constant(
    [0.0, 0.0, 1.0, 1.0], dtype=tf.float32, shape=[1, 1, 4])
seed = (1, 2)
# Generate a single distorted bounding box.
bbox_begin, bbox_size, bbox_draw = (
    tf.image.stateless_sample_distorted_bounding_box(
        tf.shape(image), bounding_boxes=bbox, seed=seed))
# Employ the bounding box to distort the image.
tf.slice(image, bbox_begin, bbox_size)
<tf.Tensor: shape=(2, 2, 1), dtype=int64, numpy=
array([[[1],
        [2]],
       [[4],
        [5]]])>
# Draw the bounding box in an image summary.
colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
tf.image.draw_bounding_boxes(
    tf.expand_dims(tf.cast(image, tf.float32), 0), bbox_draw, colors)
<tf.Tensor: shape=(1, 3, 3, 1), dtype=float32, numpy=
array([[[[1.],
         [1.],
         [3.]],
        [[1.],
         [1.],
         [6.]],
        [[7.],
         [8.],
         [9.]]]], dtype=float32)>
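The optional arguments tighten the sampling constraints. A sketch reusing image, bbox and seed from the example above; the values are illustrative, not recommended defaults:

bbox_begin, bbox_size, _ = tf.image.stateless_sample_distorted_bounding_box(
    tf.shape(image), bounding_boxes=bbox, seed=seed,
    min_object_covered=0.5,         # crop must cover at least half of a box
    aspect_ratio_range=[0.9, 1.1],  # near-square crops only
    area_range=[0.1, 1.0],          # crop spans 10% to 100% of the image area
    max_attempts=200)
cropped = tf.slice(image, bbox_begin, bbox_size)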
Note that if no bounding box information is available, setting use_image_if_no_bounding_boxes = true will assume there is a single implicit bounding box covering the whole image. If use_image_if_no_bounding_boxes is false and no bounding boxes are supplied, an error is raised.
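As a sketch of that fallback, reusing image and seed from the example above and assuming an empty [1, 0, 4] tensor stands in for "no boxes supplied":

no_boxes = tf.zeros([1, 0, 4], dtype=tf.float32)  # zero boxes for one image
begin, size, box = tf.image.stateless_sample_distorted_bounding_box(
    tf.shape(image), bounding_boxes=no_boxes, seed=seed,
    use_image_if_no_bounding_boxes=True)  # fall back to the whole image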
Args

image_size: A Tensor. Must be one of the following types: uint8, int8, int16, int32, int64. 1-D, containing [height, width, channels].
bounding_boxes: A Tensor of type float32. 3-D with shape [batch, N, 4] describing the N bounding boxes associated with the image.
seed: A shape [2] Tensor, the seed to the random number generator. Must have dtype int32 or int64. (When using XLA, only int32 is allowed.)
min_object_covered: A Tensor of type float32. Defaults to 0.1. The cropped area of the image must contain at least this fraction of any bounding box supplied. The value of this parameter should be non-negative. In the case of 0, the cropped area does not need to overlap any of the bounding boxes supplied.
aspect_ratio_range: An optional list of floats. Defaults to [0.75, 1.33]. The cropped area of the image must have an aspect ratio = width / height within this range.
area_range: An optional list of floats. Defaults to [0.05, 1]. The cropped area of the image must contain a fraction of the supplied image within this range.
max_attempts: An optional int. Defaults to 100. Number of attempts at generating a cropped region of the image of the specified constraints. After max_attempts failures, return the entire image.
use_image_if_no_bounding_boxes: An optional bool. Defaults to False. Controls behavior if no bounding boxes are supplied. If true, assume an implicit bounding box covering the whole input. If false, raise an error.
name: A name for the operation (optional).
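For example, the seed can be a plain tuple of two Python ints, or an explicit int32 tensor when XLA compatibility matters (a small sketch):

seed = (1, 2)                                   # tuple of two ints
seed_xla = tf.constant([1, 2], dtype=tf.int32)  # explicit int32, shape [2]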
Returns

A tuple of Tensor objects (begin, size, bboxes).

begin: A Tensor. Has the same type as image_size. 1-D, containing [offset_height, offset_width, 0]. Provide as input to tf.slice.
size: A Tensor. Has the same type as image_size. 1-D, containing [target_height, target_width, -1]. Provide as input to tf.slice.
bboxes: A Tensor of type float32. 3-D with shape [1, 1, 4] containing the distorted bounding box. Provide as input to tf.image.draw_bounding_boxes.