tf.keras.layers.experimental.preprocessing.RandomZoom

Randomly zoom each image during training.

Inherits From: PreprocessingLayer, Layer, Module

Arguments
height_factor a float represented as a fraction of value, or a tuple of size 2 representing lower and upper bounds for zooming vertically. When represented as a single float, this value is used for both the upper and lower bound. A positive value means zooming out, while a negative value means zooming in. For instance, height_factor=(0.2, 0.3) results in an output zoomed out by a random amount in the range [20%, 30%]. height_factor=(-0.3, -0.2) results in an output zoomed in by a random amount in the range [20%, 30%].
width_factor a float represented as a fraction of value, or a tuple of size 2 representing lower and upper bounds for zooming horizontally. When represented as a single float, this value is used for both the upper and lower bound. For instance, width_factor=(0.2, 0.3) results in an output zoomed out by a random amount between 20% and 30%. width_factor=(-0.3, -0.2) results in an output zoomed in by a random amount between 20% and 30%. Defaults to None, i.e., the horizontal zoom matches the vertical zoom, preserving the aspect ratio.
fill_mode Points outside the boundaries of the input are filled according to the given mode (one of {'constant', 'reflect', 'wrap', 'nearest'}).

  • reflect: (d c b a | a b c d | d c b a) The input is extended by reflecting about the edge of the last pixel.
  • constant: (k k k k | a b c d | k k k k) The input is extended by filling all values beyond the edge with the same constant value k = 0.
  • wrap: (a b c d | a b c d | a b c d) The input is extended by wrapping around to the opposite edge.
  • nearest: (a a a a | a b c d | d d d d) The input is extended by the nearest pixel.
interpolation Interpolation mode. Supported values: "nearest", "bilinear".
seed Integer. Used to create a random seed.
fill_value a float representing the value to be filled outside the boundaries when fill_mode is "constant" (see the sketch after this argument list).
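
For instance, the following sketch (the factor, fill, and seed values here are illustrative choices, not defaults) zooms out by 20% to 30%, which exposes a border around the original content that fill_mode and fill_value then pad with zeros; training=True is passed so the random zoom is actually applied:

>>> import numpy as np
>>> import tensorflow as tf
>>> layer = tf.keras.layers.experimental.preprocessing.RandomZoom(
...     height_factor=(0.2, 0.3),  # positive factors zoom out by 20% to 30%
...     width_factor=None,         # match the height zoom, preserving the aspect ratio
...     fill_mode="constant",      # pad the exposed border with fill_value
...     fill_value=0.0,
...     interpolation="nearest",
...     seed=42)
>>> images = np.random.random((4, 64, 64, 3))
>>> layer(images, training=True).shape
TensorShape([4, 64, 64, 3])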

Example:

>>> input_img = np.random.random((32, 224, 224, 3))
>>> layer = tf.keras.layers.experimental.preprocessing.RandomZoom(.5, .2)
>>> out_img = layer(input_img)
>>> out_img.shape
TensorShape([32, 224, 224, 3])

Input shape: 4D tensor with shape: (samples, height, width, channels), data_format='channels_last'.

Output shape: 4D tensor with shape: (samples, height, width, channels), data_format='channels_last'.

Raises
ValueError if lower bound is not between [0, 1], or upper bound is negative.

Attributes
is_adapted Whether the layer has been fit to data already.
streaming Whether adapt can be called twice without resetting the state.
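
As a small check (assuming the inherited PreprocessingLayer defaults), a freshly constructed layer has not yet been fit to any data:

>>> import tensorflow as tf
>>> layer = tf.keras.layers.experimental.preprocessing.RandomZoom(0.2)
>>> layer.is_adapted
False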

Methods

adapt

View source

Fits the state of the preprocessing layer to the data being passed.

Arguments
data The data to train on. It can be passed either as a tf.data Dataset, or as a numpy array.
batch_size Integer or None. Number of samples per state update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).
steps Integer or None. Total number of steps (batches of samples). When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset and steps is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps argument. This argument is not supported with array inputs.
reset_state Optional argument specifying whether to clear the state of the layer at the start of the call to adapt, or whether to start from the existing state. This argument may not be relevant to all preprocessing layers: a subclass of PreprocessingLayer may choose to raise an error if reset_state is set to False.
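
RandomZoom keeps no data-dependent state, so there is nothing meaningful for it to adapt. As a hedged sketch of the inherited adapt workflow, a stateful preprocessing layer (Normalization, chosen here purely for illustration) shows the typical call:

>>> import numpy as np
>>> import tensorflow as tf
>>> norm = tf.keras.layers.experimental.preprocessing.Normalization()
>>> norm.adapt(np.array([[1.0], [2.0], [3.0]]), batch_size=2)
>>> norm.is_adapted
True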

compile

View source

Configures the layer for adapt.

Arguments
run_eagerly Bool. Defaults to False. If True, this layer's logic will not be wrapped in a tf.function. Recommended to leave this as None unless your layer cannot be run inside a tf.function.
steps_per_execution Int. Defaults to 1. The number of batches to run during each tf.function call. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs or small models with a large Python overhead.
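
A brief sketch of compile followed by adapt (again using a stateful layer purely for illustration, since these settings only affect how adapt iterates over the data):

>>> import numpy as np
>>> import tensorflow as tf
>>> norm = tf.keras.layers.experimental.preprocessing.Normalization()
>>> norm.compile(run_eagerly=True)  # run the adapt loop eagerly, e.g. for debugging
>>> norm.adapt(np.array([[1.0], [2.0], [3.0]]))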

finalize_state

View source

Finalize the statistics for the preprocessing layer.

This method is called at the end of adapt and handles any one-time operations that should occur after all data has been seen.

make_adapt_function

View source

Creates a function to execute one step of adapt.

This method can be overridden to support custom adapt logic. This method is called by PreprocessingLayer.adapt.

Typically, this method directly controls tf.function settings, and delegates the actual state update logic to PreprocessingLayer.update_state.

This function is cached the first time PreprocessingLayer.adapt is called. The cache is cleared whenever PreprocessingLayer.compile is called.

Returns
Function. The function created by this method should accept a tf.data.Iterator, retrieve a batch, and update the state of the layer.

merge_state

View source

Merge the statistics of multiple preprocessing layers.

This layer will contain the merged state.

Arguments
layers Layers whose statistics should be merged with the statistics of this layer.

reset_state

View source

Resets the statistics of the preprocessing layer.

update_state

View source

Accumulates statistics for the preprocessing layer.

Arguments
data A mini-batch of inputs to the layer.