# tf.keras.layers.experimental.preprocessing.Normalization

Feature-wise normalization of the data.

Inherits From: `PreprocessingLayer`, `Layer`, `Module`

This layer will coerce its inputs into a distribution centered around 0 with standard deviation 1. It accomplishes this by precomputing the mean and variance of the data, and calling `(input - mean) / sqrt(var)` at runtime.

What happens in `adapt`: Compute mean and variance of the data and store them as the layer's weights. `adapt` should be called before `fit`, `evaluate`, or `predict`.
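The computation described above can be sketched in plain NumPy (a hand-rolled illustration of the documented math, not the layer's actual implementation). Reusing the data from the first example below, `adapt` corresponds to precomputing the statistics, and calling the layer corresponds to applying them:

```python
import numpy as np

# Data the layer would see during adapt().
adapt_data = np.array([[1.], [2.], [3.], [4.], [5.]], dtype=np.float32)

# adapt(): precompute mean and variance and store them as the layer's weights.
mean = adapt_data.mean(axis=0)  # 3.0
var = adapt_data.var(axis=0)    # 2.0

# __call__(): apply (input - mean) / sqrt(var) at runtime.
def normalize(x):
    return (x - mean) / np.sqrt(var)

print(normalize(np.array([[1.], [2.], [3.]], dtype=np.float32)))
```

This reproduces the output shown in the first example below: `[-1.4142135, -0.70710677, 0.]`.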

Arguments
`axis` Integer or tuple of integers, the axis or axes that should be "kept". These axes are not summed over when calculating the normalization statistics. By default the last axis, the `features` axis, is kept and any `space` or `time` axes are summed over. Each element in the kept axes is normalized independently. If `axis` is set to `None`, the layer will perform scalar normalization (dividing the input by a single scalar value). The `batch` axis, 0, is always summed over (`axis=0` is not allowed).
`mean` The mean value(s) to use during normalization. The passed value(s) will be broadcast to the shape of the kept axes above; if the value(s) cannot be broadcast, an error will be raised when this layer's build() method is called.
`variance` The variance value(s) to use during normalization. The passed value(s) will be broadcast to the shape of the kept axes above; if the value(s) cannot be broadcast, an error will be raised when this layer's `build()` method is called.
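To make the `axis` semantics concrete, here is a NumPy sketch (illustrative only, not the layer's internals): with input shape `(batch, time, features)` and the default `axis=-1`, statistics are computed over every axis except the last, so each feature gets its own mean and variance:

```python
import numpy as np

# Shape (batch=4, time=3, features=2); default axis=-1 keeps the features axis.
data = np.arange(24, dtype=np.float32).reshape(4, 3, 2)

kept_axis = -1
# Sum over every axis except the kept one (here: batch and time).
reduce_axes = tuple(i for i in range(data.ndim) if i != data.ndim + kept_axis)

mean = data.mean(axis=reduce_axes)  # shape (2,): one mean per feature
var = data.var(axis=reduce_axes)    # shape (2,): one variance per feature

normalized = (data - mean) / np.sqrt(var)
print(mean.shape, normalized.shape)  # (2,) (4, 3, 2)
```

After normalization, each feature column has mean 0 and standard deviation 1, independently of the others.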

#### Examples:

Calculate the mean and variance by analyzing the dataset in `adapt`.

```
adapt_data = np.array([[1.], [2.], [3.], [4.], [5.]], dtype=np.float32)
input_data = np.array([[1.], [2.], [3.]], np.float32)
layer = Normalization()
layer.adapt(adapt_data)
layer(input_data)
<tf.Tensor: shape=(3, 1), dtype=float32, numpy=
array([[-1.4142135 ],
       [-0.70710677],
       [ 0.        ]], dtype=float32)>
```

Pass the mean and variance directly.

```
input_data = np.array([[1.], [2.], [3.]], np.float32)
layer = Normalization(mean=3., variance=2.)
layer(input_data)
<tf.Tensor: shape=(3, 1), dtype=float32, numpy=
array([[-1.4142135 ],
       [-0.70710677],
       [ 0.        ]], dtype=float32)>
```

## Attributes

`is_adapted` Whether the layer has been fit to data already.
`streaming` Whether `adapt` can be called twice without resetting the state.

## Methods

### `adapt`

Fits the state of the preprocessing layer to the data being passed.

Arguments
`data` The data to train on. It can be passed either as a tf.data Dataset, or as a numpy array.
`batch_size` Integer or `None`. Number of samples per state update. If unspecified, `batch_size` will default to 32. Do not specify the `batch_size` if your data is in the form of datasets, generators, or `keras.utils.Sequence` instances (since they generate batches).
`steps` Integer or `None`. Total number of steps (batches of samples). When training with input tensors such as TensorFlow data tensors, the default `None` is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If `x` is a `tf.data` dataset and `steps` is `None`, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the `steps` argument. This argument is not supported with array inputs.
`reset_state` Optional argument specifying whether to clear the state of the layer at the start of the call to `adapt`, or whether to start from the existing state. This argument may not be relevant to all preprocessing layers: a subclass of `PreprocessingLayer` may choose to raise an error if `reset_state` is set to `False`.

### `compile`

Configures the layer for `adapt`.

Arguments
`run_eagerly` Bool. Defaults to `False`. If `True`, this layer's logic will not be wrapped in a `tf.function`. Recommended to leave this as `None` unless your layer cannot be run inside a `tf.function`.
`steps_per_execution` Int. Defaults to 1. The number of batches to run during each `tf.function` call. Running multiple batches inside a single `tf.function` call can greatly improve performance on TPUs or small models with a large Python overhead.

### `finalize_state`

Finalize the statistics for the preprocessing layer.

This method is called at the end of `adapt`. This method handles any one-time operations that should occur after all data has been seen.

### `make_adapt_function`

Creates a function to execute one step of `adapt`.

This method can be overridden to support custom adapt logic. This method is called by `PreprocessingLayer.adapt`.

Typically, this method directly controls `tf.function` settings, and delegates the actual state update logic to `PreprocessingLayer.update_state`.

This function is cached the first time `PreprocessingLayer.adapt` is called. The cache is cleared whenever `PreprocessingLayer.compile` is called.

Returns
Function. The function created by this method should accept a `tf.data.Iterator`, retrieve a batch, and update the state of the layer.

### `merge_state`

Merge the statistics of multiple preprocessing layers.

This layer will contain the merged state.

Arguments
`layers` Layers whose statistics should be merged with the statistics of this layer.
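The arithmetic behind merging normalization statistics can be sketched as pooling per-layer counts, means, and variances via the law of total variance. This is an illustration of the math, not the layer's internal implementation:

```python
import numpy as np

def merge_stats(stats):
    """Pool (count, mean, variance) triples into combined statistics."""
    total = sum(c for c, _, _ in stats)
    mean = sum(c * m for c, m, _ in stats) / total
    # Law of total variance: mean of variances plus variance of means.
    var = sum(c * (v + (m - mean) ** 2) for c, m, v in stats) / total
    return total, mean, var

a = np.array([1., 2., 3.])
b = np.array([4., 5., 6., 7.])
merged = merge_stats([(a.size, a.mean(), a.var()),
                      (b.size, b.mean(), b.var())])
both = np.concatenate([a, b])
print(merged)                              # (7, 4.0, 4.0)
print(both.size, both.mean(), both.var())  # 7 4.0 4.0
```

Pooling this way yields exactly the statistics that a single layer adapted on the concatenated data would compute.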

### `reset_state`

Resets the statistics of the preprocessing layer.

### `update_state`

Accumulates statistics for the preprocessing layer.

Arguments
`data` A mini-batch of inputs to the layer.
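One way such accumulation can work is to keep running totals updated once per mini-batch. The sketch below is a hypothetical stand-in for the layer's internal bookkeeping; it shows how per-batch updates can reproduce the mean and variance of the full dataset:

```python
import numpy as np

class RunningMoments:
    """Accumulate count, sum, and sum of squares across mini-batches."""

    def __init__(self):
        self.count = 0
        self.total = 0.0
        self.total_sq = 0.0

    def update_state(self, data):
        """Fold one mini-batch into the running totals."""
        data = np.asarray(data, dtype=np.float64)
        self.count += data.size
        self.total += data.sum()
        self.total_sq += (data ** 2).sum()

    @property
    def mean(self):
        return self.total / self.count

    @property
    def variance(self):
        # E[x^2] - (E[x])^2
        return self.total_sq / self.count - self.mean ** 2

acc = RunningMoments()
for batch in (np.array([1., 2.]), np.array([3., 4., 5.])):
    acc.update_state(batch)
print(acc.mean, acc.variance)  # 3.0 2.0, same as the stats of [1, 2, 3, 4, 5]
```

Because only totals are stored, the result is independent of how the data is split into batches, which is what lets `adapt` stream over a `tf.data` Dataset.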