tf.contrib.layers.instance_norm
Functional interface for the instance normalization layer.
tf.contrib.layers.instance_norm(
inputs, center=True, scale=True, epsilon=1e-06, activation_fn=None,
param_initializers=None, reuse=None, variables_collections=None,
outputs_collections=None, trainable=True, data_format=DATA_FORMAT_NHWC,
scope=None
)
Reference: https://arxiv.org/abs/1607.08022
"Instance Normalization: The Missing Ingredient for Fast Stylization"
Dmitry Ulyanov, Andrea Vedaldi, Victor Lempitsky
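Instance normalization normalizes each sample and each channel independently over the spatial dimensions, then optionally applies a learned scale (`gamma`, see `scale`) and offset (`beta`, see `center`). A minimal NumPy sketch of the computation for an `NHWC` input (illustrative only, not the TF implementation; `gamma` and `beta` here stand in for the layer's trainable variables):

```python
import numpy as np

def instance_norm_ref(x, gamma=1.0, beta=0.0, epsilon=1e-6):
    """Reference instance norm for an NHWC input of shape (N, H, W, C).

    Each (sample, channel) pair is normalized over its own H x W values,
    matching the per-instance statistics described in the paper.
    """
    mean = x.mean(axis=(1, 2), keepdims=True)   # shape (N, 1, 1, C)
    var = x.var(axis=(1, 2), keepdims=True)     # shape (N, 1, 1, C)
    normalized = (x - mean) / np.sqrt(var + epsilon)
    return gamma * normalized + beta

x = np.random.RandomState(0).randn(2, 4, 4, 3)
y = instance_norm_ref(x)
# Each (sample, channel) slice now has ~zero mean and ~unit variance.
print(np.allclose(y.mean(axis=(1, 2)), 0.0, atol=1e-6))  # → True
```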
Args

`inputs`: A tensor with 2 or more dimensions, where the first dimension is `batch_size`. The normalization is over all but the last dimension if `data_format` is `NHWC` and over all but the second dimension if `data_format` is `NCHW`.

`center`: If `True`, add an offset of `beta` to the normalized tensor. If `False`, `beta` is ignored.

`scale`: If `True`, multiply by `gamma`. If `False`, `gamma` is not used. When the next layer is linear (also e.g. `nn.relu`), this can be disabled since the scaling can be done by the next layer.

`epsilon`: Small float added to the variance to avoid dividing by zero.

`activation_fn`: Activation function; defaults to `None` to skip it and maintain a linear activation.

`param_initializers`: Optional initializers for `beta`, `gamma`, the moving mean, and the moving variance.

`reuse`: Whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given.

`variables_collections`: Optional collections for the variables.

`outputs_collections`: Collections to add the outputs to.

`trainable`: If `True`, also add the variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).

`data_format`: A string. `NHWC` (default) and `NCHW` are supported.

`scope`: Optional scope for `variable_scope`.
Returns

A `Tensor` representing the output of the operation.

Raises

`ValueError`: If `data_format` is neither `NHWC` nor `NCHW`.

`ValueError`: If the rank of `inputs` is undefined.

`ValueError`: If the channels dimension of `inputs` is undefined.
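The `data_format` argument only changes which axis holds the channels; the statistics themselves are equivalent. A hedged NumPy sketch (not the TF implementation) showing that normalizing an `NCHW` tensor over its spatial axes `(2, 3)` matches normalizing the `NHWC` transpose of the same tensor over axes `(1, 2)`:

```python
import numpy as np

def normalize(x, axes, epsilon=1e-6):
    # Per-sample, per-channel normalization over the given spatial axes.
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    return (x - mean) / np.sqrt(var + epsilon)

x_nchw = np.random.RandomState(1).randn(2, 3, 4, 4)   # N, C, H, W
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))           # N, H, W, C

y_nchw = normalize(x_nchw, axes=(2, 3))   # spatial axes under NCHW
y_nhwc = normalize(x_nhwc, axes=(1, 2))   # spatial axes under NHWC

# Identical results up to the layout transpose.
print(np.allclose(np.transpose(y_nchw, (0, 2, 3, 1)), y_nhwc))  # → True
```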
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.