# tf.contrib.layers.conv2d_in_plane
Performs the same in-plane convolution to each channel independently.
**Main aliases:** `tf.contrib.layers.convolution2d_in_plane`
```python
tf.contrib.layers.conv2d_in_plane(
    inputs, kernel_size, stride=1, padding='SAME', activation_fn=tf.nn.relu,
    normalizer_fn=None, normalizer_params=None,
    weights_initializer=initializers.xavier_initializer(), weights_regularizer=None,
    biases_initializer=tf.zeros_initializer(), biases_regularizer=None, reuse=None,
    variables_collections=None, outputs_collections=None, trainable=True, scope=None
)
```
This is useful for performing various simple channel-independent convolution
operations such as image gradients:
```python
# Note: the original docstring passed a `kernel` argument, which the function
# does not accept. A fixed kernel can instead be set through
# `weights_initializer`, with `activation_fn=None` for a linear output.
image = tf.constant(..., shape=(16, 240, 320, 3))
vert_gradients = layers.conv2d_in_plane(
    image, kernel_size=[2, 1], activation_fn=None,
    weights_initializer=tf.constant_initializer([1.0, -1.0]))
horz_gradients = layers.conv2d_in_plane(
    image, kernel_size=[1, 2], activation_fn=None,
    weights_initializer=tf.constant_initializer([1.0, -1.0]))
```
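To make the "same kernel applied to every channel independently" semantics concrete, here is a minimal NumPy sketch of the operation with `'VALID'` padding and stride 1. This is a hypothetical reimplementation for illustration, not the library code; `conv2d_in_plane_valid` is a name introduced here.

```python
import numpy as np

def conv2d_in_plane_valid(images, kernel):
    """Apply one 2-D kernel to each channel of each image independently
    (no cross-channel mixing), 'VALID' padding, stride 1."""
    kh, kw = kernel.shape
    n, h, w, c = images.shape
    out = np.zeros((n, h - kh + 1, w - kw + 1, c))
    # Accumulate shifted copies of the input weighted by kernel taps
    # (cross-correlation, as TF convolutions compute it).
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * images[:, i:i + h - kh + 1,
                                         j:j + w - kw + 1, :]
    return out

# Vertical image gradient: kernel [[1], [-1]], i.e. kernel_size=[2, 1].
img = np.arange(2 * 4 * 4 * 3, dtype=np.float64).reshape(2, 4, 4, 3)
vert = conv2d_in_plane_valid(img, np.array([[1.0], [-1.0]]))
print(vert.shape)  # (2, 3, 4, 3): height shrinks by kernel_height - 1
```

Each of the 3 channels is filtered with the same `[1, -1]` kernel, which is exactly the channel-independent behavior described above.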
| Args | |
|---|---|
| `inputs` | A 4-D tensor with dimensions `[batch_size, height, width, channels]`. |
| `kernel_size` | A list of length 2 holding the `[kernel_height, kernel_width]` of the kernel. Can be an int if both values are the same. |
| `stride` | A list of length 2, `[stride_height, stride_width]`. Can be an int if both strides are the same. Note that presently both strides must have the same value. |
| `padding` | The padding type to use, either `'SAME'` or `'VALID'`. |
| `activation_fn` | Activation function. The default value is a ReLU function. Explicitly set it to `None` to skip it and maintain a linear activation. |
| `normalizer_fn` | Normalization function to use instead of biases. If `normalizer_fn` is provided, then `biases_initializer` and `biases_regularizer` are ignored and biases are neither created nor added. Defaults to `None` for no normalizer function. |
| `normalizer_params` | Normalization function parameters. |
| `weights_initializer` | An initializer for the weights. |
| `weights_regularizer` | Optional regularizer for the weights. |
| `biases_initializer` | An initializer for the biases. If `None`, biases are skipped. |
| `biases_regularizer` | Optional regularizer for the biases. |
| `reuse` | Whether or not the layer and its variables should be reused. To be able to reuse the layer, `scope` must be given. |
| `variables_collections` | Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable. |
| `outputs_collections` | Collection to add the outputs to. |
| `trainable` | If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`). |
| `scope` | Optional scope for `variable_scope`. |

| Returns |
|---|
| A `Tensor` representing the output of the operation. |
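How the `padding` argument affects the output spatial size can be sketched with TensorFlow's standard rules (assumed here, not taken from this page): `'SAME'` gives `ceil(in_size / stride)`, while `'VALID'` gives `ceil((in_size - kernel_size + 1) / stride)`.

```python
import math

def out_size(in_size, kernel, stride, padding):
    """Output spatial size under TF's standard 'SAME'/'VALID' rules
    (a sketch; the library computes this internally)."""
    if padding == 'SAME':
        return math.ceil(in_size / stride)
    return math.ceil((in_size - kernel + 1) / stride)

# Height of the vertical-gradient example above (240 rows, kernel_height 2):
print(out_size(240, 2, 1, 'SAME'))   # 240
print(out_size(240, 2, 1, 'VALID'))  # 239
```

So with the default `padding='SAME'`, the gradient images keep the input's 240x320 spatial shape; with `'VALID'`, the filtered dimension shrinks by `kernel_size - 1`.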
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.