tf.compat.v1.layers.Conv2DTranspose
Transposed 2D convolution layer (sometimes called 2D Deconvolution).
Inherits From: `Conv2DTranspose` (tf.keras.layers), `Conv2D` (tf.keras.layers), `Layer` (tf.compat.v1.layers), `Layer` (tf.keras.layers), `Module`
tf.compat.v1.layers.Conv2DTranspose(
filters, kernel_size, strides=(1, 1), padding='valid',
data_format='channels_last', activation=None, use_bias=True,
kernel_initializer=None, bias_initializer=tf.zeros_initializer(),
kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None,
kernel_constraint=None, bias_constraint=None, trainable=True, name=None,
**kwargs
)
The need for transposed convolutions generally arises
from the desire to use a transformation going in the opposite direction
of a normal convolution, i.e., from something that has the shape of the
output of some convolution to something that has the shape of its input
while maintaining a connectivity pattern that is compatible with
said convolution.
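A minimal usage sketch (not part of the original reference) follows; it assumes TF 2.x with v1 compatibility behavior enabled, and the input shape and layer hyperparameters are arbitrary illustrations:

    # Minimal sketch: build the legacy layer in graph mode and run it once.
    import numpy as np
    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()

    # NHWC input: a batch of 8x8 feature maps with 16 channels.
    inputs = tf.compat.v1.placeholder(tf.float32, shape=(None, 8, 8, 16))

    # Upsample spatially by a factor of 2 with a 3x3 transposed kernel.
    deconv = tf.compat.v1.layers.Conv2DTranspose(
        filters=32, kernel_size=3, strides=2, padding='same',
        activation=tf.nn.relu)
    outputs = deconv(inputs)

    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.global_variables_initializer())
        batch = np.zeros((4, 8, 8, 16), dtype=np.float32)
        print(sess.run(outputs, feed_dict={inputs: batch}).shape)  # (4, 16, 16, 32)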
| Args | |
|------|---|
| `filters` | Integer, the dimensionality of the output space (i.e. the number of filters in the convolution). |
| `kernel_size` | A tuple or list of 2 positive integers specifying the spatial dimensions of the filters. Can be a single integer to specify the same value for all spatial dimensions. |
| `strides` | A tuple or list of 2 positive integers specifying the strides of the convolution. Can be a single integer to specify the same value for all spatial dimensions. |
| `padding` | One of `"valid"` or `"same"` (case-insensitive). `"valid"` means no padding. `"same"` results in padding evenly to the left/right or up/down of the input such that the output has the same height/width dimension as the input. |
| `data_format` | A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, height, width, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, height, width)`. |
| `activation` | Activation function. Set it to `None` to maintain a linear activation. |
| `use_bias` | Boolean, whether the layer uses a bias. |
| `kernel_initializer` | An initializer for the convolution kernel. |
| `bias_initializer` | An initializer for the bias vector. If `None`, the default initializer will be used. |
| `kernel_regularizer` | Optional regularizer for the convolution kernel. |
| `bias_regularizer` | Optional regularizer for the bias vector. |
| `activity_regularizer` | Optional regularizer function for the output. |
| `kernel_constraint` | Optional projection function to be applied to the kernel after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training. |
| `bias_constraint` | Optional projection function to be applied to the bias after being updated by an `Optimizer`. |
| `trainable` | Boolean, if `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`). |
| `name` | A string, the name of the layer. |
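The `padding` and `strides` arguments above together determine the output spatial size. As a rough illustration (not part of the original reference; shapes assume the default configuration under TF 2.x v1-compat behavior), `'same'` padding upsamples to `input * stride`, while `'valid'` padding in this particular case gives `(8 - 1) * 2 + 3 = 17`:

    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()

    x = tf.compat.v1.placeholder(tf.float32, shape=(None, 8, 8, 16))

    same = tf.compat.v1.layers.Conv2DTranspose(
        filters=1, kernel_size=3, strides=2, padding='same')(x)
    valid = tf.compat.v1.layers.Conv2DTranspose(
        filters=1, kernel_size=3, strides=2, padding='valid')(x)

    print(same.shape)   # (None, 16, 16, 1):  8 * 2
    print(valid.shape)  # (None, 17, 17, 1): (8 - 1) * 2 + 3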
| Attributes | |
|------------|---|
| `graph` | |
| `scope_name` | |