# tf.contrib.model_pruning.masked_fully_connected

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/model_pruning/python/layers/layers.py#L258-L363)

Adds a sparse fully connected layer. The weight matrix is masked.

    tf.contrib.model_pruning.masked_fully_connected(
        inputs, num_outputs, activation_fn=tf.nn.relu, normalizer_fn=None,
        normalizer_params=None, weights_initializer=initializers.xavier_initializer(),
        weights_regularizer=None, biases_initializer=tf.zeros_initializer(),
        biases_regularizer=None, reuse=None, variables_collections=None,
        outputs_collections=None, trainable=True, scope=None
    )

`fully_connected` creates a variable called `weights`, representing a fully
connected weight matrix, which is multiplied by the `inputs` to produce a
`Tensor` of hidden units. If a `normalizer_fn` is provided (such as
`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is `None`
and a `biases_initializer` is provided, a `biases` variable is created and
added to the hidden units. Finally, if `activation_fn` is not `None`, it is
applied to the hidden units as well.

Note: if `inputs` has a rank greater than 2, then `inputs` is flattened prior
to the initial matrix multiply by `weights`.
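For orientation, a minimal usage sketch (assuming a TensorFlow 1.x environment
where `tf.contrib` is available; layer sizes are illustrative):

```python
import tensorflow as tf

# Batch of 784-dimensional feature vectors (sizes are illustrative).
inputs = tf.placeholder(tf.float32, shape=[None, 784])

# Masked dense layer: alongside `weights`, the model_pruning layers also
# create auxiliary mask/threshold variables used by the pruning ops.
hidden = tf.contrib.model_pruning.masked_fully_connected(
    inputs, num_outputs=256)

# A linear output layer: activation_fn=None skips the default ReLU.
logits = tf.contrib.model_pruning.masked_fully_connected(
    hidden, num_outputs=10, activation_fn=None)
```

The mask is applied element-wise to `weights` before the matrix multiply,
which is what makes the layer prunable.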
#### Args

* `inputs`: A tensor of at least rank 2 with a static value for the last
  dimension, e.g. `[batch_size, depth]` or `[None, None, None, channels]`.
* `num_outputs`: Integer or long, the number of output units in the layer.
* `activation_fn`: Activation function. The default value is a ReLU function.
  Explicitly set it to `None` to skip it and maintain a linear activation.
* `normalizer_fn`: Normalization function to use instead of `biases`. If
  `normalizer_fn` is provided, then `biases_initializer` and
  `biases_regularizer` are ignored and `biases` are neither created nor
  added. Defaults to `None` for no normalizer function (see the sketch after
  this list).
* `normalizer_params`: Normalization function parameters.
* `weights_initializer`: An initializer for the weights.
* `weights_regularizer`: Optional regularizer for the weights.
* `biases_initializer`: An initializer for the biases. If `None`, biases are
  skipped.
* `biases_regularizer`: Optional regularizer for the biases.
* `reuse`: Whether or not the layer and its variables should be reused. To be
  able to reuse the layer, `scope` must be given.
* `variables_collections`: Optional list of collections for all the variables,
  or a dictionary containing a different list of collections per variable.
* `outputs_collections`: Collection to which the outputs are added.
* `trainable`: If `True`, also add variables to the graph collection
  `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
* `scope`: Optional scope for `variable_scope`.
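As a sketch of the `normalizer_fn` behavior described above (hedged:
`tf.contrib.layers.batch_norm` is used here as one normalizer that fits this
signature), providing a normalizer suppresses the `biases` variable, and
`reuse` together with `scope` shares the layer's variables:

```python
import tensorflow as tf

inputs = tf.placeholder(tf.float32, shape=[None, 128])

# With normalizer_fn set, biases_initializer/biases_regularizer are ignored
# and no `biases` variable is created; batch_norm provides the shift instead.
net = tf.contrib.model_pruning.masked_fully_connected(
    inputs,
    num_outputs=64,
    normalizer_fn=tf.contrib.layers.batch_norm,
    normalizer_params={'is_training': True},
    scope='fc_bn')

# Reusing the layer's variables requires both `scope` and `reuse=True`.
net_eval = tf.contrib.model_pruning.masked_fully_connected(
    inputs,
    num_outputs=64,
    normalizer_fn=tf.contrib.layers.batch_norm,
    normalizer_params={'is_training': False},
    scope='fc_bn',
    reuse=True)
```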
#### Returns
The tensor variable representing the result of the series of operations.
#### Raises

* `ValueError`: If `inputs` has rank less than 2, or if its last dimension is
  not set.
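A minimal sketch of the failure mode (assuming graph construction in
TensorFlow 1.x): the layer must size its weight matrix from the static last
dimension of `inputs`, so an unknown last dimension fails at construction
time:

```python
import tensorflow as tf

# Last dimension unknown: the layer cannot size its weight matrix,
# so constructing it raises ValueError.
bad_inputs = tf.placeholder(tf.float32, shape=[None, None])
try:
    tf.contrib.model_pruning.masked_fully_connected(bad_inputs, num_outputs=8)
except ValueError as e:
    print('ValueError:', e)
```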
[null,null,["Last updated 2020-10-01 UTC."],[],[],null,["# tf.contrib.model_pruning.masked_fully_connected\n\n\u003cbr /\u003e\n\n|---------------------------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/model_pruning/python/layers/layers.py#L258-L363) |\n\nAdds a sparse fully connected layer. The weight matrix is masked. \n\n tf.contrib.model_pruning.masked_fully_connected(\n inputs, num_outputs, activation_fn=tf.nn.relu, normalizer_fn=None,\n normalizer_params=None, weights_initializer=initializers.xavier_initializer(),\n weights_regularizer=None, biases_initializer=tf.zeros_initializer(),\n biases_regularizer=None, reuse=None, variables_collections=None,\n outputs_collections=None, trainable=True, scope=None\n )\n\n`fully_connected` creates a variable called `weights`, representing a fully\nconnected weight matrix, which is multiplied by the `inputs` to produce a\n`Tensor` of hidden units. If a `normalizer_fn` is provided (such as\n`batch_norm`), it is then applied. Otherwise, if `normalizer_fn` is\nNone and a `biases_initializer` is provided then a `biases` variable would be\ncreated and added the hidden units. Finally, if `activation_fn` is not `None`,\nit is applied to the hidden units as well.\n| **Note:** that if `inputs` have a rank greater than 2, then `inputs` is flattened prior to the initial matrix multiply by `weights`.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|-------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `inputs` | A tensor of at least rank 2 and static value for the last dimension; i.e. `[batch_size, depth]`, `[None, None, None, channels]`. |\n| `num_outputs` | Integer or long, the number of output units in the layer. |\n| `activation_fn` | Activation function. The default value is a ReLU function. Explicitly set it to None to skip it and maintain a linear activation. |\n| `normalizer_fn` | Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. default set to None for no normalizer function |\n| `normalizer_params` | Normalization function parameters. |\n| `weights_initializer` | An initializer for the weights. |\n| `weights_regularizer` | Optional regularizer for the weights. |\n| `biases_initializer` | An initializer for the biases. If None skip biases. |\n| `biases_regularizer` | Optional regularizer for the biases. |\n| `reuse` | Whether or not the layer and its variables should be reused. To be able to reuse the layer scope must be given. |\n| `variables_collections` | Optional list of collections for all the variables or a dictionary containing a different list of collections per variable. |\n| `outputs_collections` | Collection to add the outputs. |\n| `trainable` | If `True` also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). |\n| `scope` | Optional scope for variable_scope. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ------- ||\n|---|---|\n| The tensor variable representing the result of the series of operations. 
||\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Raises ------ ||\n|--------------|----------------------------------------------------------------|\n| `ValueError` | If x has rank less than 2 or if its last dimension is not set. |\n\n\u003cbr /\u003e"]]