tf.keras.layers.Dense(
    units, activation=None, use_bias=True, kernel_initializer='glorot_uniform',
    bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None,
    activity_regularizer=None, kernel_constraint=None, bias_constraint=None,
    **kwargs
)

Dense implements the operation:
output = activation(dot(input, kernel) + bias)
where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True).
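To make the formula concrete, here is a minimal sketch (not part of the original reference; shapes and values are chosen arbitrarily) that recomputes the layer's output by hand and checks it against the layer itself:

import numpy as np
import tensorflow as tf

# Build a Dense layer and run it once so its weights get created.
layer = tf.keras.layers.Dense(4, activation='relu')
x = tf.random.normal((2, 3))  # batch of 2 inputs with 3 features each
y = layer(x)

# Recompute output = activation(dot(input, kernel) + bias) manually.
manual = tf.nn.relu(tf.matmul(x, layer.kernel) + layer.bias)
np.testing.assert_allclose(y.numpy(), manual.numpy(), rtol=1e-6)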
Note: if the input to the layer has a rank greater than 2, then Dense computes the dot product between the inputs and the kernel along the last axis of the inputs and axis 0 of the kernel (using tf.tensordot). For example, if the input has dimensions (batch_size, d0, d1), then the layer creates a kernel with shape (d1, units), and the kernel operates along axis 2 of the input, on every sub-tensor of shape (1, 1, d1) (there are batch_size * d0 such sub-tensors). The output in this case will have shape (batch_size, d0, units).

Also note that layer attributes cannot be modified after the layer has been called once (with the exception of the trainable attribute).
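The higher-rank behavior described in the note can be verified with a short sketch like the following (again illustrative only; the shapes are arbitrary):

import numpy as np
import tensorflow as tf

layer = tf.keras.layers.Dense(5)
x = tf.random.normal((2, 7, 3))  # (batch_size, d0, d1)
y = layer(x)
print(layer.kernel.shape)  # (3, 5): the kernel is built against the last input axis
print(y.shape)             # (2, 7, 5): (batch_size, d0, units)

# The same contraction, spelled out with tf.tensordot.
manual = tf.tensordot(x, layer.kernel, axes=[[2], [0]]) + layer.bias
np.testing.assert_allclose(y.numpy(), manual.numpy(), rtol=1e-6)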
Example:

# Create a `Sequential` model and add a Dense layer as the first layer.
model = tf.keras.models.Sequential()
model.add(tf.keras.Input(shape=(16,)))
model.add(tf.keras.layers.Dense(32, activation='relu'))
# Now the model will take as input arrays of shape (None, 16)
# and output arrays of shape (None, 32).
# Note that after the first layer, you don't need to specify
# the size of the input anymore:
model.add(tf.keras.layers.Dense(32))
model.output_shape
(None, 32)
Arguments:

units: Positive integer, dimensionality of the output space.
activation: Activation function to use. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the kernel weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to the kernel weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to the output of the layer (its "activation").
kernel_constraint: Constraint function applied to the kernel weights matrix.
bias_constraint: Constraint function applied to the bias vector. (A sketch showing several of these arguments in use follows this list.)
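As a rough illustration (the argument values below are arbitrary choices, not recommendations from the reference), a layer combining several of these options might be configured like this:

import tensorflow as tf

layer = tf.keras.layers.Dense(
    units=64,
    activation='relu',
    use_bias=True,
    kernel_initializer='he_normal',                       # initializer by name
    bias_initializer=tf.keras.initializers.Zeros(),       # or as an object
    kernel_regularizer=tf.keras.regularizers.l2(1e-4),    # L2 penalty on weights
    activity_regularizer=tf.keras.regularizers.l1(1e-5),  # L1 penalty on outputs
    kernel_constraint=tf.keras.constraints.MaxNorm(3.0),  # cap each unit's incoming weight norm
)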
Input shape:
N-D tensor with shape: (batch_size, ..., input_dim).
The most common situation would be
a 2D input with shape (batch_size, input_dim).
Output shape:
N-D tensor with shape: (batch_size, ..., units).
For instance, for a 2D input with shape (batch_size, input_dim),
the output would have shape (batch_size, units).
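As a sanity check on these shapes (a hypothetical example, not from the reference): for input_dim = 16 and units = 8, the kernel has shape (16, 8) and the bias has shape (8,), giving 16 * 8 + 8 = 136 trainable parameters.

import tensorflow as tf

layer = tf.keras.layers.Dense(8)
layer.build((None, 16))  # build against a (batch_size, input_dim) input
print(layer.kernel.shape, layer.bias.shape)  # (16, 8) (8,)
print(layer.count_params())                  # 136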
[null,null,["Last updated 2020-10-01 UTC."],[],[],null,["# tf.keras.layers.Dense\n\n\u003cbr /\u003e\n\n|-------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------|\n| [TensorFlow 1 version](/versions/r1.15/api_docs/python/tf/keras/layers/Dense) | [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.3.0/tensorflow/python/keras/layers/core.py#L1067-L1233) |\n\nJust your regular densely-connected NN layer.\n\nInherits From: [`Layer`](../../../tf/keras/layers/Layer)\n\n#### View aliases\n\n\n**Compat aliases for migration**\n\nSee\n[Migration guide](https://www.tensorflow.org/guide/migrate) for\nmore details.\n\n[`tf.compat.v1.keras.layers.Dense`](/api_docs/python/tf/keras/layers/Dense)\n\n\u003cbr /\u003e\n\n tf.keras.layers.Dense(\n units, activation=None, use_bias=True, kernel_initializer='glorot_uniform',\n bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None,\n activity_regularizer=None, kernel_constraint=None, bias_constraint=None,\n **kwargs\n )\n\n`Dense` implements the operation:\n`output = activation(dot(input, kernel) + bias)`\nwhere `activation` is the element-wise activation function\npassed as the `activation` argument, `kernel` is a weights matrix\ncreated by the layer, and `bias` is a bias vector created by the layer\n(only applicable if `use_bias` is `True`).\n| **Note:** If the input to the layer has a rank greater than 2, then `Dense` computes the dot product between the `inputs` and the `kernel` along the last axis of the `inputs` and axis 1 of the `kernel` (using [`tf.tensordot`](../../../tf/tensordot)). For example, if input has dimensions `(batch_size, d0, d1)`, then we create a `kernel` with shape `(d1, units)`, and the `kernel` operates along axis 2 of the `input`, on every sub-tensor of shape `(1, 1, d1)` (there are `batch_size * d0` such sub-tensors). The output in this case will have shape `(batch_size, d0, units)`.\n\nBesides, layer attributes cannot be modified after the layer has been called\nonce (except the `trainable` attribute).\n\n#### Example:\n\n # Create a `Sequential` model and add a Dense layer as the first layer.\n model = tf.keras.models.Sequential()\n model.add(tf.keras.Input(shape=(16,)))\n model.add(tf.keras.layers.Dense(32, activation='relu'))\n # Now the model will take as input arrays of shape (None, 16)\n # and output arrays of shape (None, 32).\n # Note that after the first layer, you don't need to specify\n # the size of the input anymore:\n model.add(tf.keras.layers.Dense(32))\n model.output_shape\n (None, 32)\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Arguments --------- ||\n|------------------------|----------------------------------------------------------------------------------------------------------------------------|\n| `units` | Positive integer, dimensionality of the output space. |\n| `activation` | Activation function to use. If you don't specify anything, no activation is applied (ie. \"linear\" activation: `a(x) = x`). |\n| `use_bias` | Boolean, whether the layer uses a bias vector. |\n| `kernel_initializer` | Initializer for the `kernel` weights matrix. |\n| `bias_initializer` | Initializer for the bias vector. |\n| `kernel_regularizer` | Regularizer function applied to the `kernel` weights matrix. |\n| `bias_regularizer` | Regularizer function applied to the bias vector. 
|\n| `activity_regularizer` | Regularizer function applied to the output of the layer (its \"activation\"). |\n| `kernel_constraint` | Constraint function applied to the `kernel` weights matrix. |\n| `bias_constraint` | Constraint function applied to the bias vector. |\n\n\u003cbr /\u003e\n\n#### Input shape:\n\nN-D tensor with shape: `(batch_size, ..., input_dim)`.\nThe most common situation would be\na 2D input with shape `(batch_size, input_dim)`.\n\n#### Output shape:\n\nN-D tensor with shape: `(batch_size, ..., units)`.\nFor instance, for a 2D input with shape `(batch_size, input_dim)`,\nthe output would have shape `(batch_size, units)`."]]