Dense implements the operation:
output = activation(dot(input, kernel) + bias)
where activation is the element-wise activation function
passed as the activation argument, kernel is a weights matrix
created by the layer, and bias is a bias vector created by the layer
(only applicable if use_bias is True). These are all attributes of
Dense.
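The operation above can be sketched in plain NumPy (an illustrative re-implementation, not the Keras code; the shapes and ReLU activation are hypothetical choices):

```python
import numpy as np

# Illustrative Dense forward pass. Hypothetical shapes:
# batch_size=4, input_dim=16, units=32.
rng = np.random.default_rng(0)
inputs = rng.standard_normal((4, 16))
kernel = rng.standard_normal((16, 32))  # weights matrix created by the layer
bias = np.zeros(32)                     # bias vector (only if use_bias is True)

def relu(x):
    # Element-wise activation function
    return np.maximum(x, 0.0)

# output = activation(dot(input, kernel) + bias)
output = relu(inputs @ kernel + bias)
print(output.shape)  # (4, 32)
```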
Note that layer attributes cannot be modified after the layer has been called
once (with the exception of the trainable attribute).
When the popular kwarg input_shape is passed, Keras will create
an input layer to insert before the current layer. This is treated as
equivalent to explicitly defining an InputLayer.
Example:
# Create a `Sequential` model and add a Dense layer as the first layer.
model = tf.keras.models.Sequential()
model.add(tf.keras.Input(shape=(16,)))
model.add(tf.keras.layers.Dense(32, activation='relu'))
# Now the model will take as input arrays of shape (None, 16)
# and output arrays of shape (None, 32).
# Note that after the first layer, you don't need to specify
# the size of the input anymore:
model.add(tf.keras.layers.Dense(32))
model.output_shape
(None, 32)
Args
units
Positive integer, dimensionality of the output space.
activation
Activation function to use.
If you don't specify anything, no activation is applied
(i.e. "linear" activation: a(x) = x).
use_bias
Boolean, whether the layer uses a bias vector.
kernel_initializer
Initializer for the kernel weights matrix.
bias_initializer
Initializer for the bias vector.
kernel_regularizer
Regularizer function applied to
the kernel weights matrix.
bias_regularizer
Regularizer function applied to the bias vector.
activity_regularizer
Regularizer function applied to
the output of the layer (its "activation").
kernel_constraint
Constraint function applied to
the kernel weights matrix.
bias_constraint
Constraint function applied to the bias vector.
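As a rough sketch of how the main constructor arguments fit together (this is a minimal NumPy toy, not the Keras implementation; the class name and lazy-build behavior are illustrative assumptions):

```python
import numpy as np

class TinyDense:
    """Minimal sketch of Dense semantics; NOT the Keras implementation."""

    def __init__(self, units, activation=None, use_bias=True):
        self.units = units
        # Default is "linear" activation: a(x) = x
        self.activation = activation if activation is not None else (lambda x: x)
        self.use_bias = use_bias
        self.kernel = None
        self.bias = None

    def __call__(self, inputs):
        input_dim = inputs.shape[-1]
        if self.kernel is None:
            # Build weights lazily on first call, like Keras layers do.
            # Glorot-uniform-style range, used here only for illustration.
            limit = np.sqrt(6.0 / (input_dim + self.units))
            self.kernel = np.random.uniform(-limit, limit, (input_dim, self.units))
            self.bias = np.zeros(self.units) if self.use_bias else None
        out = inputs @ self.kernel
        if self.use_bias:
            out = out + self.bias
        return self.activation(out)

layer = TinyDense(32, activation=lambda x: np.maximum(x, 0.0))
out = layer(np.ones((4, 16)))
print(out.shape)  # (4, 32)
```

Regularizers and constraints are omitted from the sketch; in Keras they are additional callables applied to the kernel and bias during training.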
Input shape
N-D tensor with shape: (batch_size, ..., input_dim).
The most common situation would be
a 2D input with shape (batch_size, input_dim).
Output shape
N-D tensor with shape: (batch_size, ..., units).
For instance, for a 2D input with shape (batch_size, input_dim),
the output would have shape (batch_size, units).
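The N-D shape rule can be checked with np.tensordot, which mirrors the contraction tf.tensordot performs for higher-rank inputs (the shapes below are hypothetical):

```python
import numpy as np

# Hypothetical rank-3 input: (batch_size, d0, input_dim) = (2, 5, 16).
rng = np.random.default_rng(1)
inputs = rng.standard_normal((2, 5, 16))
kernel = rng.standard_normal((16, 32))  # (input_dim, units)

# Dense contracts the last axis of the input with axis 0 of the kernel,
# i.e. the same contraction as tf.tensordot(inputs, kernel, axes=[[2], [0]]).
output = np.tensordot(inputs, kernel, axes=[[2], [0]])
print(output.shape)  # (2, 5, 32)
```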
[null,null,["Last updated 2023-10-06 UTC."],[],[],null,["# tf.keras.layers.Dense\n\n\u003cbr /\u003e\n\n|---------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/keras-team/keras/tree/v2.13.1/keras/layers/core/dense.py#L33-L301) |\n\nJust your regular densely-connected NN layer.\n\nInherits From: [`Layer`](../../../tf/keras/layers/Layer), [`Module`](../../../tf/Module)\n\n#### View aliases\n\n\n**Compat aliases for migration**\n\nSee\n[Migration guide](https://www.tensorflow.org/guide/migrate) for\nmore details.\n\n\\`tf.compat.v1.keras.layers.Dense\\`\n\n\u003cbr /\u003e\n\n tf.keras.layers.Dense(\n units,\n activation=None,\n use_bias=True,\n kernel_initializer='glorot_uniform',\n bias_initializer='zeros',\n kernel_regularizer=None,\n bias_regularizer=None,\n activity_regularizer=None,\n kernel_constraint=None,\n bias_constraint=None,\n **kwargs\n )\n\n`Dense` implements the operation:\n`output = activation(dot(input, kernel) + bias)`\nwhere `activation` is the element-wise activation function\npassed as the `activation` argument, `kernel` is a weights matrix\ncreated by the layer, and `bias` is a bias vector created by the layer\n(only applicable if `use_bias` is `True`). These are all attributes of\n`Dense`.\n| **Note:** If the input to the layer has a rank greater than 2, then `Dense` computes the dot product between the `inputs` and the `kernel` along the last axis of the `inputs` and axis 0 of the `kernel` (using [`tf.tensordot`](../../../tf/tensordot)). For example, if input has dimensions `(batch_size, d0, d1)`, then we create a `kernel` with shape `(d1, units)`, and the `kernel` operates along axis 2 of the `input`, on every sub-tensor of shape `(1, 1, d1)` (there are `batch_size * d0` such sub-tensors). 
The output in this case will have shape `(batch_size, d0, units)`.\n\nBesides, layer attributes cannot be modified after the layer has been called\nonce (except the `trainable` attribute).\nWhen a popular kwarg `input_shape` is passed, then keras will create\nan input layer to insert before the current layer. This can be treated\nequivalent to explicitly defining an `InputLayer`.\n\n#### Example:\n\n # Create a `Sequential` model and add a Dense layer as the first layer.\n model = tf.keras.models.Sequential()\n model.add(tf.keras.Input(shape=(16,)))\n model.add(tf.keras.layers.Dense(32, activation='relu'))\n # Now the model will take as input arrays of shape (None, 16)\n # and output arrays of shape (None, 32).\n # Note that after the first layer, you don't need to specify\n # the size of the input anymore:\n model.add(tf.keras.layers.Dense(32))\n model.output_shape\n (None, 32)\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|------------------------|----------------------------------------------------------------------------------------------------------------------------|\n| `units` | Positive integer, dimensionality of the output space. |\n| `activation` | Activation function to use. If you don't specify anything, no activation is applied (ie. \"linear\" activation: `a(x) = x`). |\n| `use_bias` | Boolean, whether the layer uses a bias vector. |\n| `kernel_initializer` | Initializer for the `kernel` weights matrix. |\n| `bias_initializer` | Initializer for the bias vector. |\n| `kernel_regularizer` | Regularizer function applied to the `kernel` weights matrix. |\n| `bias_regularizer` | Regularizer function applied to the bias vector. |\n| `activity_regularizer` | Regularizer function applied to the output of the layer (its \"activation\"). |\n| `kernel_constraint` | Constraint function applied to the `kernel` weights matrix. |\n| `bias_constraint` | Constraint function applied to the bias vector. 
|\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Input shape ----------- ||\n|---|---|\n| N-D tensor with shape: `(batch_size, ..., input_dim)`. The most common situation would be a 2D input with shape `(batch_size, input_dim)`. ||\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Output shape ------------ ||\n|---|---|\n| N-D tensor with shape: `(batch_size, ..., units)`. For instance, for a 2D input with shape `(batch_size, input_dim)`, the output would have shape `(batch_size, units)`. ||\n\n\u003cbr /\u003e"]]