Just your regular densely-connected NN layer.
Used in the guide:
- Better performance with tf.function and AutoGraph
- Distributed training with TensorFlow
- Eager execution
- Keras custom callbacks
- Keras overview
- Migrate your TensorFlow 1 code to TensorFlow 2
- Recurrent Neural Networks (RNN) with Keras
- Save and serialize models with Keras
- The Keras functional API in TensorFlow
- Train and evaluate with Keras
- Training checkpoints
- Use a GPU
- Writing custom layers and models with Keras
- tf.data: Build TensorFlow input pipelines
Used in the tutorials:
- Basic classification: Predict an image of clothing
- Basic regression: Predict fuel efficiency
- Classification on imbalanced data
- Classify structured data with feature columns
- Convolutional Neural Network (CNN)
- Convolutional Variational Autoencoder
- Create an Estimator from a Keras model
- Custom layers
- Custom training: walkthrough
- Deep Convolutional Generative Adversarial Network
- Distributed training with Keras
- Explore overfit and underfit
- Image captioning with visual attention
- Image classification
- Load CSV data
- Load NumPy data
- Load a pandas.DataFrame
- Load text
- Multi-worker training with Estimator
- Multi-worker training with Keras
- Neural machine translation with attention
- Save and load a model using a distribution strategy
- Save and load models
- TensorFlow 2.0 quickstart for beginners
- TensorFlow 2.0 quickstart for experts
- Text classification with TensorFlow Hub: Movie reviews
- Text classification with an RNN
- Text classification with preprocessed text: Movie reviews
- Text generation with an RNN
- Time series forecasting
- Transfer learning with TensorFlow Hub
- Transformer model for language understanding
- Word embeddings
Dense implements the operation:
output = activation(dot(input, kernel) + bias)
where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True).
Example:

```python
# as first layer in a sequential model:
model = Sequential()
model.add(Dense(32, input_shape=(16,)))
# now the model will take as input arrays of shape (*, 16)
# and output arrays of shape (*, 32)

# after the first layer, you don't need to specify
# the size of the input anymore:
model.add(Dense(32))
```
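The operation above can be sketched in plain NumPy to make the shapes concrete. This is an illustration of the math, not the layer's actual implementation; the `dense_forward` helper and the random weights are assumptions for the example.

```python
import numpy as np

def dense_forward(inputs, kernel, bias, activation=None):
    # Sketch of the Dense computation: activation(dot(inputs, kernel) + bias)
    outputs = inputs @ kernel + bias
    if activation is not None:
        outputs = activation(outputs)
    return outputs

# Hypothetical shapes: batch_size=2, input_dim=16, units=32
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 16))
kernel = rng.standard_normal((16, 32))  # (input_dim, units), created by the layer
bias = np.zeros(32)                     # (units,), created by the layer
relu = lambda t: np.maximum(t, 0.0)     # element-wise activation

y = dense_forward(x, kernel, bias, activation=relu)
print(y.shape)  # (2, 32)
```

With `activation=None` the layer is purely linear, which matches the "linear" default described under the activation argument below.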
Arguments:
units: Positive integer, dimensionality of the output space.
activation: Activation function to use. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the kernel weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to the kernel weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to the output of the layer (its "activation").
kernel_constraint: Constraint function applied to the kernel weights matrix.
bias_constraint: Constraint function applied to the bias vector.
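A short sketch of how these arguments combine when constructing a layer; the specific choices here (`he_normal`, an L2 penalty of 1e-4, 64 units) are illustrative, not recommendations.

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(
    units=64,
    activation="relu",
    use_bias=True,
    kernel_initializer="he_normal",
    kernel_regularizer=tf.keras.regularizers.l2(1e-4),
)

# Calling the layer builds its weights from the input's last dimension:
y = layer(tf.zeros((8, 16)))  # kernel: (16, 64), bias: (64,)
print(layer.kernel.shape, y.shape)
```

Initializers, regularizers, and constraints can be passed either as string identifiers or as instances from tf.keras.initializers, tf.keras.regularizers, and tf.keras.constraints.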
Input shape:
N-D tensor with shape: (batch_size, ..., input_dim). The most common situation would be a 2D input with shape (batch_size, input_dim).
Output shape:
N-D tensor with shape: (batch_size, ..., units). For instance, for a 2D input with shape (batch_size, input_dim), the output would have shape (batch_size, units).
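For inputs with rank greater than 2, the kernel is applied along the last axis while all leading axes are preserved. A small sketch with assumed shapes (batch_size=4, a middle axis of 10, input_dim=16, units=32):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(32)

# 3-D input: Dense transforms only the last axis
y = layer(tf.zeros((4, 10, 16)))  # (batch_size, ..., input_dim)
print(y.shape)                    # (batch_size, ..., units) -> (4, 10, 32)
```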
```python
__init__(
    units,
    activation=None,
    use_bias=True,
    kernel_initializer='glorot_uniform',
    bias_initializer='zeros',
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    bias_constraint=None,
    **kwargs
)
```