tf.keras.layers.Dropout
Applies Dropout to the input.
Inherits From: `Layer`, `Module`
tf.keras.layers.Dropout(
rate, noise_shape=None, seed=None, **kwargs
)
The Dropout layer randomly sets input units to 0 with a frequency of `rate`
at each step during training time, which helps prevent overfitting.
Inputs not set to 0 are scaled up by 1/(1 - rate) so that the expected sum over
all inputs is unchanged. For example, with `rate=0.2` each retained input is
multiplied by 1/(1 - 0.2) = 1.25, which is why 1.0 and 2.0 become 1.25 and 2.5
in the example below.
Note that the Dropout layer only applies when `training` is set to True,
so that no values are dropped during inference. When using `model.fit`,
`training` will be appropriately set to True automatically; in other
contexts, you can set the kwarg explicitly to True when calling the layer.

(This is in contrast to setting `trainable=False` for a Dropout layer.
`trainable` does not affect the layer's behavior, as Dropout does
not have any variables/weights that can be frozen during training.)
import numpy as np
import tensorflow as tf

tf.random.set_seed(0)
layer = tf.keras.layers.Dropout(.2, input_shape=(2,))
data = np.arange(10).reshape(5, 2).astype(np.float32)
print(data)
[[0. 1.]
[2. 3.]
[4. 5.]
[6. 7.]
[8. 9.]]
outputs = layer(data, training=True)
print(outputs)
tf.Tensor(
[[ 0. 1.25]
[ 2.5 3.75]
[ 5. 6.25]
[ 7.5 8.75]
[10. 0. ]], shape=(5, 2), dtype=float32)
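At inference time (the default, or with `training=False` set explicitly), no
units are dropped and the input passes through unchanged. A minimal sketch
continuing the example above:

outputs = layer(data, training=False)
print(outputs)
tf.Tensor(
[[0. 1.]
 [2. 3.]
 [4. 5.]
 [6. 7.]
 [8. 9.]], shape=(5, 2), dtype=float32)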
Arguments:

`rate`: Float between 0 and 1. Fraction of the input units to drop.
`noise_shape`: 1D integer tensor representing the shape of the
binary dropout mask that will be multiplied with the input.
For instance, if your inputs have shape
`(batch_size, timesteps, features)` and
you want the dropout mask to be the same for all timesteps,
you can use `noise_shape=(batch_size, 1, features)`.
`seed`: A Python integer to use as random seed.
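As a minimal sketch of the timesteps case described above (the concrete shapes
batch_size=2, timesteps=3, features=4 are assumptions for illustration), a
`noise_shape` with a 1 in the timesteps dimension broadcasts one mask across
all timesteps:

# Assumed shapes for illustration: batch_size=2, timesteps=3, features=4.
drop = tf.keras.layers.Dropout(0.5, noise_shape=(2, 1, 4))
x = tf.ones((2, 3, 4))
y = drop(x, training=True)
# Within each example, all three timesteps share the same mask, so the
# rows along the timesteps axis of y are identical.
print(y)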
Call arguments:

`inputs`: Input tensor (of any rank).
`training`: Python boolean indicating whether the layer should behave in
training mode (adding dropout) or in inference mode (doing nothing).
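As a usage sketch in a model context (the layer sizes and random data here
are illustrative assumptions), `model.fit` applies dropout automatically,
while `model.predict` runs the layer in inference mode:

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dropout(0.5),  # active during fit(), identity at inference
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
x = tf.random.normal((16, 4))
y = tf.random.normal((16, 1))
model.fit(x, y, epochs=1, verbose=0)  # dropout applied (training=True)
preds = model.predict(x)              # dropout skipped (training=False)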