# tf.layers.dropout

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/python/layers/core.py#L229-L271)

Applies Dropout to the input. (deprecated)

**Compat alias for migration:** [`tf.compat.v1.layers.dropout`](/api_docs/python/tf/compat/v1/layers/dropout). See the [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

```python
tf.layers.dropout(
    inputs, rate=0.5, noise_shape=None, seed=None, training=False, name=None
)
```

**Warning:** This function is deprecated and will be removed in a future version. Use `keras.layers.dropout` instead.

Dropout consists of randomly setting a fraction `rate` of input units to 0 at each update during training, which helps prevent overfitting. The units that are kept are scaled by `1 / (1 - rate)`, so that their expected sum is unchanged between training time and inference time.

#### Arguments

| Argument | Description |
|---|---|
| `inputs` | Tensor input. |
| `rate` | The dropout rate, between 0 and 1. E.g. `rate=0.1` would drop out 10% of input units. |
| `noise_shape` | 1D tensor of type `int32` representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape `(batch_size, timesteps, features)` and you want the dropout mask to be the same for all timesteps, you can use `noise_shape=[batch_size, 1, features]`. |
| `seed` | A Python integer. Used to create random seeds. See [`tf.compat.v1.set_random_seed`](../../tf/random/set_random_seed) for behavior. |
| `training` | Either a Python boolean, or a TensorFlow boolean scalar tensor (e.g. a placeholder). Whether to return the output in training mode (apply dropout) or in inference mode (return the input untouched). |
| `name` | The name of the layer (string). |
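The inverted-dropout behavior described above (a Bernoulli keep-mask, survivors scaled by `1 / (1 - rate)`, and a broadcastable `noise_shape`) can be sketched in plain NumPy. This is an illustrative stand-in, not TensorFlow's implementation; the helper name `dropout` and the fixed seed are assumptions for the example.

```python
import numpy as np

def dropout(x, rate, noise_shape=None, training=True, rng=None):
    # Minimal NumPy sketch of inverted dropout (not the TF implementation).
    if not training or rate == 0.0:
        return x
    rng = rng if rng is not None else np.random.default_rng(0)
    shape = noise_shape if noise_shape is not None else x.shape
    # Bernoulli keep-mask, broadcast against x; kept units scaled by 1/(1-rate).
    mask = rng.random(shape) >= rate
    return np.where(mask, x / (1.0 - rate), 0.0)

x = np.ones((2, 3, 4))  # (batch_size, timesteps, features)
# Share one mask across timesteps, as with noise_shape=[batch_size, 1, features].
y = dropout(x, rate=0.5, noise_shape=(2, 1, 4))
```

Because the mask has shape `(2, 1, 4)`, it broadcasts along the timestep axis, so every timestep of a given (batch, feature) pair is either kept (and scaled to 2.0 here) or dropped together.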
#### Returns

Output tensor.

#### Raises

`ValueError`: if eager execution is enabled.

Last updated 2020-10-01 UTC.
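The claim that scaling by `1 / (1 - rate)` leaves the sum unchanged holds in expectation, and can be checked numerically. A NumPy sketch (not TensorFlow code; the draw count and seed are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.ones(1000)
rate = 0.3

# Average many independent inverted-dropout draws of x.
draws = np.stack([
    np.where(rng.random(x.shape) >= rate, x / (1.0 - rate), 0.0)
    for _ in range(2000)
])

# The per-element mean over draws approaches x itself, so the training-time
# output matches the untouched inference-time input in expectation.
mean = draws.mean(axis=0)
```

Each unit survives with probability `1 - rate` and is then worth `1 / (1 - rate)`, giving an expected value of exactly 1 per unit, so `mean` converges to `x`.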