tf.compat.v1.layers.Dropout
Applies Dropout to the input.
Inherits From: Dropout, Layer, Layer, Module
tf.compat.v1.layers.Dropout(
rate=0.5, noise_shape=None, seed=None, name=None, **kwargs
)
Dropout consists of randomly setting a fraction rate of input units to 0 at each update during training time, which helps prevent overfitting. The units that are kept are scaled by 1 / (1 - rate), so that their sum is unchanged at training time and inference time.
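A minimal usage sketch of the scaling behavior described above, assuming TensorFlow 2.x with eager execution, where this legacy layer remains callable like any Keras layer:

```python
import tensorflow as tf

# Legacy v1 dropout layer; rate=0.5 drops half the units on average.
layer = tf.compat.v1.layers.Dropout(rate=0.5, seed=0)
x = tf.ones((2, 4))

# At inference (training=False), dropout is a no-op and the input
# passes through unchanged.
y_infer = layer(x, training=False)

# During training, kept units are scaled by 1 / (1 - rate) = 2.0,
# so every output value is either 0.0 or 2.0.
y_train = layer(x, training=True)
```

Because the kept units are rescaled, the expected sum of the activations matches between training and inference.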
Args:

rate: The dropout rate, between 0 and 1. E.g. rate=0.1 would drop out 10% of input units.
noise_shape: 1D tensor of type int32 representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape (batch_size, timesteps, features), and you want the dropout mask to be the same for all timesteps, you can use noise_shape=[batch_size, 1, features].
seed: A Python integer. Used to create random seeds. See tf.compat.v1.set_random_seed for behavior.
name: The name of the layer (string).
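The noise_shape argument can be illustrated with the timestep example from the table above. This is a sketch assuming TensorFlow 2.x eager execution and a concrete, hypothetical shape of (batch_size=2, timesteps=3, features=4):

```python
import tensorflow as tf

# noise_shape=[2, 1, 4] broadcasts a single (batch, features) mask
# across the timestep axis, so all 3 timesteps share one mask.
layer = tf.compat.v1.layers.Dropout(
    rate=0.5, noise_shape=[2, 1, 4], seed=0
)
x = tf.ones((2, 3, 4))
y = layer(x, training=True)
# For each example, every timestep now has the identical pattern of
# zeroed and scaled features.
```

This is useful for sequence models where dropping different features at different timesteps would effectively destroy the signal a recurrent layer needs.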
Attributes:

graph
scope_name
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.
Last updated 2021-05-14 UTC.