None of the supported arguments have changed name.
Before:
dropout = tf.compat.v1.layers.Dropout()
After:
dropout = tf.keras.layers.Dropout()
Description
Dropout consists of randomly setting a fraction rate of input units to 0
at each update during training time, which helps prevent overfitting.
The units that are kept are scaled by 1 / (1 - rate), so that their
expected sum is unchanged between training time and inference time.
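The scaling described above is commonly called "inverted dropout." A minimal NumPy sketch of the idea (an illustration of the technique, not the TF implementation, and the `dropout` helper here is hypothetical):

```python
import numpy as np

def dropout(x, rate=0.5, training=True, rng=None):
    """Sketch of inverted dropout: during training, zero each unit with
    probability `rate` and scale survivors by 1 / (1 - rate); at inference,
    pass the input through unchanged."""
    if not training or rate == 0.0:
        return x
    rng = rng or np.random.default_rng(0)
    keep_mask = rng.random(x.shape) >= rate   # True for units that are kept
    return np.where(keep_mask, x / (1.0 - rate), 0.0)

x = np.ones((1000,))
y = dropout(x, rate=0.1)
# Kept units are scaled to 1 / 0.9, so the expected mean stays close to 1.0.
```

Because the surviving units are scaled up during training, no rescaling is needed at inference time, which keeps the layer a no-op when `training=False`.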
Args
rate
The dropout rate, between 0 and 1. E.g. rate=0.1 would drop out
10% of input units.
noise_shape
1D tensor of type int32 representing the shape of the
binary dropout mask that will be multiplied with the input.
For instance, if your inputs have shape
(batch_size, timesteps, features), and you want the dropout mask
to be the same for all timesteps, you can use
noise_shape=[batch_size, 1, features].
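The noise_shape mechanism can be sketched with NumPy broadcasting: sample the mask with the reduced shape and let the multiply replicate it along the size-1 axis. This is an illustrative sketch of the broadcasting behavior, not TF's internal code:

```python
import numpy as np

rng = np.random.default_rng(42)
batch_size, timesteps, features = 2, 4, 3
rate = 0.5

x = np.ones((batch_size, timesteps, features))
# Sample the binary mask with shape (batch_size, 1, features); broadcasting
# reuses the same mask for every timestep when it multiplies the input.
mask = (rng.random((batch_size, 1, features)) >= rate) / (1.0 - rate)
y = x * mask
# Every timestep of a given sample now shares the same dropped features.
```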
seed
A Python integer. Used to create random seeds. See
tf.compat.v1.set_random_seed for behavior.
name
The name of the layer (string).

tf.compat.v1.layers.Dropout applies Dropout to the input.

Inherits From: tf.keras.layers.Dropout, tf.compat.v1.layers.Layer,
tf.keras.layers.Layer, tf.Module

tf.compat.v1.layers.Dropout(
    rate=0.5, noise_shape=None, seed=None, name=None, **kwargs
)

View source on GitHub:
https://github.com/keras-team/keras/tree/v2.9.0/keras/legacy_tf_layers/core.py#L264-L331

Migrate to TF2
Caution: This API was designed for TensorFlow v1. It is a legacy API that is
only compatible with eager execution and tf.function if you combine it with
tf.compat.v1.keras.utils.track_tf1_style_variables. See the TensorFlow v1 to
TensorFlow v2 migration guide (https://www.tensorflow.org/guide/migrate) for
instructions on how to migrate the rest of your code, and refer to the
tf.layers model mapping section of the migration guide
(https://www.tensorflow.org/guide/migrate/model_mapping) to learn how to use
your TensorFlow v1 model in TF2 with Keras. The corresponding TensorFlow v2
layer is tf.keras.layers.Dropout.

Attributes
graph
scope_name

Methods

apply
apply(*args, **kwargs)

get_losses_for
get_losses_for(inputs)
Retrieves losses relevant to a specific set of inputs.
Args
inputs
Input tensor or list/tuple of input tensors.
Returns
List of loss tensors of the layer that depend on inputs.

get_updates_for
get_updates_for(inputs)
Retrieves updates relevant to a specific set of inputs.
Args
inputs
Input tensor or list/tuple of input tensors.
Returns
List of update ops of the layer that depend on inputs.

Last updated 2023-10-06 UTC.
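The role of the seed argument can be sketched in NumPy: seeding the generator makes the pseudo-random draws, and therefore the dropout mask, reproducible. This uses NumPy's generator API rather than TF's graph-level/op-level seeding, so it is an analogy, not TF's actual mechanism; `make_mask` is a hypothetical helper:

```python
import numpy as np

def make_mask(shape, rate, seed):
    # Same seed -> same pseudo-random draws -> identical dropout mask.
    rng = np.random.default_rng(seed)
    return rng.random(shape) >= rate

a = make_mask((3, 4), rate=0.5, seed=7)
b = make_mask((3, 4), rate=0.5, seed=7)
# a and b are identical masks because they came from the same seed.
```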