tf.keras.activations.elu
Exponential Linear Unit.
View aliases

Compat aliases for migration

See the Migration guide for more details.

`tf.compat.v1.keras.activations.elu`
tf.keras.activations.elu(
    x, alpha=1.0
)
The exponential linear unit (ELU) with `alpha > 0` is: `x` if `x > 0`, and `alpha * (exp(x) - 1)` if `x < 0`.
The ELU hyperparameter alpha
controls the value to which an
ELU saturates for negative net inputs. ELUs diminish the
vanishing gradient effect.
ELUs have negative values, which pushes the mean of the activations closer to zero. Mean activations that are closer to zero enable faster learning, as they bring the gradient closer to the natural gradient. ELUs saturate to a negative value as the argument gets smaller; this saturation means a small derivative, which decreases the variation and the information propagated to the next layer.
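For concreteness, here is a minimal sketch of applying the activation elementwise to a small tensor (the input values are chosen arbitrarily for illustration):

import tensorflow as tf

# ELU is applied elementwise: x if x > 0, alpha * (exp(x) - 1) if x < 0.
x = tf.constant([-3.0, -1.0, 0.0, 2.0])
print(tf.keras.activations.elu(x).numpy())
# With the default alpha=1.0, the two negative inputs map to roughly
# -0.95 and -0.63, saturating toward -1.0, while 0.0 and 2.0 pass through unchanged.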
Example Usage:
import tensorflow as tf
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='elu',
input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='elu'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='elu'))
Args

| `x` | Input tensor. |
| `alpha` | A scalar, slope of negative section. `alpha` controls the value to which an ELU saturates for negative net inputs. |

Returns

The exponential linear unit (ELU) activation function: `x` if `x > 0` and `alpha * (exp(x) - 1)` if `x < 0`.
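As a sketch of the `alpha` argument's effect (the value 0.5 below is arbitrary), negative inputs saturate toward `-alpha` rather than -1:

import tensorflow as tf

x = tf.constant([-5.0, -1.0, 1.0])
# alpha scales the negative branch, so the saturation floor here becomes -0.5.
print(tf.keras.activations.elu(x, alpha=0.5).numpy())
# Approximately [-0.50, -0.32, 1.0]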