tf.keras.activations.selu
Scaled Exponential Linear Unit (SELU).
View aliases

Compat aliases for migration

See the Migration guide (https://www.tensorflow.org/guide/migrate) for more details.

`tf.compat.v1.keras.activations.selu`
tf.keras.activations.selu(
x
)
The Scaled Exponential Linear Unit (SELU) activation function is defined as:
if x > 0: return scale * x
if x < 0: return scale * alpha * (exp(x) - 1)
where `alpha` and `scale` are pre-defined constants (`alpha=1.67326324` and `scale=1.05070098`).
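As a minimal sketch of that definition (not the library's implementation), the piecewise rule can be written directly with `tf.where`; `naive_selu` is a hypothetical helper name used only for illustration:

import tensorflow as tf

# Constants quoted above.
alpha = 1.67326324
scale = 1.05070098

def naive_selu(x):
    # scale * x for positive inputs, scale * alpha * (exp(x) - 1) otherwise.
    return tf.where(x > 0, scale * x, scale * alpha * (tf.exp(x) - 1.0))

x = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])
print(naive_selu(x))
print(tf.keras.activations.selu(x))  # should agree to float32 precision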
In effect, the SELU activation function multiplies `scale` (> 1) with the output of the `tf.keras.activations.elu` function to ensure a slope larger than one for positive inputs.
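That relationship can be checked numerically; the sketch below (an illustration, not part of the API) compares `scale * elu(x, alpha)` against the built-in `selu`:

import tensorflow as tf

alpha, scale = 1.67326324, 1.05070098
x = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])

via_elu = scale * tf.keras.activations.elu(x, alpha=alpha)
built_in = tf.keras.activations.selu(x)
print(tf.reduce_max(tf.abs(via_elu - built_in)))  # expected to be ~0, up to float rounding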
The values of `alpha` and `scale` are chosen so that the mean and variance of the inputs are preserved between two consecutive layers as long as the weights are initialized correctly (see the `tf.keras.initializers.LecunNormal` initializer) and the number of input units is "large enough" (see the reference paper for more information).
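A hedged illustration of that property: feeding standardized random inputs through a few wide Dense layers with `lecun_normal` initialization and `selu` should keep the activation statistics roughly standardized (the layer width, depth, and seed below are arbitrary choices for this sketch):

import tensorflow as tf

tf.random.set_seed(0)
x = tf.random.normal([1024, 512])  # standardized inputs: mean ~0, variance ~1

for _ in range(4):
    layer = tf.keras.layers.Dense(512, kernel_initializer='lecun_normal',
                                  activation='selu')
    x = layer(x)

# With correct initialization and "large enough" layers, these should stay
# roughly near 0 and 1 respectively.
print(float(tf.reduce_mean(x)), float(tf.math.reduce_std(x)))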
Example Usage:
import tensorflow as tf

num_classes = 10  # 10-class problem
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(64, kernel_initializer='lecun_normal',
                                activation='selu'))
model.add(tf.keras.layers.Dense(32, kernel_initializer='lecun_normal',
                                activation='selu'))
model.add(tf.keras.layers.Dense(16, kernel_initializer='lecun_normal',
                                activation='selu'))
model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))
Args

`x`: A tensor or variable to compute the activation function for.
Returns

The scaled exponential unit activation: `scale * elu(x, alpha)`.
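As a quick usage illustration of the returned values (the numbers in the comment are approximate):

import tensorflow as tf

print(tf.keras.activations.selu(tf.constant([-1.0, 0.0, 1.0])).numpy())
# Roughly [-1.1113, 0.0, 1.0507]: negatives saturate toward -scale * alpha,
# zero maps to zero, and positive inputs are scaled by ~1.0507.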
Notes

- To be used together with the `tf.keras.initializers.LecunNormal` initializer.
- To be used together with the dropout variant `tf.keras.layers.AlphaDropout` (not regular dropout).

References

- Klambauer et al., 2017: Self-Normalizing Neural Networks (https://arxiv.org/abs/1706.02515)