tf.keras.activations.selu
Scaled Exponential Linear Unit (SELU).
tf.keras.activations.selu(x)
The Scaled Exponential Linear Unit (SELU) activation function is:

`scale * x` if `x > 0` and `scale * alpha * (exp(x) - 1)` if `x < 0`,

where `alpha` and `scale` are pre-defined constants (`alpha = 1.67326324` and `scale = 1.05070098`).
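As a quick check of the piecewise definition above, the sketch below (the sample values and tolerance are arbitrary choices for illustration, not part of the API) evaluates it directly and compares the result with the built-in activation:

import numpy as np
import tensorflow as tf

# Constants as stated in the definition above.
alpha = 1.67326324
scale = 1.05070098

x = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])

# Piecewise form: scale * x for x > 0, scale * alpha * (exp(x) - 1) otherwise.
manual = tf.where(x > 0, scale * x, scale * alpha * (tf.exp(x) - 1.0))

np.testing.assert_allclose(
    manual.numpy(), tf.keras.activations.selu(x).numpy(), rtol=1e-6)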
The SELU activation function multiplies `scale` (> 1) with the [elu](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/activations/elu) (Exponential Linear Unit, ELU) to ensure a slope larger than one for positive net inputs.
The values of `alpha` and `scale` are chosen so that the mean and variance of the inputs are preserved between two consecutive layers as long as the weights are initialized correctly (see [`lecun_normal` initialization](https://www.tensorflow.org/api_docs/python/tf/keras/initializers/lecun_normal)) and the number of inputs is "large enough" (see references for more information).
(Courtesy: blog post on Towards Data Science at https://towardsdatascience.com/selu-make-fnns-great-again-snn-8d61526802a9)
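A minimal sketch of this self-normalizing behavior (the width, depth, and batch size below are arbitrary illustration choices, not values from this page): with `lecun_normal` initialization and standard-normal inputs, the activations of a stack of SELU layers stay roughly zero-mean and unit-variance.

import tensorflow as tf

# Arbitrary illustration sizes: 8 hidden layers, 256 units, batch of 1024.
x = tf.random.normal((1024, 256))
h = x
for _ in range(8):
    h = tf.keras.layers.Dense(
        256, kernel_initializer='lecun_normal', activation='selu')(h)

# Mean stays close to 0 and variance close to 1 through the stack.
print(float(tf.reduce_mean(h)), float(tf.math.reduce_variance(h)))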
Example Usage:
import tensorflow as tf
from tensorflow.keras.layers import Dense

n_classes = 10  # 10-class problem
model = tf.keras.Sequential()
model.add(Dense(64, kernel_initializer='lecun_normal',
                activation='selu', input_shape=(28, 28, 1)))
model.add(Dense(32, kernel_initializer='lecun_normal',
                activation='selu'))
model.add(Dense(16, kernel_initializer='lecun_normal',
                activation='selu'))
model.add(Dense(n_classes, activation='softmax'))
Arguments:
- `x`: A tensor or variable to compute the activation function for.

Returns:
The scaled exponential unit activation: `scale * elu(x, alpha)`.
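The returned value can be reproduced with the ELU activation directly; a small sketch (constants copied from the definition above, sample values and tolerance arbitrary):

import numpy as np
import tensorflow as tf

alpha, scale = 1.67326324, 1.05070098
x = tf.constant([-1.0, 0.0, 1.0])

# selu(x) equals scale * elu(x, alpha).
np.testing.assert_allclose(
    tf.keras.activations.selu(x).numpy(),
    scale * tf.keras.activations.elu(x, alpha=alpha).numpy(),
    rtol=1e-6)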
Note
- To be used together with the initialization "[lecun_normal](https://www.tensorflow.org/api_docs/python/tf/keras/initializers/lecun_normal)".
- To be used together with the dropout variant "[AlphaDropout](https://www.tensorflow.org/api_docs/python/tf/keras/layers/AlphaDropout)".
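A combined sketch of these two recommendations (the layer sizes, dropout rate, and input dimension are arbitrary illustration choices, not values from this page):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, kernel_initializer='lecun_normal',
                          activation='selu', input_shape=(128,)),
    # AlphaDropout is the dropout variant intended to preserve SELU's
    # self-normalizing property; 0.1 is an arbitrary rate for illustration.
    tf.keras.layers.AlphaDropout(0.1),
    tf.keras.layers.Dense(32, kernel_initializer='lecun_normal',
                          activation='selu'),
    tf.keras.layers.AlphaDropout(0.1),
    tf.keras.layers.Dense(10, activation='softmax'),
])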
References:
[Self-Normalizing Neural Networks (Klambauer et al., 2017)](https://arxiv.org/abs/1706.02515)