tf.keras.activations.relu
Applies the rectified linear unit activation function.
View aliases

Compat aliases for migration

See the Migration guide for more details.

`tf.compat.v1.keras.activations.relu`
tf.keras.activations.relu(
x, alpha=0.0, max_value=None, threshold=0.0
)
With default values, this returns the standard ReLU activation: `max(x, 0)`, the element-wise maximum of 0 and the input tensor.
Modifying the default parameters allows you to use a non-zero threshold, change the maximum value of the activation, and use a non-zero multiple of the input for values below the threshold.
Example:
foo = tf.constant([-10, -5, 0.0, 5, 10], dtype=tf.float32)
tf.keras.activations.relu(foo).numpy()
array([ 0., 0., 0., 5., 10.], dtype=float32)
tf.keras.activations.relu(foo, alpha=0.5).numpy()
array([-5. , -2.5, 0. , 5. , 10. ], dtype=float32)
tf.keras.activations.relu(foo, max_value=5.).numpy()
array([0., 0., 0., 5., 5.], dtype=float32)
tf.keras.activations.relu(foo, threshold=5.).numpy()
array([-0., -0., 0., 0., 10.], dtype=float32)
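For intuition about how the three parameters combine, here is a minimal NumPy sketch of the behavior shown above (not the library's implementation; `relu_sketch` is a hypothetical helper name):

```python
import numpy as np

def relu_sketch(x, alpha=0.0, max_value=None, threshold=0.0):
    """Element-wise sketch of the generalized ReLU described above.

    Values strictly above `threshold` pass through unchanged, values at or
    below it become `alpha * (x - threshold)`, and the result is capped at
    `max_value` when one is given.
    """
    x = np.asarray(x, dtype=np.float32)
    out = np.where(x > threshold, x, alpha * (x - threshold))
    if max_value is not None:
        out = np.minimum(out, max_value)
    return out

relu_sketch([-10, -5, 0.0, 5, 10], alpha=0.5)
# -> [-5. , -2.5,  0. ,  5. , 10. ]
```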
Args

| Argument | Description |
|---|---|
| `x` | Input tensor or variable. |
| `alpha` | A `float` that governs the slope for values lower than the threshold. |
| `max_value` | A `float` that sets the saturation threshold (the largest value the function will return). |
| `threshold` | A `float` giving the threshold value of the activation function, below which values will be damped or set to zero. |
Returns

A `Tensor` representing the input tensor, transformed by the ReLU activation function. The returned tensor has the same shape and dtype as the input `x`.
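Because the function accepts any tensor, it can also be passed as a callable activation to a layer when non-default parameters are needed. A small sketch, where the cap of 6.0 (a ReLU6-style activation) is just an illustrative choice:

```python
import tensorflow as tf

# Dense layer whose activation is a capped ReLU (values clipped to [0, 6]).
dense = tf.keras.layers.Dense(
    units=32,
    activation=lambda t: tf.keras.activations.relu(t, max_value=6.0),
)

x = tf.random.normal((4, 16))
y = dense(x)  # all outputs lie in the range [0, 6]
```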