Randomized leaky rectified linear unit function.
```python
tfa.activations.rrelu(
    x: tfa.types.TensorLike,
    lower: tfa.types.Number = 0.125,
    upper: tfa.types.Number = 0.3333333333333333,
    training: Optional[bool] = None,
    seed: Optional[int] = None,
    rng: Optional[tf.random.Generator] = None
) -> tf.Tensor
```
Computes rrelu function:

\[ \mathrm{rrelu}(x) = \begin{cases} x & \text{if } x > 0 \\ a x & \text{if } x \le 0 \end{cases}, \]

where

\[ a \sim \mathcal{U}(\mathrm{lower}, \mathrm{upper}) \]

when `training` is `True`; or

\[ a = \frac{\mathrm{lower} + \mathrm{upper}}{2} \]

when `training` is `False`.
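The formula translates directly into a few TensorFlow ops. Below is a minimal illustrative sketch, not the library's actual implementation (the real function additionally supports the `rng` argument shown in the signature and handles dtype casting more carefully); `rrelu_sketch` is a hypothetical name for this illustration:

```python
import tensorflow as tf

def rrelu_sketch(x, lower=0.125, upper=1 / 3, training=False, seed=None):
    """Illustrative re-implementation of the rrelu formula above."""
    x = tf.convert_to_tensor(x)
    if training:
        # Training: sample alpha elementwise from U(lower, upper).
        alpha = tf.random.uniform(
            tf.shape(x), minval=lower, maxval=upper,
            dtype=x.dtype, seed=seed)
    else:
        # Inference: use the deterministic midpoint of the range.
        alpha = tf.cast((lower + upper) / 2, x.dtype)
    # Identity for positive inputs, scaled by alpha otherwise.
    return tf.where(x > 0, x, alpha * x)
```

With the default bounds, the inference midpoint is (0.125 + 0.3333...) / 2 ≈ 0.22917, which is why `rrelu(x, training=False)` maps -1.0 to about -0.22916667 in the usage below.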
See [Empirical Evaluation of Rectified Activations in Convolutional Network](https://arxiv.org/abs/1505.00853).
Usage:

```python
>>> x = tf.constant([-1.0, 0.0, 1.0])
>>> tfa.activations.rrelu(x, training=False)
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([-0.22916667,  0.        ,  1.        ], dtype=float32)>
>>> tfa.activations.rrelu(x, training=True, seed=2020)
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([-0.22631127,  0.        ,  1.        ], dtype=float32)>
>>> generator = tf.random.Generator.from_seed(2021)
>>> tfa.activations.rrelu(x, training=True, rng=generator)
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([-0.16031083,  0.        ,  1.        ], dtype=float32)>
```
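The activation can also be dropped into a Keras model, for example through a `Lambda` layer. This is a minimal sketch that assumes the default `training=None` defers to the Keras learning phase, so alpha is sampled during `fit()` and fixed at the midpoint during inference:

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Wrap rrelu in a Lambda layer; all keyword arguments keep
# their defaults, including training=None (assumed here to
# follow the Keras learning phase).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16),
    tf.keras.layers.Lambda(tfa.activations.rrelu),
    tf.keras.layers.Dense(1),
])
```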
Args | |
---|---|
`x` | A `Tensor`. Must be one of the following types: `bfloat16`, `float16`, `float32`, `float64`. |
`lower` | `float`, lower bound for random alpha. |
`upper` | `float`, upper bound for random alpha. |
`training` | `bool`, indicating whether the call is meant for training or inference. |
`seed` | `int`, this sets the operation-level seed. |
`rng` | A `tf.random.Generator`. |
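When several calls should draw different alphas while the overall stream stays reproducible, a single `tf.random.Generator` can be threaded through the documented `rng` argument, as in this short sketch:

```python
import tensorflow as tf
import tensorflow_addons as tfa

x = tf.constant([-2.0, -1.0, 0.5])
gen = tf.random.Generator.from_seed(42)

# Each call advances the generator state, so y1 and y2 use
# different random alphas, yet rerunning this snippet from
# the same seed reproduces both results exactly.
y1 = tfa.activations.rrelu(x, training=True, rng=gen)
y2 = tfa.activations.rrelu(x, training=True, rng=gen)
```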
Returns | |
---|---|
`result` | A `Tensor`. Has the same type as `x`. |