Gaussian Error Linear Unit.
```python
tfa.activations.gelu(
    x: tfa.types.TensorLike,
    approximate: bool = True
) -> tf.Tensor
```
Computes the Gaussian error linear unit (GELU):

\[ \mathrm{gelu}(x) = x \Phi(x), \]

where

\[ \Phi(x) = \frac{1}{2} \left[ 1 + \mathrm{erf}\left( \frac{x}{\sqrt{2}} \right) \right] \]

when `approximate` is `False`, or

\[ \Phi(x) = \frac{1}{2} \left[ 1 + \tanh\left( \sqrt{\frac{2}{\pi}} \cdot \left( x + 0.044715 \cdot x^3 \right) \right) \right] \]

when `approximate` is `True`.
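To make the two branches concrete, here is a minimal sketch that evaluates both formulas directly with TensorFlow ops. The helper name `gelu_reference` is hypothetical, not part of the library:

```python
import math

import tensorflow as tf


def gelu_reference(x: tf.Tensor, approximate: bool = True) -> tf.Tensor:
    # Hypothetical reference implementation of the formulas above.
    x = tf.convert_to_tensor(x)
    if approximate:
        # Phi(x) ~= 0.5 * [1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3))]
        inner = math.sqrt(2.0 / math.pi) * (x + 0.044715 * tf.pow(x, 3))
        phi = 0.5 * (1.0 + tf.tanh(inner))
    else:
        # Phi(x) = 0.5 * [1 + erf(x / sqrt(2))]
        phi = 0.5 * (1.0 + tf.math.erf(x / math.sqrt(2.0)))
    return x * phi  # gelu(x) = x * Phi(x)


x = tf.constant([-1.0, 0.0, 1.0])
print(gelu_reference(x, approximate=False))  # ~ [-0.1587, 0., 0.8413]
print(gelu_reference(x, approximate=True))   # ~ [-0.1588, 0., 0.8412]
```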
See [Gaussian Error Linear Units (GELUs)](https://arxiv.org/abs/1606.08415) and [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805).
Consider using `tf.nn.gelu` instead. Note that the default of `approximate` changed to `False` in `tf.nn.gelu`.
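A minimal migration sketch, assuming TensorFlow 2.4 or later (where `tf.nn.gelu` is available): since the two functions have different defaults for `approximate`, pass the flag explicitly so they stay in agreement.

```python
import tensorflow as tf
import tensorflow_addons as tfa

x = tf.constant([-1.0, 0.0, 1.0])

# The defaults differ (True in tfa, False in core TF), so be explicit.
y_addons = tfa.activations.gelu(x, approximate=False)
y_core = tf.nn.gelu(x, approximate=False)

# Both compute the exact erf-based GELU and should match closely.
tf.debugging.assert_near(y_addons, y_core)
```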
Usage:

```python
>>> x = tf.constant([-1.0, 0.0, 1.0])
>>> tfa.activations.gelu(x, approximate=False)
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([-0.15865529,  0.        ,  0.8413447 ], dtype=float32)>
>>> tfa.activations.gelu(x, approximate=True)
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([-0.15880796,  0.        ,  0.841192  ], dtype=float32)>
```
| Args | |
|---|---|
| `x` | A `Tensor`. Must be one of the following types: `float16`, `float32`, `float64`. |
| `approximate` | `bool`, whether to enable approximation. |

| Returns | |
|---|---|
| A `Tensor`. Has the same type as `x`. |
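If you want to use the activation inside a Keras model, one possible pattern (a sketch under assumptions, not from these docs) is to bind the non-default flag and pass the callable directly:

```python
import functools

import tensorflow as tf
import tensorflow_addons as tfa

# Bind approximate=False so Keras can invoke the activation with one argument.
gelu_exact = functools.partial(tfa.activations.gelu, approximate=False)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation=gelu_exact, input_shape=(16,)),
    tf.keras.layers.Dense(1),
])

print(model(tf.random.normal([2, 16])).shape)  # (2, 1)
```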