Transformer layer.
Inherits From: TransformerEncoderBlock
tfm.nlp.layers.Transformer(
num_attention_heads,
intermediate_size,
intermediate_activation,
dropout_rate=0.0,
attention_dropout_rate=0.0,
output_range=None,
kernel_initializer='glorot_uniform',
bias_initializer='zeros',
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
use_bias=True,
norm_first=False,
norm_epsilon=1e-12,
intermediate_dropout=0.0,
attention_initializer=None,
**kwargs
)
This layer implements the Transformer from "Attention Is All You Need" (https://arxiv.org/abs/1706.03762).
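A minimal construction sketch, assuming the `tensorflow-models-official` package is installed and exposes `tfm.nlp.layers`; the hyperparameter values are illustrative, and only the first three arguments are required:

```python
import tensorflow_models as tfm

# Illustrative hyperparameters (not defaults): 8 heads, a 2048-unit
# feed-forward inner layer, and light dropout on both sublayers.
layer = tfm.nlp.layers.Transformer(
    num_attention_heads=8,
    intermediate_size=2048,
    intermediate_activation="relu",
    dropout_rate=0.1,
    attention_dropout_rate=0.1,
)
```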
Methods
call
call(
inputs: Any, output_range: Optional[tf.Tensor] = None
) -> Any
Transformer self-attention encoder block call.
| Args | |
|---|---|
| `inputs` | A single tensor or a list of tensors. `input tensor` as the single sequence of embeddings. `[input tensor, attention mask]` to have the additional attention mask. `[query tensor, key value tensor, attention mask]` to have separate input streams for the query, and key/value to the multi-head attention. |
| `output_range` | The sequence output range, `[0, output_range)`, for slicing the target sequence. `None` means the target sequence is not sliced. If you would like to have no change to the model training, it is better to only set the `output_range` for serving. |

| Returns |
|---|
| An output tensor with the same dimensions as the input/query tensor. |
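A usage sketch for the input forms above; the shapes, the mask convention (`[batch, target_length, source_length]` with 1s at positions allowed to attend), and passing `output_range` as a plain integer are illustrative assumptions:

```python
import tensorflow as tf
import tensorflow_models as tfm

layer = tfm.nlp.layers.Transformer(
    num_attention_heads=4,
    intermediate_size=256,
    intermediate_activation="relu",
)

batch, seq_len, hidden = 2, 16, 64   # hidden must be divisible by num_attention_heads
embeddings = tf.random.uniform((batch, seq_len, hidden))
attention_mask = tf.ones((batch, seq_len, seq_len))

# Single tensor: plain self-attention over the full sequence.
out = layer(embeddings)                                    # (2, 16, 64)

# [input tensor, attention mask]: masked self-attention.
out = layer([embeddings, attention_mask])                  # (2, 16, 64)

# Slice the target sequence to the first 4 positions, e.g. for serving.
out = layer([embeddings, attention_mask], output_range=4)  # (2, 4, 64)
```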