Gets a 3D self-attention mask.
```python
tfm.nlp.layers.get_mask(
    inputs: tf.Tensor, to_mask: tf.Tensor, dtype: Optional[tf.DType] = None
) -> tf.Tensor
```
| Args | |
|---|---|
| `inputs` | The 2D or 3D `from_tensor` of shape `[batch_size, from_seq_length, ...]`. |
| `to_mask` | int32 Tensor of shape `[batch_size, to_seq_length]`. |
| `dtype` | The output Tensor dtype. |
| Returns | |
|---|---|
| float Tensor of shape `[batch_size, from_seq_length, to_seq_length]`. | |
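
A minimal usage sketch, assuming the `tf-models-official` package (imported as `tensorflow_models`) and TensorFlow are installed; the batch size, sequence lengths, hidden size, and padding pattern below are illustrative only.

```python
import tensorflow as tf
import tensorflow_models as tfm

batch_size, from_seq_length, to_seq_length, hidden_size = 2, 4, 4, 8

# 3D "from" tensor, e.g. token embeddings of shape
# [batch_size, from_seq_length, hidden_size] (dummy values).
embeddings = tf.random.uniform([batch_size, from_seq_length, hidden_size])

# int32 padding mask for the "to" sequence: 1 = real token, 0 = padding.
to_mask = tf.constant([[1, 1, 1, 0],
                       [1, 1, 0, 0]], dtype=tf.int32)

# Broadcast the 2D padding mask into a 3D self-attention mask.
attention_mask = tfm.nlp.layers.get_mask(embeddings, to_mask, dtype=tf.float32)

print(attention_mask.shape)  # (2, 4, 4) == [batch_size, from_seq_length, to_seq_length]
```

The resulting mask can be passed to attention layers that expect a `[batch_size, from_seq_length, to_seq_length]` mask, so that every query position ignores padded positions in the "to" sequence.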