Creates a network layer that adds a sinusoidal positional encoding.
tfm.vision.layers.PositionalEncoding(
initializer: tf.keras.initializers.Initializer = 'zeros',
cache_encoding: bool = False,
state_prefix: Optional[str] = None,
**kwargs
)
The positional encoding is incremented across frames and added to the input. Its weight is initialized to zero (the default `'zeros'` initializer), so the network can learn to ignore it at first. This implements the positional encoding of:
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin. Attention Is All You Need. (https://arxiv.org/pdf/1706.03762.pdf).
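Below is a minimal usage sketch. The 5-D input shape `[batch, frames, height, width, channels]` and all variable names are illustrative assumptions, not taken from the source:

```python
import tensorflow as tf
import tensorflow_models as tfm

# Hypothetical video batch: [batch, frames, height, width, channels].
videos = tf.random.normal([2, 8, 16, 16, 32])

layer = tfm.vision.layers.PositionalEncoding()

# With the default output_states=True, the call returns an
# (output, states) tuple; the output has the same shape as the input.
outputs, states = layer(videos)
print(outputs.shape)  # (2, 8, 16, 16, 32)
```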
Methods
call
call(
inputs: tf.Tensor,
states: Optional[States] = None,
output_states: bool = True
) -> Union[tf.Tensor, Tuple[tf.Tensor, States]]
Calls the layer with the given inputs.
Args | |
---|---|
`inputs` | An input `tf.Tensor`.
`states` | A `dict` of states such that, if any of the keys match for this layer, they will overwrite the contents of the buffer(s). Expected keys include `state_prefix + '_pos_enc_frame_count'`.
`output_states` | A `bool`. If `True`, returns both the output tensor and the output states; otherwise returns just the output tensor.
Returns | |
---|---|
An output `tf.Tensor` (and optionally the states if `output_states=True`). |
Raises | |
---|---|
`ValueError` | If using `'channels_first'` data format.
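As a hedged sketch of how the `states` dict might be threaded through streaming inference, assuming frames can be fed one at a time and that an empty dict is acceptable on the first call (the prefix and shapes below are illustrative assumptions):

```python
import tensorflow as tf
import tensorflow_models as tfm

layer = tfm.vision.layers.PositionalEncoding(state_prefix='stream')

# Hypothetical 4-frame clip, processed one frame at a time.
clip = tf.random.normal([1, 4, 8, 8, 16])

states = {}
for t in range(4):
    frame = clip[:, t:t + 1]  # keep the frame axis: [1, 1, 8, 8, 16]
    out, states = layer(frame, states=states, output_states=True)

# `states` should now carry 'stream_pos_enc_frame_count', so a later
# call can resume the frame counter where this clip left off.
```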