Warning: This project is deprecated. TensorFlow Addons has stopped development; the project will only provide minimal maintenance releases until May 2024. See the full announcement here or on GitHub.
tfa.seq2seq.Decoder
An RNN Decoder abstract interface object.
Concepts used by this interface:

inputs: (structure of) tensors and TensorArrays passed as input to the RNN cell composing the decoder, at each time step.

state: (structure of) tensors and TensorArrays passed to the RNN cell instance as the state.

finished: boolean tensor telling whether each sequence in the batch is finished.

training: boolean indicating whether the decoder should behave in training mode or in inference mode.

outputs: instance of tfa.seq2seq.BasicDecoderOutput. Result of the decoding, at each time step.
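
To see these pieces in action, the sketch below runs a concrete decoder from the library (tfa.seq2seq.BasicDecoder driven by a TrainingSampler) over a batch of embedded inputs. It assumes TensorFlow 2.x with TensorFlow Addons installed; all names and sizes (batch, time, units, vocab, and the random tensors) are illustrative, not part of this interface.

    import tensorflow as tf
    import tensorflow_addons as tfa

    batch, time, units, vocab = 4, 7, 16, 100    # illustrative sizes

    cell = tf.keras.layers.LSTMCell(units)
    sampler = tfa.seq2seq.TrainingSampler()      # feeds the ground-truth inputs at each step
    output_layer = tf.keras.layers.Dense(vocab)  # projects cell outputs to vocabulary logits
    decoder = tfa.seq2seq.BasicDecoder(cell, sampler, output_layer=output_layer)

    inputs = tf.random.normal([batch, time, units])  # "inputs": embedded target tokens
    initial_state = cell.get_initial_state(batch_size=batch, dtype=tf.float32)  # "state"
    sequence_length = tf.fill([batch], time)

    # "outputs" is a tfa.seq2seq.BasicDecoderOutput with rnn_output (logits) and
    # sample_id fields; the call also returns the final RNN state and the decoded
    # sequence lengths.
    outputs, final_state, lengths = decoder(
        inputs, initial_state=initial_state, sequence_length=sequence_length)
    print(outputs.rnn_output.shape)  # (4, 7, 100)
    print(outputs.sample_id.shape)   # (4, 7)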
Attributes

batch_size: The batch size of input values.

output_dtype: A (possibly nested tuple of...) dtype[s].

output_size: A (possibly nested tuple of...) integer[s] or TensorShape object[s].

tracks_own_finished: Describes whether the Decoder keeps track of finished states. Most decoders emit a true/false finished value independently at each time step. In this case, the tfa.seq2seq.dynamic_decode function keeps track of which batch entries are already finished, and performs a logical OR to add newly finished entries to the finished set. Some decoders, however, shuffle batches/beams between time steps, and tfa.seq2seq.dynamic_decode would mix up the finished state across these entries because it does not track the reshuffle across time steps. In this case, it is up to the decoder to declare that it keeps track of its own finished state by setting this property to True.
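
For example (a small check, with all names assumed): tfa.seq2seq.BasicDecoder leaves the finished bookkeeping to tfa.seq2seq.dynamic_decode, while tfa.seq2seq.BeamSearchDecoder reorders beams between steps and therefore reports that it tracks its own finished state.

    import tensorflow as tf
    import tensorflow_addons as tfa

    cell = tf.keras.layers.LSTMCell(16)
    basic = tfa.seq2seq.BasicDecoder(cell, tfa.seq2seq.TrainingSampler())
    beam = tfa.seq2seq.BeamSearchDecoder(cell, beam_width=3)

    # BasicDecoder lets dynamic_decode OR the per-step flags together;
    # BeamSearchDecoder shuffles beams between steps, so it handles them itself.
    print(basic.tracks_own_finished)  # False
    print(beam.tracks_own_finished)   # True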
Methods
finalize

View source: https://github.com/tensorflow/addons/blob/v0.20.0/tensorflow_addons/seq2seq/decoder.py#L100-L101

    finalize(
        outputs, final_state, sequence_lengths
    )
initialize

View source: https://github.com/tensorflow/addons/blob/v0.20.0/tensorflow_addons/seq2seq/decoder.py#L60-L73

    @abc.abstractmethod
    initialize(
        name=None
    )

Called before any decoding iterations.

This method must compute the initial input values and the initial state.

Args

name: Name scope for any created operations.

Returns

(finished, initial_inputs, initial_state): initial values of the finished flags, inputs, and state.
step

View source: https://github.com/tensorflow/addons/blob/v0.20.0/tensorflow_addons/seq2seq/decoder.py#L75-L98

    @abc.abstractmethod
    step(
        time, inputs, state, training=None, name=None
    )

Called per step of decoding (but only once for dynamic decoding).

Args

time: Scalar int32 tensor. Current step number.

inputs: RNN cell input (possibly nested tuple of) tensor[s] for this time step.

state: RNN cell state (possibly nested tuple of) tensor[s] from the previous time step.

training: Python boolean. Indicates whether the layer should behave in training mode or in inference mode. Only relevant when dropout or recurrent_dropout is used.

name: Name scope for any created operations.

Returns

(outputs, next_state, next_inputs, finished): outputs is an object containing the decoder output, next_state is a (structure of) state tensors and TensorArrays, next_inputs is the tensor that should be used as input for the next step, and finished is a boolean tensor telling, for each sequence in the batch, whether the sequence is complete.
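
Putting the interface together, below is a minimal sketch of a custom subclass: a greedy, embedding-fed decoder driven end-to-end by tfa.seq2seq.dynamic_decode. The class name and every constructor argument (cell, embedding, start_tokens, end_token, output_layer) are illustrative assumptions, not part of the tfa API; the library's own concrete implementations are tfa.seq2seq.BasicDecoder and tfa.seq2seq.BeamSearchDecoder, and finalize is left to the base class here.

    import tensorflow as tf
    import tensorflow_addons as tfa

    class GreedyDecoderSketch(tfa.seq2seq.Decoder):
        """Hypothetical greedy decoder used only to illustrate the interface."""

        def __init__(self, cell, embedding, start_tokens, end_token, output_layer):
            self._cell = cell                  # any Keras RNN cell
            self._embedding = embedding        # [vocab, embed_dim] lookup table
            self._start_tokens = start_tokens  # int32 [batch_size]
            self._end_token = end_token        # scalar int32
            self._output_layer = output_layer  # Dense projection to vocabulary logits

        @property
        def batch_size(self):
            return tf.size(self._start_tokens)

        @property
        def output_size(self):
            # Must mirror the structure emitted by `step` at every time step.
            return tfa.seq2seq.BasicDecoderOutput(
                rnn_output=tf.TensorShape([self._output_layer.units]),
                sample_id=tf.TensorShape([]))

        @property
        def output_dtype(self):
            return tfa.seq2seq.BasicDecoderOutput(
                rnn_output=tf.float32, sample_id=tf.int32)

        def initialize(self, name=None):
            # Nothing is finished yet; the first inputs are the embedded start
            # tokens; the initial state is the cell's zero state.
            finished = tf.zeros([self.batch_size], dtype=tf.bool)
            initial_inputs = tf.nn.embedding_lookup(self._embedding, self._start_tokens)
            initial_state = self._cell.get_initial_state(
                batch_size=self.batch_size, dtype=tf.float32)
            return finished, initial_inputs, initial_state

        def step(self, time, inputs, state, training=None, name=None):
            cell_output, next_state = self._cell(inputs, state, training=training)
            logits = self._output_layer(cell_output)
            sample_ids = tf.argmax(logits, axis=-1, output_type=tf.int32)
            outputs = tfa.seq2seq.BasicDecoderOutput(
                rnn_output=logits, sample_id=sample_ids)
            next_inputs = tf.nn.embedding_lookup(self._embedding, sample_ids)
            finished = tf.equal(sample_ids, self._end_token)  # end token reached?
            return outputs, next_state, next_inputs, finished

A hypothetical run with made-up sizes, letting tfa.seq2seq.dynamic_decode call initialize once and step repeatedly:

    vocab, embed_dim, units, batch = 100, 8, 16, 4
    decoder = GreedyDecoderSketch(
        cell=tf.keras.layers.LSTMCell(units),
        embedding=tf.random.normal([vocab, embed_dim]),
        start_tokens=tf.fill([batch], 1),
        end_token=2,
        output_layer=tf.keras.layers.Dense(vocab),
    )
    final_outputs, final_state, final_lengths = tfa.seq2seq.dynamic_decode(
        decoder, maximum_iterations=10)
    print(final_outputs.rnn_output.shape)  # (4, t, 100) with t <= 10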