tf.contrib.seq2seq.dynamic_decode
[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/seq2seq/python/ops/decoder.py#L282-L486)

Perform dynamic decoding with `decoder`.
```python
tf.contrib.seq2seq.dynamic_decode(
    decoder, output_time_major=False, impute_finished=False,
    maximum_iterations=None, parallel_iterations=32, swap_memory=False,
    scope=None, **kwargs
)
```
Calls initialize() once and step() repeatedly on the Decoder object.
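The initialize-once, step-until-finished contract can be sketched in plain Python. This is an illustrative toy, not the real TensorFlow implementation (which runs the loop inside `tf.while_loop` on tensors); the `ToyDecoder` class and `toy_dynamic_decode` function are hypothetical names invented here to show the control flow, including the `maximum_iterations` cap.

```python
# Toy sketch of dynamic_decode's control flow. Illustrative only; the real
# implementation builds a symbolic tf.while_loop over batched tensors.

class ToyDecoder:
    """Hypothetical decoder that emits a fixed token list one step at a time."""

    def __init__(self, tokens):
        self.tokens = tokens

    def initialize(self):
        # Called exactly once. Returns (finished, first_inputs, initial_state).
        return (len(self.tokens) == 0, None, 0)

    def step(self, time, inputs, state):
        # Called once per decoding step.
        # Returns (output, next_state, next_inputs, finished).
        output = self.tokens[time]
        finished = (time + 1) >= len(self.tokens)
        return output, state + 1, None, finished


def toy_dynamic_decode(decoder, maximum_iterations=None):
    """Calls initialize() once, then step() repeatedly until finished
    (or until the optional maximum_iterations cap is reached)."""
    finished, inputs, state = decoder.initialize()
    outputs = []
    time = 0
    while not finished:
        if maximum_iterations is not None and time >= maximum_iterations:
            break
        output, state, inputs, finished = decoder.step(time, inputs, state)
        outputs.append(output)
        time += 1
    # Mirrors the (final_outputs, final_state, final_sequence_lengths) return.
    return outputs, state, time
```

With `maximum_iterations=None`, decoding runs until the decoder reports it is finished; a finite cap truncates the loop early, which matches the documented default behavior.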
| Args | |
|---|---|
| `decoder` | A `Decoder` instance. |
| `output_time_major` | Python boolean. Default: `False` (batch major). If `True`, outputs are returned as time-major tensors (this mode is faster). Otherwise, outputs are returned as batch-major tensors (this adds extra time to the computation). |
| `impute_finished` | Python boolean. If `True`, states for batch entries that are marked as finished are copied through and the corresponding outputs are zeroed out. This causes some slowdown at each time step, but ensures that the final state and outputs have the correct values and that backprop ignores time steps that were marked as finished. |
| `maximum_iterations` | `int32` scalar, the maximum allowed number of decoding steps. Default is `None` (decode until the decoder is fully done). |
| `parallel_iterations` | Argument passed on to `tf.while_loop`. |
| `swap_memory` | Argument passed on to `tf.while_loop`. |
| `scope` | Optional variable scope to use. |
| `**kwargs` | dict, other keyword arguments for `dynamic_decode`. It may contain arguments for `BaseDecoder` to initialize, which takes all tensor inputs during `call()`. |
| Returns |
|---|
| `(final_outputs, final_state, final_sequence_lengths)` |
| Raises | |
|---|---|
| `TypeError` | If `decoder` is not an instance of `Decoder`. |
| `ValueError` | If `maximum_iterations` is provided but is not a scalar. |
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.