tf.contrib.training.NextQueuedSequenceBatch
NextQueuedSequenceBatch stores deferred SequenceQueueingStateSaver data.

View source on GitHub: https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/training/python/training/sequence_queueing_state_saver.py#L358-L611

```python
tf.contrib.training.NextQueuedSequenceBatch(
    state_saver
)
```
This class is instantiated by `SequenceQueueingStateSaver` and is accessible via its `next_batch` property.
Attributes

- `batch_size`: The batch size of the given batch. Usually this is the `batch_size` requested when initializing the SQSS, but if `allow_small_batch=True` it becomes smaller when inputs are exhausted.
- `context`: A dict mapping keys of `input_context` to batched context.
- `insertion_index`: The insertion indices of the examples (when they were first added). These indices start at `-2**63` and increase with every call to the prefetch op. Each whole example gets its own insertion index, which is used to prioritize the example so that its truncated segments appear in adjacent iterations, even if new examples are inserted by the prefetch op between iterations.
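The ordering role of insertion indices can be sketched in plain Python. This is an illustration of the contract only, not the TF implementation; the `assign_insertion_indices` helper, the example keys, and the use of `heapq` are all made up for the sketch:

```python
import heapq

START_INDEX = -2**63  # first insertion index, per the docs above

def assign_insertion_indices(example_keys, start=START_INDEX):
    # Each whole example receives one monotonically increasing index.
    return {key: start + i for i, key in enumerate(example_keys)}

indices = assign_insertion_indices(["ex_a", "ex_b"])

# Prioritizing segments by (insertion_index, segment) keeps an example's
# truncated segments adjacent, even if later examples are prefetched
# in between.
heap = []
for key, idx in indices.items():
    for segment in range(2):  # pretend each example splits into 2 segments
        heapq.heappush(heap, (idx, segment, key))

order = [heapq.heappop(heap) for _ in range(len(heap))]
print([key for _, _, key in order])  # ['ex_a', 'ex_a', 'ex_b', 'ex_b']
```

All of `ex_a`'s segments sort before any of `ex_b`'s because the insertion index is the primary sort key.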
- `key`: The key names of the given truncated unrolled examples. The format of the key is `"%05d_of_%05d:%s" % (sequence, sequence_count, original_key)`, where `original_key` is the unique key read in by the prefetcher.
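The key format can be reproduced in plain Python; the example key `"ex_a"` and the counts below are made up for illustration:

```python
def segment_key(sequence, sequence_count, original_key):
    # Mirrors the documented key format for a truncated segment.
    return "%05d_of_%05d:%s" % (sequence, sequence_count, original_key)

print(segment_key(0, 3, "ex_a"))  # 00000_of_00003:ex_a
print(segment_key(2, 3, "ex_a"))  # 00002_of_00003:ex_a
```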
- `length`: The lengths of the given truncated unrolled examples. For initial iterations, for which `sequence * num_unroll < length`, this number is `num_unroll`. For the remainder, this number is between `0` and `num_unroll`.
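A minimal sketch of how per-segment lengths fall out of this rule, assuming a sequence of `total_length` steps split into `num_unroll`-sized segments (the helper name and numbers are illustrative, not part of the API):

```python
def segment_lengths(total_length, num_unroll):
    # A sequence padded up to a multiple of num_unroll is split into
    # sequence_count segments; each segment covers num_unroll real steps
    # except possibly the last, which holds the remainder (or 0).
    sequence_count = -(-total_length // num_unroll)  # ceiling division
    return [min(num_unroll, max(0, total_length - i * num_unroll))
            for i in range(sequence_count)]

print(segment_lengths(5, 2))  # [2, 2, 1]
print(segment_lengths(4, 2))  # [2, 2]
```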
- `next_key`: The key names of the next (in iteration) truncated unrolled examples. The format of the key is `"%05d_of_%05d:%s" % (sequence + 1, sequence_count, original_key)` if `sequence + 1 < sequence_count`, otherwise `"STOP:%s" % original_key`, where `original_key` is the unique key read in by the prefetcher.
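The next-key rule, including the `STOP` sentinel for the final segment, can likewise be expressed in plain Python (illustrative names only):

```python
def next_segment_key(sequence, sequence_count, original_key):
    # The next key points at the following segment, or at a STOP
    # sentinel once the last segment has been reached.
    if sequence + 1 < sequence_count:
        return "%05d_of_%05d:%s" % (sequence + 1, sequence_count, original_key)
    return "STOP:%s" % original_key

print(next_segment_key(0, 3, "ex_a"))  # 00001_of_00003:ex_a
print(next_segment_key(2, 3, "ex_a"))  # STOP:ex_a
```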
- `sequence`: An int32 vector of length `batch_size`: the sequence index of each entry. When an input is split up, the sequence values `0, 1, ..., sequence_count - 1` are assigned to the splits.
- `sequence_count`: An int32 vector of length `batch_size`: the sequence count of each entry. When an input is split up, the number of splits equals `padded_length / num_unroll`; this is the `sequence_count`.
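Concretely, since `padded_length` rounds the input length up to a multiple of `num_unroll`, the split count works out as a ceiling division. A sketch with made-up numbers (the helper name is not part of the API):

```python
def sequence_count(total_length, num_unroll):
    # padded_length rounds total_length up to a multiple of num_unroll;
    # the number of splits is then padded_length / num_unroll.
    padded_length = -(-total_length // num_unroll) * num_unroll
    return padded_length // num_unroll

print(sequence_count(5, 2))  # 3  (padded_length == 6)
print(sequence_count(4, 2))  # 2  (no padding needed)
```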
- `sequences`: A dict mapping keys of `input_sequences` to split and rebatched data.
- `total_length`: The lengths of the original (non-truncated) unrolled examples.
Methods
save_state
View source on GitHub: https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/training/python/training/sequence_queueing_state_saver.py#L556-L611

```python
save_state(
    state_name, value, name=None
)
```
Returns an op to save the current batch of state `state_name`.
Args

- `state_name`: A string; matches a key provided in `initial_states`.
- `value`: A `Tensor`. Its type must match that of `initial_states[state_name].dtype`. If at input `initial_states[state_name].get_shape() == [d1, d2, ...]`, then the shape of `value` must satisfy `tf.shape(value) == [batch_size, d1, d2, ...]`.
- `name`: A string (optional). The name scope for newly created ops.
Returns

A control flow op that stores the new state of each entry into the state saver. This op must be run for every iteration that accesses data from the state saver (otherwise the state saver will never progress through its states and will run out of capacity).
Raises

- `KeyError`: if `state_name` does not match any of the initial states declared in `initial_states`.
state
View source on GitHub: https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/training/python/training/sequence_queueing_state_saver.py#L527-L554

```python
state(
    state_name
)
```
Returns batched state tensors.
Args

- `state_name`: A string; matches a key provided in `initial_states`.
Returns

A `Tensor`: a batched set of states, either the initial states (if this is the first run of the given example) or the value stored during a previous iteration via the `save_state` control flow. Its type is the same as `initial_states[state_name].dtype`. If at input `initial_states[state_name].get_shape() == [d1, d2, ...]`, then `state(state_name).get_shape() == [batch_size, d1, d2, ...]`.
Raises

- `KeyError`: if `state_name` does not match any of the initial states declared in `initial_states`.
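The contract shared by `state` and `save_state` — the initial state on an example's first iteration, the last saved value afterwards, and a `KeyError` for undeclared state names — can be sketched with a toy plain-Python class. This is not the TF API (the real methods return tensors and ops inside a graph); the class and example keys below are purely illustrative:

```python
class ToyStateSaver:
    """Plain-Python sketch of the state()/save_state() contract."""

    def __init__(self, initial_states):
        self._initial = dict(initial_states)
        self._saved = {}  # (example_key, state_name) -> value

    def state(self, example_key, state_name):
        # Initial state on the first iteration, saved value afterwards.
        if state_name not in self._initial:
            raise KeyError(state_name)
        return self._saved.get((example_key, state_name),
                               self._initial[state_name])

    def save_state(self, example_key, state_name, value):
        # In TF this returns an op that must be run every iteration;
        # here we just record the value directly.
        if state_name not in self._initial:
            raise KeyError(state_name)
        self._saved[(example_key, state_name)] = value

saver = ToyStateSaver({"lstm_state": 0.0})
print(saver.state("ex_a", "lstm_state"))  # 0.0 (first run: initial state)
saver.save_state("ex_a", "lstm_state", 1.5)
print(saver.state("ex_a", "lstm_state"))  # 1.5 (saved value)
print(saver.state("ex_b", "lstm_state"))  # 0.0 (other examples unaffected)
```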
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.