tf.keras.preprocessing.sequence.TimeseriesGenerator
Utility class for generating batches of temporal data.
Inherits From: Sequence
Compat alias for migration (see the [Migration guide](https://www.tensorflow.org/guide/migrate) for more details):

`tf.compat.v1.keras.preprocessing.sequence.TimeseriesGenerator`
```python
tf.keras.preprocessing.sequence.TimeseriesGenerator(
    data,
    targets,
    length,
    sampling_rate=1,
    stride=1,
    start_index=0,
    end_index=None,
    shuffle=False,
    reverse=False,
    batch_size=128
)
```

Deprecated: `tf.keras.preprocessing.sequence.TimeseriesGenerator` does not operate on tensors and is not recommended for new code. Prefer using a `tf.data.Dataset`, which provides a more efficient and flexible mechanism for batching, shuffling, and windowing input. See the [tf.data guide](https://www.tensorflow.org/guide/data) for more details.
This class takes in a sequence of data-points gathered at
equal intervals, along with time series parameters such as
stride, length of history, etc., to produce batches for
training/validation.
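In practice the generator is usually handed directly to `Model.fit`, which accepts `keras.utils.Sequence` objects. A minimal end-to-end sketch (the toy series and the small LSTM model here are illustrative choices, not part of this API):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator

# Toy univariate series: each window of 10 steps is paired with the value
# 10 steps after the start of the window.
series = np.arange(200, dtype="float32").reshape(-1, 1)
data_gen = TimeseriesGenerator(series, series, length=10, batch_size=16)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(16, input_shape=(10, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# keras.utils.Sequence objects can be passed straight to fit().
model.fit(data_gen, epochs=2, verbose=0)
```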
Arguments

| Argument | Description |
|---|---|
| `data` | Indexable generator (such as a list or NumPy array) containing consecutive data points (timesteps). The data should be 2D, and axis 0 is expected to be the time dimension. |
| `targets` | Targets corresponding to timesteps in `data`. It should have the same length as `data`. |
| `length` | Length of the output sequences (in number of timesteps). |
| `sampling_rate` | Period between successive individual timesteps within sequences. For rate `r`, timesteps `data[i]`, `data[i-r]`, ... `data[i - length]` are used to create a sample sequence. |
| `stride` | Period between successive output sequences. For stride `s`, consecutive output samples would be centered around `data[i]`, `data[i+s]`, `data[i+2*s]`, etc. |
| `start_index` | Data points earlier than `start_index` will not be used in the output sequences. This is useful to reserve part of the data for testing or validation. |
| `end_index` | Data points later than `end_index` will not be used in the output sequences. This is useful to reserve part of the data for testing or validation. |
| `shuffle` | Whether to shuffle output samples, or instead draw them in chronological order. |
| `reverse` | Boolean: if `True`, timesteps in each output sample will be in reverse chronological order. |
| `batch_size` | Number of timeseries samples in each batch (except maybe the last one). |

Returns

A `Sequence` instance.
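To make `stride` and `start_index` concrete, here is a small sketch with toy arrays (the array values are arbitrary; the windows shown follow from the argument descriptions above):

```python
import numpy as np
from keras.preprocessing.sequence import TimeseriesGenerator

data = np.array([[i] for i in range(50)])
targets = np.array([[i] for i in range(50)])

# stride=5: consecutive output samples start 5 timesteps apart.
strided_gen = TimeseriesGenerator(data, targets, length=10, stride=5, batch_size=1)
x, y = strided_gen[0]
assert np.array_equal(x, data[0:10].reshape(1, 10, 1))   # window data[0:10]
assert np.array_equal(y, np.array([[10]]))               # target is data[10]
x, y = strided_gen[1]
assert np.array_equal(x, data[5:15].reshape(1, 10, 1))   # next window starts 5 steps later
assert np.array_equal(y, np.array([[15]]))

# start_index=25: nothing before index 25 appears in any window, so the first
# window covers data[25:35] and its target is data[35].
held_out_gen = TimeseriesGenerator(data, targets, length=10, start_index=25, batch_size=1)
x, y = held_out_gen[0]
assert np.array_equal(x, data[25:35].reshape(1, 10, 1))
assert np.array_equal(y, np.array([[35]]))
```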
Examples

```python
from keras.preprocessing.sequence import TimeseriesGenerator
import numpy as np

data = np.array([[i] for i in range(50)])
targets = np.array([[i] for i in range(50)])

data_gen = TimeseriesGenerator(data, targets,
                               length=10, sampling_rate=2,
                               batch_size=2)
assert len(data_gen) == 20

batch_0 = data_gen[0]
x, y = batch_0
assert np.array_equal(x,
                      np.array([[[0], [2], [4], [6], [8]],
                                [[1], [3], [5], [7], [9]]]))
assert np.array_equal(y,
                      np.array([[10], [11]]))
```
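For new code, the deprecation note above recommends `tf.data`. A rough equivalent of the example above can be built with `tf.keras.utils.timeseries_dataset_from_array`; note that it aligns each target with the *start* of its window, so the targets are shifted by `length` here. This is a sketch rather than an exact drop-in replacement:

```python
import numpy as np
import tensorflow as tf

data = np.array([[i] for i in range(50)])
targets = np.array([[i] for i in range(50)])

# The generator above pairs the window data[i:i+10:2] with the target at index
# i + 10, i.e. the target sits `length` steps after the window start. With
# timeseries_dataset_from_array the target is aligned with the window start,
# so the targets are shifted by `length` before being passed in.
dataset = tf.keras.utils.timeseries_dataset_from_array(
    data[:-10],          # windows are drawn from here
    targets[10:],        # target for the window starting at i is targets[i + 10]
    sequence_length=5,   # 5 timesteps per window (length // sampling_rate)
    sampling_rate=2,
    batch_size=2,
)

x, y = next(iter(dataset))
assert np.array_equal(x.numpy(),
                      np.array([[[0], [2], [4], [6], [8]],
                                [[1], [3], [5], [7], [9]]]))
assert np.array_equal(y.numpy(), np.array([[10], [11]]))
# Note: the number of windows produced near the end of the series may differ
# slightly from the generator's, so treat this as an approximation.
```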
Methods

get_config

```python
get_config()
```

Returns the TimeseriesGenerator configuration as a Python dictionary.

Returns

A Python dictionary with the TimeseriesGenerator configuration.
on_epoch_end

```python
on_epoch_end()
```

Method called at the end of every epoch.
to_json

```python
to_json(
    **kwargs
)
```

Returns a JSON string containing the timeseries generator configuration.

Args

| Argument | Description |
|---|---|
| `**kwargs` | Additional keyword arguments to be passed to `json.dumps()`. |

Returns

A JSON string containing the timeseries generator configuration.
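A small sketch of how the two serialization methods can be used, reusing `data_gen` from the Examples section (the `indent` argument is just one example of a keyword forwarded to `json.dumps()`):

```python
import json

config = data_gen.get_config()       # plain Python dict of constructor arguments
print(sorted(config.keys()))

json_string = data_gen.to_json(indent=2)  # **kwargs are forwarded to json.dumps()
print(json_string)

# The JSON round-trips through the standard library as an ordinary document.
assert json.loads(json_string)
```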
__getitem__

```python
__getitem__(
    index
)
```

Gets batch at position `index`.

Args

| Argument | Description |
|---|---|
| `index` | Position of the batch in the Sequence. |

Returns

A batch.
__iter__

```python
__iter__()
```

Creates a generator that iterates over the Sequence.
__len__

```python
__len__()
```

Number of batches in the Sequence.

Returns

The number of batches in the Sequence.
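These three methods are what allow Keras, or a hand-written loop, to consume the generator batch by batch. A short sketch, again reusing `data_gen` from the Examples section:

```python
# Manual iteration over all batches via __len__ and __getitem__.
for i in range(len(data_gen)):
    x, y = data_gen[i]
    assert x.shape == (len(y), 5, 1)  # (batch, length // sampling_rate, features)

# Equivalent iteration using the __iter__ protocol.
for x, y in data_gen:
    pass
```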