tf.contrib.timeseries.CSVReader
Reads from a collection of CSV-formatted files.
tf.contrib.timeseries.CSVReader(
    filenames,
    column_names=(feature_keys.TrainEvalFeatures.TIMES,
                  feature_keys.TrainEvalFeatures.VALUES),
    column_dtypes=None,
    skip_header_lines=None,
    read_num_records_hint=4096
)
Args:

  filenames: A filename or list of filenames to read the time series from.
    Each line must have columns corresponding to column_names.
  column_names: A list indicating names for each feature.
    TrainEvalFeatures.TIMES and TrainEvalFeatures.VALUES are required;
    VALUES may be repeated to indicate a multivariate series.
  column_dtypes: If provided, must be a list with the same length as
    column_names, indicating dtypes for each column. Defaults to tf.int64
    for TrainEvalFeatures.TIMES and tf.float32 for everything else.
  skip_header_lines: Passed on to tf.compat.v1.TextLineReader; skips this
    number of lines at the beginning of each file.
  read_num_records_hint: When not reading a full dataset, indicates the
    number of records to parse/transfer in a single chunk (for efficiency).
    The actual number transferred at one time may be more or less.

Raises:

  ValueError: If required column names are not specified, or if lengths do
    not match.
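The column layout and default dtype handling described above can be approximated in plain Python, without TensorFlow. This sketch assumes a hypothetical CSV with one TIMES column followed by repeated VALUES columns (a bivariate series), one header line skipped as with skip_header_lines=1, and the documented default dtypes (int64 for TIMES, float32 for everything else, modeled here with Python int and float):

```python
import csv
import io

# Hypothetical CSV: "times" column first, then two "values" columns.
raw = io.StringIO(
    "time,temp,pressure\n"  # header row, skipped below
    "0,1.5,2.5\n"
    "1,1.6,2.4\n"
    "2,1.7,2.3\n"
)

skip_header_lines = 1
for _ in range(skip_header_lines):
    next(raw)

times, values = [], []
for row in csv.reader(raw):
    # Default dtypes: tf.int64 for TIMES, tf.float32 for everything else.
    times.append(int(row[0]))
    values.append([float(v) for v in row[1:]])

print(times)   # [0, 1, 2]
print(values)  # [[1.5, 2.5], [1.6, 2.4], [1.7, 2.3]]
```

Passing a column_dtypes list of the same length as column_names would override these defaults per column.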
Methods
check_dataset_size
View source
check_dataset_size(minimum_dataset_size)
When possible, raises an error if the dataset is too small.
This method allows TimeSeriesReaders to raise informative error messages if
the user has selected a window size in their TimeSeriesInputFn which is
larger than the dataset size. However, many TimeSeriesReaders will not have
access to a dataset size, in which case they do not need to override this
method.
Args:

  minimum_dataset_size: The minimum number of records which should be
    contained in the dataset. Readers should attempt to raise an error when
    possible if an epoch of data contains fewer records.
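The intended semantics can be sketched with a stand-in function (this is an illustration of the documented contract, not the TensorFlow implementation; num_records_in_epoch stands for a hypothetical known dataset size):

```python
def check_dataset_size(num_records_in_epoch, minimum_dataset_size):
    """Raise an informative error if an epoch has too few records.

    A reader without access to its dataset size would simply skip this
    check, as the documentation notes.
    """
    if num_records_in_epoch < minimum_dataset_size:
        raise ValueError(
            "Dataset with {} records is smaller than the requested "
            "minimum of {} records.".format(
                num_records_in_epoch, minimum_dataset_size))

check_dataset_size(100, 50)  # large enough: no error
try:
    check_dataset_size(10, 50)  # too small: raises ValueError
except ValueError as e:
    print(e)
```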
read
View source
read()
Reads a chunk of data from the tf.compat.v1.ReaderBase
for later re-chunking.
read_full
View source
read_full()
Reads a full epoch of data into memory.
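The relationship between chunked reads and a full-epoch read can be approximated as follows (a plain-Python sketch, not the TensorFlow queue-based implementation; the chunk size plays the role of read_num_records_hint):

```python
def read_chunks(records, read_num_records_hint=4096):
    """Yield successive chunks of up to read_num_records_hint records,
    analogous to repeated read() calls."""
    chunk = []
    for record in records:
        chunk.append(record)
        if len(chunk) >= read_num_records_hint:
            yield chunk
            chunk = []
    if chunk:  # final, possibly short, chunk
        yield chunk

records = ["{},{}".format(t, t * 0.5) for t in range(10)]
chunks = list(read_chunks(records, read_num_records_hint=4))
print([len(c) for c in chunks])  # [4, 4, 2]

# read_full() corresponds to materializing the whole epoch at once:
full_epoch = [r for c in chunks for r in c]
assert full_epoch == records
```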
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.