tf.contrib.data.shuffle_and_repeat
Shuffles and repeats a Dataset returning a new permutation for each epoch. (deprecated)
tf.contrib.data.shuffle_and_repeat(
buffer_size, count=None, seed=None
)
Warning: This function is deprecated and will be removed in a future version. Use tf.data.experimental.shuffle_and_repeat(...) instead.

dataset.apply(tf.data.experimental.shuffle_and_repeat(buffer_size, count))

is equivalent to

dataset.shuffle(buffer_size, reshuffle_each_iteration=True).repeat(count)

The difference is that the latter dataset is not serializable, so if you need to checkpoint an input pipeline with reshuffling, you must use this implementation.
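A minimal usage sketch follows, assuming TF 1.x graph mode; the dataset contents and parameter values are illustrative and not taken from the original page.

import tensorflow as tf

# A small illustrative source dataset.
dataset = tf.data.Dataset.range(5)

# Shuffle with a 5-element buffer and repeat for 2 epochs, reshuffling
# between epochs, as a single serializable transformation.
dataset = dataset.apply(
    tf.data.experimental.shuffle_and_repeat(buffer_size=5, count=2, seed=42))

iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)
next_element = iterator.get_next()

with tf.compat.v1.Session() as sess:
    try:
        while True:
            # Prints 10 values: two differently shuffled passes over 5 elements.
            print(sess.run(next_element))
    except tf.errors.OutOfRangeError:
        pass

With seed set, the overall sequence is reproducible across runs, while each epoch within a run still receives a different permutation.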
Args

buffer_size: A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching.

count: (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely (see the sketch after this list).

seed: (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.compat.v1.set_random_seed for behavior.
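For illustration only (a sketch not taken from the original page, assuming the unbounded stream is bounded with Dataset.take): when count is left at its default, repetition never ends, so the consumer must limit iteration itself.

import tensorflow as tf

dataset = tf.data.Dataset.range(3)

# count=None (the default) repeats indefinitely, so bound the stream
# explicitly; take(6) yields two reshuffled passes over the 3 elements.
dataset = dataset.apply(
    tf.data.experimental.shuffle_and_repeat(buffer_size=3))
dataset = dataset.take(6)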
Returns

A Dataset transformation function, which can be passed to tf.data.Dataset.apply.