Shuffles and repeats a Dataset, reshuffling with each repetition. (deprecated)
tf.data.experimental.shuffle_and_repeat(
    buffer_size, count=None, seed=None
)
d = tf.data.Dataset.from_tensor_slices([1, 2, 3])
d = d.apply(tf.data.experimental.shuffle_and_repeat(2, count=2))
[elem.numpy() for elem in d]  # doctest: +SKIP
[2, 3, 1, 1, 3, 2]
dataset.apply(
  tf.data.experimental.shuffle_and_repeat(buffer_size, count, seed))
produces the same output as
dataset.shuffle(
  buffer_size, seed=seed, reshuffle_each_iteration=True).repeat(count)
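For example, the following sketch compares the deprecated transformation with its recommended replacement on a small in-memory dataset (the concrete values, buffer size, and seed are illustrative assumptions; the exact output order varies between runs):

```python
import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])

# Deprecated form, applied via Dataset.apply.
deprecated = dataset.apply(
    tf.data.experimental.shuffle_and_repeat(buffer_size=2, count=2, seed=7))

# Recommended equivalent: shuffle (reshuffling each iteration), then repeat.
recommended = dataset.shuffle(
    2, seed=7, reshuffle_each_iteration=True).repeat(2)

print([elem.numpy() for elem in deprecated])   # e.g. [2, 3, 1, 1, 3, 2]
print([elem.numpy() for elem in recommended])  # same distribution of orderings
```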
In each repetition, this dataset fills a buffer with buffer_size elements,
then randomly samples elements from this buffer, replacing the selected
elements with new elements. For perfect shuffling, set the buffer size equal
to the full size of the dataset.
For instance, if your dataset contains 10,000 elements but buffer_size is
set to 1,000, then shuffle will initially select a random element from
only the first 1,000 elements in the buffer. Once an element is selected,
its space in the buffer is replaced by the next (i.e., 1,001st) element,
maintaining the 1,000-element buffer.
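To make the buffer-based sampling concrete, here is a minimal pure-Python sketch of the strategy described above (an illustration only, not the actual tf.data implementation; the helper name and parameters are invented for this example):

```python
import random

def buffered_shuffle(elements, buffer_size, seed=None):
    """Yield elements in shuffled order using a fixed-size buffer."""
    rng = random.Random(seed)
    iterator = iter(elements)
    buffer = []

    # Fill the buffer with up to buffer_size elements.
    for elem in iterator:
        buffer.append(elem)
        if len(buffer) == buffer_size:
            break

    # Emit a random buffered element, then backfill its slot with the
    # next incoming element, keeping the buffer at buffer_size.
    for elem in iterator:
        idx = rng.randrange(len(buffer))
        yield buffer[idx]
        buffer[idx] = elem

    # Drain whatever remains once the input is exhausted.
    rng.shuffle(buffer)
    yield from buffer

# With buffer_size smaller than the dataset, the first output can only
# come from the first buffer_size elements.
print(list(buffered_shuffle(range(10), buffer_size=3, seed=0)))
```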
| Args | |
|---|---|
| buffer_size | A tf.int64 scalar tf.Tensor, representing the maximum number of elements that will be buffered when prefetching. |
| count | (Optional.) A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset to be repeated indefinitely. |
| seed | (Optional.) A tf.int64 scalar tf.Tensor, representing the random seed that will be used to create the distribution. See tf.random.set_seed for behavior. |

| Returns | |
|---|---|
| A Dataset transformation function, which can be passed to tf.data.Dataset.apply. |
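As a usage note, setting buffer_size to the full dataset size gives a uniform shuffle, and a fixed seed controls the pseudorandom sequence (see tf.random.set_seed for how it interacts with the global seed). A small sketch under those assumptions (the dataset, buffer size, count, and seed values here are illustrative):

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(5)

# Buffer covering the whole dataset -> uniform shuffle; count=2 repeats the
# stream twice, reshuffling between repetitions.
transformed = dataset.apply(
    tf.data.experimental.shuffle_and_repeat(buffer_size=5, count=2, seed=42))

print([elem.numpy() for elem in transformed])  # 10 elements: each of 0..4 twice
```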