tf.data.Options(), dataset options. Those options are added to
the default values defined in tfrecord_reader.py.
Note that when shuffle_files is True and no seed is defined,
experimental_deterministic will be set to False internally,
unless it is defined here.
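A minimal sketch of pinning determinism explicitly rather than relying on the shuffle_files auto-toggle described above. tf.data.Options is the real TF API; note that experimental_deterministic is the attribute name used in this doc (newer TF versions also expose it as options.deterministic). How the options object reaches the reader (e.g. through a read config argument) depends on the surrounding API and is assumed here.

```python
import tensorflow as tf

# Build options that force deterministic element order, overriding the
# internal default this doc describes (which flips determinism off when
# shuffle_files=True and no seed is set).
options = tf.data.Options()
options.experimental_deterministic = True  # keep element order reproducible

# Apply to any tf.data pipeline; a read config would merge these with
# the defaults from tfrecord_reader.py.
ds = tf.data.Dataset.range(5).with_options(options)
```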
If True (default) and the dataset satisfies the right
conditions (dataset small enough, files not shuffled, ...), the dataset
will be cached during the first iteration (through ds = ds.cache()).
Each worker will always read the same subset of files. shuffle_files
only shuffles files within each worker.
If info.splits[split].num_shards < input_context.num_input_pipelines,
an error will be raised, as some workers would be empty.
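The per-worker sharding rule above can be sketched as follows. The function name and the round-robin assignment are illustrative assumptions, not the library's internals; only the invariant (each worker gets a fixed, non-empty subset of shards) comes from the description.

```python
from typing import List


def shards_for_worker(shards: List[str],
                      num_input_pipelines: int,
                      input_pipeline_id: int) -> List[str]:
    """Hypothetical shard assignment: worker i reads every
    num_input_pipelines-th shard, so each worker always sees the
    same subset of files across epochs."""
    if len(shards) < num_input_pipelines:
        # Mirrors the documented check: fewer shards than workers
        # would leave some workers with no data.
        raise ValueError(
            f"Only {len(shards)} shards for {num_input_pipelines} "
            "workers: some workers would be empty.")
    return shards[input_pipeline_id::num_input_pipelines]
```

For example, with four shards and two workers, worker 0 reads shards 0 and 2 while worker 1 reads shards 1 and 3.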
Function with signature
List[FileDict] -> List[FileDict], which takes the list of
dict(file: str, take: int, skip: int) and returns a modified version
to read. This can be used to sort/shuffle the shards so they are read in
a custom order, instead of relying on shuffle_files=True.
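A hedged example of such a function, matching the List[FileDict] -> List[FileDict] signature: it reorders the shard instructions so the largest reads (by take) come first. The dict keys (file, take, skip) come from the description above; the sort criterion itself is just one plausible choice.

```python
from typing import Dict, List

FileDict = Dict  # dict(file: str, take: int, skip: int), per the doc above


def sort_by_take(file_instructions: List[FileDict]) -> List[FileDict]:
    """Read the shards with the most records first, as a custom
    alternative to shuffle_files=True."""
    return sorted(file_instructions, key=lambda f: f["take"], reverse=True)
```

Passing such a function as the interleave sort function reorders only which shards are opened in which order; it does not shuffle records within a shard.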