Reverb trajectory sequence observer.
Inherits From: ReverbAddTrajectoryObserver
tf_agents.replay_buffers.reverb_utils.ReverbTrajectorySequenceObserver(
    py_client: tf_agents.typing.types.ReverbClient,
    table_name: Union[Text, Sequence[Text]],
    sequence_length: int,
    stride_length: int = 1,
    priority: Union[float, int] = 1,
    pad_end_of_episodes: bool = False,
    tile_end_of_episodes: bool = False
)
This observer is equivalent to ReverbAddTrajectoryObserver, except that sequences are not cut when a boundary trajectory is seen. This allows sequences to be sampled with episode boundaries anywhere in the sequence rather than only at the end.
Consider using this observer when you want to create training experience that can encompass any subsequence of the observed trajectories.
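To make the windowing behavior concrete, here is a minimal pure-Python sketch of the assumed item-cutting semantics: a sliding window of sequence_length steps advancing by stride_length, which is never reset at an episode boundary. The function name and step representation are hypothetical, for illustration only; the real observer writes tf_agents Trajectory steps to a Reverb table.

```python
def sequence_windows(steps, sequence_length, stride_length=1):
    """Sketch (assumed semantics): cut overlapping windows from a step
    stream.  Boundary steps are treated like any other step, so a window
    may contain a boundary at any position."""
    return [
        tuple(steps[i:i + sequence_length])
        for i in range(0, len(steps) - sequence_length + 1, stride_length)
    ]

# "B" stands in for a boundary trajectory; note it appears mid-window.
windows = sequence_windows(["s0", "s1", "B", "s2", "s3"],
                           sequence_length=3, stride_length=1)
# -> [("s0", "s1", "B"), ("s1", "B", "s2"), ("B", "s2", "s3")]
```

With ReverbAddTrajectoryObserver, by contrast, the "B" step would terminate the current sequence, so no item could span it.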
| Raises | |
| --- | --- |
| ValueError | If tile_end_of_episodes is set without pad_end_of_episodes. |
| Attributes | |
| --- | --- |
| py_client | |
Methods
close
close() -> None
Closes the writer of the observer.
flush
flush()
Ensures that items are pushed to the service.
get_table_signature
get_table_signature()
open
open() -> None
Opens the writer of the observer.
reset
reset(
    write_cached_steps: bool = True
) -> None
Resets the state of the observer.
| Args | |
| --- | --- |
| write_cached_steps | Boolean flag indicating whether to write the cached trajectory. When True, the function attempts to write the cached data before resetting (optionally with padding); otherwise, the cached data is dropped. |
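The interaction between write_cached_steps and pad_end_of_episodes can be sketched in plain Python. This is a hypothetical stand-in for the reset-time behavior, not the real implementation: the function name, pad_value argument, and return convention are all illustrative assumptions.

```python
def finalize_cache(cached_steps, sequence_length, write_cached_steps=True,
                   pad_end_of_episodes=False, pad_value=None):
    """Sketch (assumed semantics) of what happens to cached steps on reset().

    If write_cached_steps is False the cache is simply dropped.  Otherwise
    a partial sequence shorter than sequence_length can only be written
    when pad_end_of_episodes allows padding it out to full length.
    """
    if not write_cached_steps:
        return None  # cached data is dropped
    if len(cached_steps) < sequence_length:
        if not pad_end_of_episodes:
            return None  # too short to form an item, padding disabled
        cached_steps = cached_steps + [pad_value] * (
            sequence_length - len(cached_steps))
    return tuple(cached_steps)
```

For example, with sequence_length=4 and two cached steps, enabling padding yields a padded item, while disabling it drops the partial sequence.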
__call__
__call__(
    trajectory: tf_agents.trajectories.Trajectory
) -> None
Writes the trajectory into the underlying replay buffer.
The trajectory may also be passed in flattened form. No batch dimension is allowed.
| Args | |
| --- | --- |
| trajectory | The trajectory to be written, either a (possibly nested) Trajectory object or a flattened version of one. It is assumed to have no batch dimension. |
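The __call__ contract (one unbatched step per invocation, items emitted once enough steps have accumulated) can be mimicked with a tiny stand-in class. This is a sketch under assumed semantics, not the real API: the class, its items attribute, and the string "steps" are hypothetical; the real observer pushes items to a Reverb table via its py_client.

```python
class SequenceObserverSketch:
    """Hypothetical stand-in illustrating the per-step __call__ contract:
    each call receives one unbatched trajectory step, and a new item is
    cut whenever a window of sequence_length steps aligned to
    stride_length completes."""

    def __init__(self, sequence_length, stride_length=1):
        self._seq_len = sequence_length
        self._stride = stride_length
        self._steps = []
        self.items = []  # stand-in for items written to the Reverb table

    def __call__(self, trajectory):
        self._steps.append(trajectory)
        start = len(self._steps) - self._seq_len
        if start >= 0 and start % self._stride == 0:
            self.items.append(tuple(self._steps[start:]))

# Typical usage mirrors a driver loop: call the observer once per step.
observer = SequenceObserverSketch(sequence_length=2)
for step in ["t0", "t1", "t2"]:
    observer(step)
# -> observer.items == [("t0", "t1"), ("t1", "t2")]
```

In real code the observer is passed in a driver's observers list, and each call writes into the underlying replay buffer instead of a Python list.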