Processes the experience and prepares it for the agent's network.

First, the reward, the action, and the observation are flattened so that they have a single batch dimension. Then, if the experience includes chosen action features in the policy info, they are copied in place of the per-arm observation.
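The two steps above can be sketched as follows. This is an illustrative NumPy mock-up, not the library's implementation: the function name, the `'per_arm'` observation key, and the argument layout are assumptions, and the real code operates on nested tensor structures rather than plain dicts.

```python
import numpy as np

def process_experience(reward, action, observation, chosen_arm_features=None):
    """Hypothetical sketch of experience preprocessing for a neural bandit agent.

    Inputs have leading [batch, time] dimensions; outputs have one batch dim.
    """
    # Step 1: flatten the leading [batch, time] dims into one batch dimension.
    reward = reward.reshape(-1)
    action = action.reshape(-1)
    observation = {
        key: value.reshape((-1,) + value.shape[2:])
        for key, value in observation.items()
    }
    # Step 2: if the policy info carried the chosen arm's features, substitute
    # them for the full per-arm observation ('per_arm' is an assumed key).
    if chosen_arm_features is not None:
        observation['per_arm'] = chosen_arm_features.reshape(
            (-1,) + chosen_arm_features.shape[2:])
    return observation, action, reward
```

After this step, every tensor shares the same single batch dimension, so the agent's train function can treat each (observation, action, reward) triple as one independent sample.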

Args:
  experience: The experience coming from the replay buffer.
  accepts_per_arm_features: Whether the agent accepts per-arm features.
  training_data_spec: The data spec describing what the agent expects.

Returns:
  A tuple of (observation, action, reward) tensors to be consumed by the
  train function of the neural agent.