Processes the experience and prepares it for the agent's network.

First, the reward, the action, and the observation are flattened to a single batch dimension. Then the action mask, if present, is removed. Finally, if the experience carries chosen action features in its policy info, they are copied in place of the per-arm observation.
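The three steps above can be sketched as follows. This is a hedged, NumPy-based illustration, not the library's actual implementation: the function name `process_experience`, the `splitter` and `chosen_action_features` parameters, and the dict-based observation layout are all assumptions made for the example.

```python
import numpy as np

def process_experience(reward, action, observation,
                       splitter=None, chosen_action_features=None):
    # Hypothetical sketch; names and signature are illustrative only.
    def flatten(x):
        # Collapse the leading [batch, time, ...] dims into one batch dim.
        if isinstance(x, dict):
            return {k: flatten(v) for k, v in x.items()}
        x = np.asarray(x)
        return x.reshape((-1,) + x.shape[2:])

    # Step 1: flatten reward, action, and observation.
    reward = flatten(reward)
    action = flatten(action)
    observation = flatten(observation)

    # Step 2: strip the action mask, keeping only the observation part.
    if splitter is not None:
        observation, _ = splitter(observation)

    # Step 3: copy the chosen action features from the policy info
    # in place of the per-arm observation.
    if chosen_action_features is not None:
        observation = chosen_action_features

    return reward, action, observation
```

A flattened `[batch, time, ...]` tensor of shape `(2, 3, 4)` becomes `(6, 4)`, so the agent's train function sees a single batch dimension.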

experience: The experience coming from the replay buffer.
observation_and_action_constraint_splitter: If the agent accepts action masks, this function splits the mask from the observation.
accepts_per_arm_features: Whether the agent accepts per-arm features.
training_data_spec: The data spec describing what the agent expects.

A tuple of (reward, action, observation) tensors to be consumed by the neural agent's train function.
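For context, a splitter of the kind named above might look like the following. This is a hypothetical sketch: the dict keys 'observation' and 'mask' are assumptions, since the actual observation layout depends on the environment.

```python
import numpy as np

# Hypothetical layout: the observation is a dict carrying the network
# input under 'observation' and a boolean action mask under 'mask'.
def observation_and_action_constraint_splitter(obs):
    # Returns (network_input, mask) as expected by mask-aware agents.
    return obs['observation'], obs['mask']

obs = {'observation': np.zeros((4, 7)),
       'mask': np.array([[1, 0, 1]] * 4)}
network_input, mask = observation_and_action_constraint_splitter(obs)
```

Here `network_input` keeps shape `(4, 7)` while the `(4, 3)` mask is handed to the agent separately to constrain action selection.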