bridge

  • Description:

WidowX interacting with toy kitchens

Split     Examples
'test'    3,475
'train'   25,460
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': FeaturesDict({
            'open_gripper': bool,
            'rotation_delta': Tensor(shape=(3,), dtype=float32),
            'terminate_episode': float32,
            'world_vector': Tensor(shape=(3,), dtype=float32),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': FeaturesDict({
            'image': Image(shape=(480, 640, 3), dtype=uint8),
            'natural_language_embedding': Tensor(shape=(512,), dtype=float32),
            'natural_language_instruction': string,
            'state': Tensor(shape=(7,), dtype=float32),
        }),
        'reward': Scalar(shape=(), dtype=float32),
    }),
})
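
The nested structure above maps directly onto how episodes are consumed with the TFDS API. Below is a minimal loading sketch, assuming the 'bridge' builder is available through the standard tfds.load registry; depending on the setup, a data_dir pointing at a prebuilt copy of the dataset may also be required.

import tensorflow_datasets as tfds

# Minimal sketch, assuming the 'bridge' builder is registered with TFDS.
ds = tfds.load('bridge', split='train')

for episode in ds.take(1):
    # 'steps' is itself a tf.data.Dataset of per-timestep dictionaries
    # matching the feature structure shown above.
    for step in episode['steps']:
        image = step['observation']['image']                   # (480, 640, 3) uint8
        instruction = step['observation']['natural_language_instruction']
        world_vector = step['action']['world_vector']          # (3,) float32
        print(instruction.numpy().decode(), image.shape, world_vector.numpy())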
  • Feature documentation:
Feature                                          Class         Shape          Dtype    Description
                                                 FeaturesDict
steps                                            Dataset
steps/action                                     FeaturesDict
steps/action/open_gripper                        Tensor                       bool
steps/action/rotation_delta                      Tensor        (3,)           float32
steps/action/terminate_episode                   Tensor                       float32
steps/action/world_vector                        Tensor        (3,)           float32
steps/is_first                                   Tensor                       bool
steps/is_last                                    Tensor                       bool
steps/is_terminal                                Tensor                       bool
steps/observation                                FeaturesDict
steps/observation/image                          Image         (480, 640, 3)  uint8
steps/observation/natural_language_embedding     Tensor        (512,)         float32
steps/observation/natural_language_instruction   Tensor                       string
steps/observation/state                          Tensor        (7,)           float32
steps/reward                                     Scalar                       float32
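
For policy training, the per-step action FeaturesDict is often flattened into a single continuous vector. The helper below is an illustrative sketch, not part of the dataset API; the concatenation order (world_vector, rotation_delta, open_gripper) is an assumption and should be matched to whatever convention the downstream model expects.

import numpy as np

def flatten_action(action):
    """Hypothetical helper: pack the action FeaturesDict into a 7-D vector.

    The ordering (world_vector, rotation_delta, open_gripper) is an
    assumption, not something specified by the dataset itself. Assumes
    eager-mode values that are NumPy-convertible.
    """
    return np.concatenate([
        np.asarray(action['world_vector'], dtype=np.float32),    # (3,) xyz delta
        np.asarray(action['rotation_delta'], dtype=np.float32),  # (3,) rotation delta
        np.asarray([float(action['open_gripper'])], dtype=np.float32),  # (1,) gripper
    ])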
  • Citation:
@inproceedings{walke2023bridgedata,
    title={BridgeData V2: A Dataset for Robot Learning at Scale},
    author={Walke, Homer and Black, Kevin and Lee, Abraham and Kim, Moo Jin and Du, Max and Zheng, Chongyi and Zhao, Tony and Hansen-Estruch, Philippe and Vuong, Quan and He, Andre and Myers, Vivek and Fang, Kuan and Finn, Chelsea and Levine, Sergey},
    booktitle={Conference on Robot Learning (CoRL)},
    year={2023}
}