# berkeley_rpt_converted_externally_to_rlds
- **Description**:

Franka performing tabletop pick-and-place tasks.

- **Homepage**:
  <https://arxiv.org/abs/2306.10007>

- **Source code**:
  [`tfds.robotics.rtx.BerkeleyRptConvertedExternallyToRlds`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)

- **Versions**:

  - **`0.1.0`** (default): Initial release.

- **Download size**: `Unknown size`

- **Dataset size**: `40.64 GiB`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  No
- **Splits**:

| Split     | Examples |
|-----------|----------|
| `'train'` | 908      |
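The dataset follows the RLDS episode format and loads through the standard TFDS API. A minimal sketch, assuming the data is read from the public GCS bucket used for related RLDS robotics datasets (the `data_dir` below is an assumption; point it at wherever your copy lives):

    import tensorflow_datasets as tfds

    # Load the single 'train' split (908 episodes, ~40.64 GiB).
    # The data_dir is an assumption based on where related RLDS
    # datasets are hosted; replace it with a local path if you have
    # downloaded the data yourself.
    ds = tfds.load(
        'berkeley_rpt_converted_externally_to_rlds',
        data_dir='gs://gresearch/robotics',
        split='train',
    )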
- **Feature structure**:

    FeaturesDict({
        'episode_metadata': FeaturesDict({
            'file_path': Text(shape=(), dtype=string),
        }),
        'steps': Dataset({
            'action': Tensor(shape=(8,), dtype=float32, description=Robot action, consists of [7 delta joint pos, 1x gripper binary state].),
            'discount': Scalar(shape=(), dtype=float32, description=Discount if provided, default to 1.),
            'is_first': bool,
            'is_last': bool,
            'is_terminal': bool,
            'language_embedding': Tensor(shape=(512,), dtype=float32, description=Kona language embedding. See https://tfhub.dev/google/universal-sentence-encoder-large/5),
            'language_instruction': Text(shape=(), dtype=string),
            'observation': FeaturesDict({
                'gripper': Scalar(shape=(), dtype=bool, description=Binary gripper state (1 - closed, 0 - open)),
                'hand_image': Image(shape=(480, 640, 3), dtype=uint8, description=Hand camera RGB observation.),
                'joint_pos': Tensor(shape=(7,), dtype=float32, description=xArm joint positions (7 DoF).),
            }),
            'reward': Scalar(shape=(), dtype=float32, description=Reward if provided, 1 on final step for demos.),
        }),
    })
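Each example is one episode, and `steps` is itself a nested `tf.data.Dataset` of per-timestep features. A minimal sketch of walking the structure above, with field names taken verbatim from the feature spec (`ds` is the split loaded earlier):

    # Iterate one episode and unpack the nested step features.
    for episode in ds.take(1):
        print('source file:', episode['episode_metadata']['file_path'].numpy())
        for step in episode['steps']:
            action = step['action']                          # (8,) float32
            joint_pos = step['observation']['joint_pos']     # (7,) float32
            hand_image = step['observation']['hand_image']   # (480, 640, 3) uint8
            gripper_closed = step['observation']['gripper']  # scalar bool
            if step['is_last']:
                # Demos carry reward 1 on the final step.
                print('final reward:', step['reward'].numpy())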
- **Feature documentation**:

| Feature                      | Class        | Shape         | Dtype   | Description                                                                                |
|------------------------------|--------------|---------------|---------|--------------------------------------------------------------------------------------------|
|                              | FeaturesDict |               |         |                                                                                            |
| episode_metadata             | FeaturesDict |               |         |                                                                                            |
| episode_metadata/file_path   | Text         |               | string  | Path to the original data file.                                                            |
| steps                        | Dataset      |               |         |                                                                                            |
| steps/action                 | Tensor       | (8,)          | float32 | Robot action, consists of [7 delta joint pos, 1x gripper binary state].                    |
| steps/discount               | Scalar       |               | float32 | Discount if provided, default to 1.                                                        |
| steps/is_first               | Tensor       |               | bool    |                                                                                            |
| steps/is_last                | Tensor       |               | bool    |                                                                                            |
| steps/is_terminal            | Tensor       |               | bool    |                                                                                            |
| steps/language_embedding     | Tensor       | (512,)        | float32 | Kona language embedding. See <https://tfhub.dev/google/universal-sentence-encoder-large/5> |
| steps/language_instruction   | Text         |               | string  | Language instruction.                                                                      |
| steps/observation            | FeaturesDict |               |         |                                                                                            |
| steps/observation/gripper    | Scalar       |               | bool    | Binary gripper state (1 - closed, 0 - open).                                               |
| steps/observation/hand_image | Image        | (480, 640, 3) | uint8   | Hand camera RGB observation.                                                               |
| steps/observation/joint_pos  | Tensor       | (7,)          | float32 | xArm joint positions (7 DoF).                                                              |
| steps/reward                 | Scalar       |               | float32 | Reward if provided, 1 on final step for demos.                                             |
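The stored `language_embedding` is documented above as the 512-d output of the Universal Sentence Encoder at the linked TF Hub URL, so instructions can in principle be re-embedded for retrieval or language conditioning. A hedged sketch (assumes `tensorflow_hub` is installed and the TF Hub URL is reachable; the instruction text is hypothetical):

    import tensorflow_hub as hub

    # Encoder referenced in the language_embedding description above.
    encoder = hub.load(
        'https://tfhub.dev/google/universal-sentence-encoder-large/5')

    # Hypothetical instruction for illustration; real instructions come
    # from step['language_instruction'] in the dataset.
    embeddings = encoder(['pick up the block'])  # shape (1, 512), float32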
- **Supervised keys** (See
  [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
  `None`

- **Figure**
  ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)):
  Not supported.

- **Citation**:

    @article{Radosavovic2023,
      title={Robot Learning with Sensorimotor Pre-training},
      author={Ilija Radosavovic and Baifeng Shi and Letian Fu and Ken Goldberg and Trevor Darrell and Jitendra Malik},
      year={2023},
      journal={arXiv:2306.10007}
    }