# robo_set
- **Description**:

Real dataset of a single robot arm demonstrating 12 non-trivial manipulation
skills across 38 tasks, 7500 trajectories.

- **Homepage**: <https://robopen.github.io/>

- **Source code**:
  [`tfds.robotics.rtx.RoboSet`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)

- **Versions**:

  - **`0.1.0`** (default): Initial release.

- **Download size**: `Unknown size`

- **Dataset size**: `179.42 GiB`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  No

- **Splits**:
| Split     | Examples |
|-----------|----------|
| `'train'` | 18,250   |
- **Feature structure**:

```
FeaturesDict({
    'episode_metadata': FeaturesDict({
        'file_path': string,
        'trial_id': string,
    }),
    'steps': Dataset({
        'action': Tensor(shape=(8,), dtype=float32),
        'discount': Scalar(shape=(), dtype=float32),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'language_instruction': string,
        'observation': FeaturesDict({
            'image_left': Image(shape=(240, 424, 3), dtype=uint8),
            'image_right': Image(shape=(240, 424, 3), dtype=uint8),
            'image_top': Image(shape=(240, 424, 3), dtype=uint8),
            'image_wrist': Image(shape=(240, 424, 3), dtype=uint8),
            'state': Tensor(shape=(8,), dtype=float32),
            'state_velocity': Tensor(shape=(8,), dtype=float32),
        }),
        'reward': Scalar(shape=(), dtype=float32),
    }),
})
```
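To make the nested structure concrete, here is a minimal pure-Python sketch (no TensorFlow dependency). The `STEP_SPEC` dict is hand-transcribed from the per-step shapes above, and the `Fake` array class and synthetic step are illustrative stand-ins, not part of the TFDS API.

```python
# Per-step spec (shape, dtype name) transcribed from the feature structure
# above. This is an illustrative dict, not a TFDS object.
STEP_SPEC = {
    "action": ((8,), "float32"),
    "discount": ((), "float32"),
    "reward": ((), "float32"),
    "observation/image_left": ((240, 424, 3), "uint8"),
    "observation/image_right": ((240, 424, 3), "uint8"),
    "observation/image_top": ((240, 424, 3), "uint8"),
    "observation/image_wrist": ((240, 424, 3), "uint8"),
    "observation/state": ((8,), "float32"),
    "observation/state_velocity": ((8,), "float32"),
}

def flatten(step, prefix=""):
    """Flatten a nested step dict into 'observation/state'-style keys."""
    flat = {}
    for key, value in step.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=path + "/"))
        else:
            flat[path] = value
    return flat

def check_step(step):
    """Return the spec keys whose shape matches the (nested) step dict.

    Values can be anything exposing a `.shape` tuple (e.g. numpy arrays);
    here a tiny stand-in class fakes them.
    """
    flat = flatten(step)
    return sorted(k for k, (shape, _) in STEP_SPEC.items()
                  if k in flat and tuple(flat[k].shape) == shape)

class Fake:
    def __init__(self, shape):
        self.shape = shape

synthetic_step = {
    "action": Fake((8,)),
    "observation": {"state": Fake((8,)), "image_top": Fake((240, 424, 3))},
}
print(check_step(synthetic_step))
# → ['action', 'observation/image_top', 'observation/state']
```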
- **Feature documentation**:

| Feature                          | Class        | Shape         | Dtype   | Description |
|----------------------------------|--------------|---------------|---------|-------------|
|                                  | FeaturesDict |               |         |             |
| episode_metadata                 | FeaturesDict |               |         |             |
| episode_metadata/file_path       | Tensor       |               | string  |             |
| episode_metadata/trial_id        | Tensor       |               | string  |             |
| steps                            | Dataset      |               |         |             |
| steps/action                     | Tensor       | (8,)          | float32 |             |
| steps/discount                   | Scalar       |               | float32 |             |
| steps/is_first                   | Tensor       |               | bool    |             |
| steps/is_last                    | Tensor       |               | bool    |             |
| steps/is_terminal                | Tensor       |               | bool    |             |
| steps/language_instruction       | Tensor       |               | string  |             |
| steps/observation                | FeaturesDict |               |         |             |
| steps/observation/image_left     | Image        | (240, 424, 3) | uint8   |             |
| steps/observation/image_right    | Image        | (240, 424, 3) | uint8   |             |
| steps/observation/image_top      | Image        | (240, 424, 3) | uint8   |             |
| steps/observation/image_wrist    | Image        | (240, 424, 3) | uint8   |             |
| steps/observation/state          | Tensor       | (8,)          | float32 |             |
| steps/observation/state_velocity | Tensor       | (8,)          | float32 |             |
| steps/reward                     | Scalar       |               | float32 |             |

- **Supervised keys** (See
  [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
  `None`

- **Figure**
  ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)):
  Not supported.

- **Examples**
  ([tfds.as_dataframe](https://www.tensorflow.org/datasets/api_docs/python/tfds/as_dataframe))
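As a usage sketch: after loading episodes (e.g. via `tfds.load('robo_set')` and `tfds.as_numpy`, which is assumed but not required here), each episode's `steps` sub-dataset can be reduced to per-trajectory quantities. The helper below is a minimal, dependency-free sketch that operates on plain lists of step dicts; the three-step synthetic episode is illustrative and populates only the scalar per-step fields documented above.

```python
def episode_summary(steps):
    """Reduce a list of step dicts (as yielded from the `steps`
    sub-dataset) to trajectory length, undiscounted return, and the
    index of the terminal step (or None if the episode never terminates)."""
    terminal = None
    ret = 0.0
    for i, step in enumerate(steps):
        ret += step["reward"]
        if step["is_terminal"]:
            terminal = i
    return {"length": len(steps), "return": ret, "terminal_index": terminal}

# Synthetic 3-step episode mirroring the scalar per-step fields above
# (images, state, and language_instruction omitted for brevity).
steps = [
    {"reward": 0.0, "is_first": True,  "is_last": False, "is_terminal": False},
    {"reward": 0.0, "is_first": False, "is_last": False, "is_terminal": False},
    {"reward": 1.0, "is_first": False, "is_last": True,  "is_terminal": True},
]
print(episode_summary(steps))
# → {'length': 3, 'return': 1.0, 'terminal_index': 2}
```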
- **Citation**:

```
@misc{bharadhwaj2023roboagent,
  title={RoboAgent: Generalization and Efficiency in Robot Manipulation via Semantic Augmentations and Action Chunking},
  author={Homanga Bharadhwaj and Jay Vakil and Mohit Sharma and Abhinav Gupta and Shubham Tulsiani and Vikash Kumar},
  year={2023},
  eprint={2309.01918},
  archivePrefix={arXiv},
  primaryClass={cs.RO}
}
```
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-12-11 UTC.