# stanford_hydra_dataset_converted_externally_to_rlds
- **Description**:

Franka solving long-horizon tasks

- **Homepage**:
  <https://sites.google.com/view/hydra-il-2023>

- **Source code**:
  [`tfds.robotics.rtx.StanfordHydraDatasetConvertedExternallyToRlds`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)

- **Versions**:

  - **`0.1.0`** (default): Initial release.

- **Download size**: `Unknown size`

- **Dataset size**: `72.48 GiB`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  No

- **Splits**:
| Split     | Examples |
|-----------|----------|
| `'train'` | 570      |
- **Feature structure**:

    FeaturesDict({
        'episode_metadata': FeaturesDict({
            'file_path': Text(shape=(), dtype=string),
        }),
        'steps': Dataset({
            'action': Tensor(shape=(7,), dtype=float32, description=Robot action, consists of [3x EEF positional delta, 3x EEF orientation delta in euler angle, 1x close gripper].),
            'discount': Scalar(shape=(), dtype=float32, description=Discount if provided, default to 1.),
            'is_dense': Scalar(shape=(), dtype=bool, description=True if state is a waypoint (010) or in dense mode (x111).),
            'is_first': bool,
            'is_last': bool,
            'is_terminal': bool,
            'language_embedding': Tensor(shape=(512,), dtype=float32, description=Kona language embedding. See https://tfhub.dev/google/universal-sentence-encoder-large/5),
            'language_instruction': Text(shape=(), dtype=string),
            'observation': FeaturesDict({
                'image': Image(shape=(240, 320, 3), dtype=uint8, description=Main camera RGB observation.),
                'state': Tensor(shape=(27,), dtype=float32, description=Robot state, consists of [3x EEF position, 4x EEF orientation in quaternion, 3x EEF orientation in euler angle, 7x robot joint angles, 7x robot joint velocities, 3x gripper state].),
                'wrist_image': Image(shape=(240, 320, 3), dtype=uint8, description=Wrist camera RGB observation.),
            }),
            'reward': Scalar(shape=(), dtype=float32, description=Reward if provided, 1 on final step for demos.),
        }),
    })
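The structure above follows the RLDS convention: each example is a full episode whose `steps` field is itself a nested `tf.data.Dataset`. A minimal sketch of loading and iterating it, assuming `tensorflow_datasets` is installed and the underlying data is available to the builder (this is illustrative usage, not part of the catalog entry):

    import tensorflow_datasets as tfds

    # Load the single 'train' split (570 episodes). For externally converted
    # RLDS datasets, the source data must be available to the builder.
    ds = tfds.load('stanford_hydra_dataset_converted_externally_to_rlds',
                   split='train')

    for episode in ds.take(1):
        print(episode['episode_metadata']['file_path'].numpy())
        # 'steps' is a nested tf.data.Dataset of per-timestep dicts.
        for step in episode['steps']:
            action = step['action']               # (7,)  float32
            state = step['observation']['state']  # (27,) float32
            image = step['observation']['image']  # (240, 320, 3) uint8
            if step['is_last']:
                break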
- **Feature documentation**:

| Feature                       | Class        | Shape         | Dtype   | Description |
|-------------------------------|--------------|---------------|---------|-------------|
|                               | FeaturesDict |               |         |             |
| episode_metadata              | FeaturesDict |               |         |             |
| episode_metadata/file_path    | Text         |               | string  | Path to the original data file. |
| steps                         | Dataset      |               |         |             |
| steps/action                  | Tensor       | (7,)          | float32 | Robot action, consists of [3x EEF positional delta, 3x EEF orientation delta in euler angle, 1x close gripper]. |
| steps/discount                | Scalar       |               | float32 | Discount if provided, default to 1. |
| steps/is_dense                | Scalar       |               | bool    | True if state is a waypoint (010) or in dense mode (x111). |
| steps/is_first                | Tensor       |               | bool    |             |
| steps/is_last                 | Tensor       |               | bool    |             |
| steps/is_terminal             | Tensor       |               | bool    |             |
| steps/language_embedding      | Tensor       | (512,)        | float32 | Kona language embedding. See https://tfhub.dev/google/universal-sentence-encoder-large/5 |
| steps/language_instruction    | Text         |               | string  | Language instruction. |
| steps/observation             | FeaturesDict |               |         |             |
| steps/observation/image       | Image        | (240, 320, 3) | uint8   | Main camera RGB observation. |
| steps/observation/state       | Tensor       | (27,)         | float32 | Robot state, consists of [3x EEF position, 4x EEF orientation in quaternion, 3x EEF orientation in euler angle, 7x robot joint angles, 7x robot joint velocities, 3x gripper state]. |
| steps/observation/wrist_image | Image        | (240, 320, 3) | uint8   | Wrist camera RGB observation. |
| steps/reward                  | Scalar       |               | float32 | Reward if provided, 1 on final step for demos. |

- **Supervised keys** (See
  [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
  `None`

- **Figure**
  ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)):
  Not supported.
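The 27-dimensional state concatenates the six documented blocks (3 + 4 + 3 + 7 + 7 + 3 = 27), and the 7-dimensional action concatenates three (3 + 3 + 1 = 7). A minimal sketch of splitting them by those offsets; the helper names and dict keys are illustrative, and the block ordering is assumed to follow the order given in the descriptions above:

    import numpy as np

    def split_state(state: np.ndarray) -> dict:
        """Split the 27-dim state into its documented blocks (illustrative names)."""
        assert state.shape == (27,)
        return {
            'eef_position': state[0:3],        # 3x EEF position
            'eef_quaternion': state[3:7],      # 4x EEF orientation (quaternion)
            'eef_euler': state[7:10],          # 3x EEF orientation (euler angle)
            'joint_angles': state[10:17],      # 7x robot joint angles
            'joint_velocities': state[17:24],  # 7x robot joint velocities
            'gripper_state': state[24:27],     # 3x gripper state
        }

    def split_action(action: np.ndarray) -> dict:
        """Split the 7-dim action into its documented blocks (illustrative names)."""
        assert action.shape == (7,)
        return {
            'eef_delta_position': action[0:3],  # 3x EEF positional delta
            'eef_delta_euler': action[3:6],     # 3x EEF orientation delta (euler)
            'close_gripper': action[6],         # 1x close-gripper command
        }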
- **Citation**:

    @article{belkhale2023hydra,
      title={HYDRA: Hybrid Robot Actions for Imitation Learning},
      author={Belkhale, Suneel and Cui, Yuchen and Sadigh, Dorsa},
      journal={arxiv},
      year={2023}
    }