# nyu_franka_play_dataset_converted_externally_to_rlds
- **Description**:

Franka interacting with toy kitchens

- **Homepage**: <https://play-to-policy.github.io/>

- **Source code**: [`tfds.robotics.rtx.NyuFrankaPlayDatasetConvertedExternallyToRlds`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)

- **Versions**:
  - **`0.1.0`** (default): Initial release.

- **Download size**: `Unknown size`

- **Dataset size**: `5.18 GiB`

- **Auto-cached** ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)): No

- **Splits**:
| Split     | Examples |
|-----------|----------|
| `'train'` | 365      |
| `'val'`   | 91       |
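The dataset is stored in the RLDS episode format and can be read with the TFDS Python API. The snippet below is a minimal loading sketch, assuming the dataset has already been prepared in (or downloaded to) the local TFDS data directory; adjust the split as needed.

```python
import tensorflow_datasets as tfds

# Minimal loading sketch (assumes the dataset is available locally to TFDS).
ds = tfds.load(
    'nyu_franka_play_dataset_converted_externally_to_rlds',
    split='train',
)

# Each element is one episode; 'steps' is itself a nested tf.data.Dataset.
for episode in ds.take(1):
    print(episode['episode_metadata']['file_path'])
```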
- **Feature structure**:

    FeaturesDict({
        'episode_metadata': FeaturesDict({
            'file_path': Text(shape=(), dtype=string),
        }),
        'steps': Dataset({
            'action': Tensor(shape=(15,), dtype=float32, description=Robot action, consists of [7x joint velocities, 3x EE delta xyz, 3x EE delta rpy, 1x gripper position, 1x terminate episode].),
            'discount': Scalar(shape=(), dtype=float32, description=Discount if provided, default to 1.),
            'is_first': bool,
            'is_last': bool,
            'is_terminal': bool,
            'language_embedding': Tensor(shape=(512,), dtype=float32, description=Kona language embedding. See https://tfhub.dev/google/universal-sentence-encoder-large/5),
            'language_instruction': Text(shape=(), dtype=string),
            'observation': FeaturesDict({
                'depth': Tensor(shape=(128, 128, 1), dtype=int32, description=Right camera depth observation.),
                'depth_additional_view': Tensor(shape=(128, 128, 1), dtype=int32, description=Left camera depth observation.),
                'image': Image(shape=(128, 128, 3), dtype=uint8, description=Right camera RGB observation.),
                'image_additional_view': Image(shape=(128, 128, 3), dtype=uint8, description=Left camera RGB observation.),
                'state': Tensor(shape=(13,), dtype=float32, description=Robot state, consists of [7x robot joint angles, 3x EE xyz, 3x EE rpy].),
            }),
            'reward': Scalar(shape=(), dtype=float32, description=Reward if provided, 1 on final step for demos.),
        }),
    })
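As a rough illustration of how the structure above maps onto tensors at read time, the sketch below iterates over the inner `steps` dataset of one episode and pulls out the documented fields. It assumes `ds` was loaded as in the earlier snippet.

```python
for episode in ds.take(1):
    # 'steps' is a nested tf.data.Dataset of per-timestep dicts.
    for step in episode['steps'].take(1):
        image = step['observation']['image']        # (128, 128, 3) uint8, right camera RGB
        depth = step['observation']['depth']        # (128, 128, 1) int32, right camera depth
        state = step['observation']['state']        # (13,) float32 robot state
        action = step['action']                     # (15,) float32 robot action
        instruction = step['language_instruction']  # scalar string tensor
        print(instruction.numpy().decode('utf-8'))
```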
- **Feature documentation**:

| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
| | FeaturesDict | | | |
| episode_metadata | FeaturesDict | | | |
| episode_metadata/file_path | Text | | string | Path to the original data file. |
| steps | Dataset | | | |
| steps/action | Tensor | (15,) | float32 | Robot action, consists of [7x joint velocities, 3x EE delta xyz, 3x EE delta rpy, 1x gripper position, 1x terminate episode]. |
| steps/discount | Scalar | | float32 | Discount if provided, default to 1. |
| steps/is_first | Tensor | | bool | |
| steps/is_last | Tensor | | bool | |
| steps/is_terminal | Tensor | | bool | |
| steps/language_embedding | Tensor | (512,) | float32 | Kona language embedding. See https://tfhub.dev/google/universal-sentence-encoder-large/5 |
| steps/language_instruction | Text | | string | Language instruction. |
| steps/observation | FeaturesDict | | | |
| steps/observation/depth | Tensor | (128, 128, 1) | int32 | Right camera depth observation. |
| steps/observation/depth_additional_view | Tensor | (128, 128, 1) | int32 | Left camera depth observation. |
| steps/observation/image | Image | (128, 128, 3) | uint8 | Right camera RGB observation. |
| steps/observation/image_additional_view | Image | (128, 128, 3) | uint8 | Left camera RGB observation. |
| steps/observation/state | Tensor | (13,) | float32 | Robot state, consists of [7x robot joint angles, 3x EE xyz, 3x EE rpy]. |
| steps/reward | Scalar | | float32 | Reward if provided, 1 on final step for demos. |
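Given the layouts documented for `steps/action` (15-dim) and `steps/observation/state` (13-dim), the components can be recovered by slicing. The helpers below are hypothetical and follow only the ordering stated in the table above.

```python
import numpy as np

def split_action(action):
    """Slice the 15-dim action per the documented ordering (hypothetical helper)."""
    action = np.asarray(action)
    return {
        'joint_velocities': action[0:7],    # 7x joint velocities
        'ee_delta_xyz': action[7:10],       # 3x end-effector delta position
        'ee_delta_rpy': action[10:13],      # 3x end-effector delta orientation
        'gripper_position': action[13],     # 1x gripper position
        'terminate_episode': action[14],    # 1x terminate-episode flag
    }

def split_state(state):
    """Slice the 13-dim state per the documented ordering (hypothetical helper)."""
    state = np.asarray(state)
    return {
        'joint_angles': state[0:7],  # 7x robot joint angles
        'ee_xyz': state[7:10],       # 3x end-effector position
        'ee_rpy': state[10:13],      # 3x end-effector orientation
    }
```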
- **Supervised keys** (See [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)): `None`

- **Figure** ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)): Not supported.

- **Citation**:

    @article{cui2022play,
      title = {From Play to Policy: Conditional Behavior Generation from Uncurated Robot Data},
      author = {Cui, Zichen Jeff and Wang, Yibin and Shafiullah, Nur Muhammad Mahi and Pinto, Lerrel},
      journal = {arXiv preprint arXiv:2210.10047},
      year = {2022}
    }