# mimic_play
- **Description**:

Real dataset of 14 long-horizon manipulation tasks: a mix of human play data and
single robot arm data performing the same tasks.

- **Homepage**: <https://mimic-play.github.io/>

- **Source code**:
  [`tfds.robotics.rtx.MimicPlay`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)

- **Versions**:

  - **`0.1.0`** (default): Initial release.

- **Download size**: `Unknown size`

- **Dataset size**: `7.14 GiB`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  No

- **Splits**:
| Split     | Examples |
|-----------|----------|
| `'train'` | 378      |
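The single `'train'` split contains 378 episodes. Below is a minimal loading sketch, assuming the `mimic_play` builder and its data are available to TFDS in your environment; depending on your setup the data may first need to be prepared (e.g. with `tfds build mimic_play`) or located via the `data_dir` argument of `tfds.load`.

```python
import tensorflow_datasets as tfds

# Load the single 'train' split; each element is one full episode.
ds = tfds.load('mimic_play', split='train')

for episode in ds.take(1):
    # Episode-level metadata: the source file path of this demonstration.
    print(episode['episode_metadata']['file_path'].numpy().decode('utf-8'))
```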
- **Feature structure**:

FeaturesDict({
'episode_metadata': FeaturesDict({
'file_path': string,
}),
'steps': Dataset({
'action': Tensor(shape=(7,), dtype=float32),
'discount': Scalar(shape=(), dtype=float32),
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'language_embedding': Tensor(shape=(512,), dtype=float32),
'language_instruction': string,
'observation': FeaturesDict({
'image': FeaturesDict({
'front_image_1': Image(shape=(120, 120, 3), dtype=uint8),
'front_image_2': Image(shape=(120, 120, 3), dtype=uint8),
}),
'state': FeaturesDict({
'ee_pose': Tensor(shape=(7,), dtype=float32),
'gripper_position': float32,
'joint_positions': Tensor(shape=(7,), dtype=float32),
'joint_velocities': Tensor(shape=(7,), dtype=float32),
}),
'wrist_image': FeaturesDict({
'wrist_image': Image(shape=(120, 120, 3), dtype=uint8),
}),
}),
'reward': Scalar(shape=(), dtype=float32),
}),
})
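Within each episode, `steps` is itself a nested `tf.data.Dataset`, so it is iterated (or transformed) separately from the episode-level fields. A short sketch of reading a few steps follows; it simply indexes the nested dictionaries shown in the structure above.

```python
import tensorflow_datasets as tfds

ds = tfds.load('mimic_play', split='train')

for episode in ds.take(1):
    for step in episode['steps'].take(3):
        # Per-step robot action and proprioceptive state.
        action = step['action']                                      # (7,) float32
        ee_pose = step['observation']['state']['ee_pose']            # (7,) float32
        gripper = step['observation']['state']['gripper_position']   # scalar float32

        # Camera observations: two front views plus one wrist view.
        front = step['observation']['image']['front_image_1']        # (120, 120, 3) uint8
        wrist = step['observation']['wrist_image']['wrist_image']    # (120, 120, 3) uint8

        # Natural-language task annotation and its precomputed embedding.
        instruction = step['language_instruction'].numpy().decode('utf-8')
        embedding = step['language_embedding']                       # (512,) float32

        print(instruction, action.shape, front.shape, wrist.shape)
```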
- **Feature documentation**:

| Feature                                    | Class        | Shape         | Dtype   | Description |
|--------------------------------------------|--------------|---------------|---------|-------------|
|                                            | FeaturesDict |               |         |             |
| episode_metadata                           | FeaturesDict |               |         |             |
| episode_metadata/file_path                 | Tensor       |               | string  |             |
| steps                                      | Dataset      |               |         |             |
| steps/action                               | Tensor       | (7,)          | float32 |             |
| steps/discount                             | Scalar       |               | float32 |             |
| steps/is_first                             | Tensor       |               | bool    |             |
| steps/is_last                              | Tensor       |               | bool    |             |
| steps/is_terminal                          | Tensor       |               | bool    |             |
| steps/language_embedding                   | Tensor       | (512,)        | float32 |             |
| steps/language_instruction                 | Tensor       |               | string  |             |
| steps/observation                          | FeaturesDict |               |         |             |
| steps/observation/image                    | FeaturesDict |               |         |             |
| steps/observation/image/front_image_1      | Image        | (120, 120, 3) | uint8   |             |
| steps/observation/image/front_image_2      | Image        | (120, 120, 3) | uint8   |             |
| steps/observation/state                    | FeaturesDict |               |         |             |
| steps/observation/state/ee_pose            | Tensor       | (7,)          | float32 |             |
| steps/observation/state/gripper_position   | Tensor       |               | float32 |             |
| steps/observation/state/joint_positions    | Tensor       | (7,)          | float32 |             |
| steps/observation/state/joint_velocities   | Tensor       | (7,)          | float32 |             |
| steps/observation/wrist_image              | FeaturesDict |               |         |             |
| steps/observation/wrist_image/wrist_image  | Image        | (120, 120, 3) | uint8   |             |
| steps/reward                               | Scalar       |               | float32 |             |

- **Supervised keys** (See
  [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
  `None`

- **Figure**
  ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)):
  Not supported.
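For training, the per-episode structure is often flattened into a single stream of steps. The sketch below shows one way to do that with `tf.data`; the selected fields, shuffle buffer, and batch size are illustrative choices, not anything prescribed by the dataset.

```python
import tensorflow as tf
import tensorflow_datasets as tfds

ds = tfds.load('mimic_play', split='train')

def episode_to_steps(episode):
    # Keep only the fields a simple imitation-learning pipeline might need;
    # this selection is illustrative, not prescribed by the dataset.
    return episode['steps'].map(
        lambda step: {
            'front_image_1': step['observation']['image']['front_image_1'],
            'wrist_image': step['observation']['wrist_image']['wrist_image'],
            'ee_pose': step['observation']['state']['ee_pose'],
            'language_embedding': step['language_embedding'],
            'action': step['action'],
        },
        num_parallel_calls=tf.data.AUTOTUNE,
    )

# Flatten episodes into a single stream of steps, then shuffle and batch.
steps = ds.flat_map(episode_to_steps)
batches = steps.shuffle(1_000).batch(64).prefetch(tf.data.AUTOTUNE)

for batch in batches.take(1):
    print(batch['action'].shape)  # (64, 7)
```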
- **Citation**:

@article{wang2023mimicplay,
  title={Mimicplay: Long-horizon imitation learning by watching human play},
  author={Wang, Chen and Fan, Linxi and Sun, Jiankai and Zhang, Ruohan and Fei-Fei, Li and Xu, Danfei and Zhu, Yuke and Anandkumar, Anima},
  journal={arXiv preprint arXiv:2302.12422},
  year={2023}
}