# berkeley_fanuc_manipulation
- **Description**:

Fanuc robot performing various manipulation tasks

- **Homepage**: <https://sites.google.com/berkeley.edu/fanuc-manipulation>

- **Source code**: [`tfds.robotics.rtx.BerkeleyFanucManipulation`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)

- **Versions**:
    - **`0.1.0`** (default): Initial release.

- **Download size**: `Unknown size`

- **Dataset size**: `8.85 GiB`

- **Auto-cached** ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)): No
- **Splits**:

| Split     | Examples |
|-----------|----------|
| `'train'` | 415      |
- **Feature structure**:

    FeaturesDict({
        'episode_metadata': FeaturesDict({
            'file_path': Text(shape=(), dtype=string),
        }),
        'steps': Dataset({
            'action': Tensor(shape=(6,), dtype=float32, description=Robot action, consists of [dx, dy, dz] and [droll, dpitch, dyaw]),
            'discount': Scalar(shape=(), dtype=float32, description=Discount if provided, default to 1.),
            'is_first': bool,
            'is_last': bool,
            'is_terminal': bool,
            'language_embedding': Tensor(shape=(512,), dtype=float32, description=Kona language embedding. See https://tfhub.dev/google/universal-sentence-encoder-large/5),
            'language_instruction': Text(shape=(), dtype=string),
            'observation': FeaturesDict({
                'end_effector_state': Tensor(shape=(7,), dtype=float32, description=Robot gripper end effector state, consists of [x, y, z] and 4x quaternion),
                'image': Image(shape=(224, 224, 3), dtype=uint8, description=Main camera RGB observation.),
                'state': Tensor(shape=(13,), dtype=float32, description=Robot joints state, consists of [6x robot joint angles, 1x gripper open status, 6x robot joint velocities].),
                'wrist_image': Image(shape=(224, 224, 3), dtype=uint8, description=Wrist camera RGB observation.),
            }),
            'reward': Scalar(shape=(), dtype=float32, description=Reward if provided, 1 on final step for demos.),
        }),
    })
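The nested structure above can be inspected with a standard TFDS/RLDS loop. The sketch below is illustrative, not part of the dataset card: it assumes the dataset has already been downloaded or generated locally, and simply prints a few fields of the first step of the first episode (the `steps` feature is itself a nested `tf.data.Dataset`).

    import tensorflow_datasets as tfds

    ds = tfds.load('berkeley_fanuc_manipulation', split='train')

    for episode in ds.take(1):
        # 'steps' is a nested tf.data.Dataset of time steps within the episode.
        for step in episode['steps'].take(1):
            print(step['language_instruction'].numpy().decode('utf-8'))
            print(step['action'].numpy())                  # (6,) [dx, dy, dz, droll, dpitch, dyaw]
            print(step['observation']['image'].shape)      # (224, 224, 3) main camera RGB
            print(step['observation']['state'].numpy())    # (13,) joint angles, gripper, joint velocities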
- **Feature documentation**:

| Feature | Class | Shape | Dtype | Description |
|---------|-------|-------|-------|-------------|
| | FeaturesDict | | | |
| episode_metadata | FeaturesDict | | | |
| episode_metadata/file_path | Text | | string | Path to the original data file. |
| steps | Dataset | | | |
| steps/action | Tensor | (6,) | float32 | Robot action, consists of [dx, dy, dz] and [droll, dpitch, dyaw] |
| steps/discount | Scalar | | float32 | Discount if provided, default to 1. |
| steps/is_first | Tensor | | bool | |
| steps/is_last | Tensor | | bool | |
| steps/is_terminal | Tensor | | bool | |
| steps/language_embedding | Tensor | (512,) | float32 | Kona language embedding. See https://tfhub.dev/google/universal-sentence-encoder-large/5 |
| steps/language_instruction | Text | | string | Language Instruction. |
| steps/observation | FeaturesDict | | | |
| steps/observation/end_effector_state | Tensor | (7,) | float32 | Robot gripper end effector state, consists of [x, y, z] and 4x quaternion |
| steps/observation/image | Image | (224, 224, 3) | uint8 | Main camera RGB observation. |
| steps/observation/state | Tensor | (13,) | float32 | Robot joints state, consists of [6x robot joint angles, 1x gripper open status, 6x robot joint velocities]. |
| steps/observation/wrist_image | Image | (224, 224, 3) | uint8 | Wrist camera RGB observation. |
| steps/reward | Scalar | | float32 | Reward if provided, 1 on final step for demos. |

- **Supervised keys** (See [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)): `None`

- **Figure** ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)): Not supported.
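For training pipelines, the step-level features documented above are typically flattened out of the episode structure into a single stream of steps. The following is a minimal sketch: the field names come from the table, while the `shuffle`/`batch` parameters are arbitrary placeholders rather than values from the dataset card.

    import tensorflow_datasets as tfds

    ds = tfds.load('berkeley_fanuc_manipulation', split='train')

    # Map each episode's nested 'steps' dataset to the per-step fields of interest,
    # then flatten all episodes into one stream of steps.
    def episode_to_steps(episode):
        return episode['steps'].map(
            lambda step: {
                'image': step['observation']['image'],             # (224, 224, 3) uint8
                'state': step['observation']['state'],             # (13,) float32
                'action': step['action'],                          # (6,) float32 delta pose
                'language_embedding': step['language_embedding'],  # (512,) float32
            })

    # Placeholder shuffle/batch sizes; tune for the actual training setup.
    step_ds = ds.flat_map(episode_to_steps).shuffle(1000).batch(32)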
- **Citation**:

    @article{fanuc_manipulation2023,
      title={Fanuc Manipulation: A Dataset for Learning-based Manipulation with FANUC Mate 200iD Robot},
      author={Zhu, Xinghao and Tian, Ran and Xu, Chenfeng and Ding, Mingyu and Zhan, Wei and Tomizuka, Masayoshi},
      year={2023},
    }