# asu_table_top_converted_externally_to_rlds
- **Description**:

UR5 performing table-top pick/place/rotate tasks

- **Homepage**:
  <https://link.springer.com/article/10.1007/s10514-023-10129-1>

- **Source code**:
  [`tfds.robotics.rtx.AsuTableTopConvertedExternallyToRlds`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)

- **Versions**:

  - **`0.1.0`** (default): Initial release.

- **Download size**: `Unknown size`

- **Dataset size**: `737.60 MiB`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  No
- **Splits**:

| Split | Examples |
|-----------|----------|
| `'train'` | 110 |
- **Feature structure**:

    FeaturesDict({
        'episode_metadata': FeaturesDict({
            'file_path': Text(shape=(), dtype=string),
        }),
        'steps': Dataset({
            'action': Tensor(shape=(7,), dtype=float32, description=Robot action, consists of [7x joint velocities, 2x gripper velocities, 1x terminate episode].),
            'action_delta': Tensor(shape=(7,), dtype=float32, description=Robot delta action, consists of [7x joint velocities, 2x gripper velocities, 1x terminate episode].),
            'action_inst': Text(shape=(), dtype=string),
            'discount': Scalar(shape=(), dtype=float32, description=Discount if provided, default to 1.),
            'goal_object': Text(shape=(), dtype=string),
            'ground_truth_states': FeaturesDict({
                'EE': Tensor(shape=(6,), dtype=float32, description=xyzrpy),
                'bottle': Tensor(shape=(6,), dtype=float32, description=xyzrpy),
                'bread': Tensor(shape=(6,), dtype=float32, description=xyzrpy),
                'coke': Tensor(shape=(6,), dtype=float32, description=xyzrpy),
                'cube': Tensor(shape=(6,), dtype=float32, description=xyzrpy),
                'milk': Tensor(shape=(6,), dtype=float32, description=xyzrpy),
                'pepsi': Tensor(shape=(6,), dtype=float32, description=xyzrpy),
            }),
            'is_first': bool,
            'is_last': bool,
            'is_terminal': bool,
            'language_embedding': Tensor(shape=(512,), dtype=float32, description=Kona language embedding. See https://tfhub.dev/google/universal-sentence-encoder-large/5),
            'language_instruction': Text(shape=(), dtype=string),
            'observation': FeaturesDict({
                'image': Image(shape=(224, 224, 3), dtype=uint8, description=Main camera RGB observation.),
                'state': Tensor(shape=(7,), dtype=float32, description=Robot state, consists of [6x robot joint angles, 1x gripper position].),
                'state_vel': Tensor(shape=(7,), dtype=float32, description=Robot joint velocity, consists of [6x robot joint angles, 1x gripper position].),
            }),
            'reward': Scalar(shape=(), dtype=float32, description=Reward if provided, 1 on final step for demos.),
        }),
    })
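In the RLDS layout above, each dataset element is one episode and its `steps` field is itself a nested `tf.data.Dataset` of per-timestep features. A minimal loading sketch, assuming the dataset is available under this name in your `tensorflow_datasets` installation:

    import tensorflow_datasets as tfds

    # Load the 'train' split; the splits table lists 110 episodes.
    ds = tfds.load("asu_table_top_converted_externally_to_rlds", split="train")

    for episode in ds.take(1):
        # 'steps' is a nested tf.data.Dataset of per-timestep features.
        for step in episode["steps"]:
            action = step["action"]                     # (7,) float32
            image = step["observation"]["image"]        # (224, 224, 3) uint8
            instruction = step["language_instruction"]  # scalar string tensor
            print(instruction.numpy().decode("utf-8"), action.numpy())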
- **Feature documentation**:

| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
| | FeaturesDict | | | |
| episode_metadata | FeaturesDict | | | |
| episode_metadata/file_path | Text | | string | Path to the original data file. |
| steps | Dataset | | | |
| steps/action | Tensor | (7,) | float32 | Robot action, consists of [7x joint velocities, 2x gripper velocities, 1x terminate episode]. |
| steps/action_delta | Tensor | (7,) | float32 | Robot delta action, consists of [7x joint velocities, 2x gripper velocities, 1x terminate episode]. |
| steps/action_inst | Text | | string | Action to be performed. |
| steps/discount | Scalar | | float32 | Discount if provided, default to 1. |
| steps/goal_object | Text | | string | Object to be manipulated with. |
| steps/ground_truth_states | FeaturesDict | | | |
| steps/ground_truth_states/EE | Tensor | (6,) | float32 | xyzrpy |
| steps/ground_truth_states/bottle | Tensor | (6,) | float32 | xyzrpy |
| steps/ground_truth_states/bread | Tensor | (6,) | float32 | xyzrpy |
| steps/ground_truth_states/coke | Tensor | (6,) | float32 | xyzrpy |
| steps/ground_truth_states/cube | Tensor | (6,) | float32 | xyzrpy |
| steps/ground_truth_states/milk | Tensor | (6,) | float32 | xyzrpy |
| steps/ground_truth_states/pepsi | Tensor | (6,) | float32 | xyzrpy |
| steps/is_first | Tensor | | bool | |
| steps/is_last | Tensor | | bool | |
| steps/is_terminal | Tensor | | bool | |
| steps/language_embedding | Tensor | (512,) | float32 | Kona language embedding. See https://tfhub.dev/google/universal-sentence-encoder-large/5 |
| steps/language_instruction | Text | | string | Language instruction. |
| steps/observation | FeaturesDict | | | |
| steps/observation/image | Image | (224, 224, 3) | uint8 | Main camera RGB observation. |
| steps/observation/state | Tensor | (7,) | float32 | Robot state, consists of [6x robot joint angles, 1x gripper position]. |
| steps/observation/state_vel | Tensor | (7,) | float32 | Robot joint velocity, consists of [6x robot joint angles, 1x gripper position]. |
| steps/reward | Scalar | | float32 | Reward if provided, 1 on final step for demos. |

- **Supervised keys** (See
  [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
  `None`

- **Figure**
  ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)):
  Not supported.
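Since `steps/language_embedding` is documented as coming from the Universal Sentence Encoder module linked above, one way to sanity-check the stored vectors is to re-embed the instruction and compare. A sketch, assuming the stored "Kona" embeddings match the public `universal-sentence-encoder-large/5` module (an exact match is an assumption, not something the catalog guarantees):

    import tensorflow as tf
    import tensorflow_datasets as tfds
    import tensorflow_hub as hub

    # The encoder the dataset documentation points to.
    encoder = hub.load(
        "https://tfhub.dev/google/universal-sentence-encoder-large/5")

    ds = tfds.load("asu_table_top_converted_externally_to_rlds", split="train")
    episode = next(iter(ds))
    step = next(iter(episode["steps"]))

    text = step["language_instruction"].numpy().decode("utf-8")
    fresh = encoder([text])[0]           # (512,) float32, freshly computed
    stored = step["language_embedding"]  # (512,) float32, stored in the data

    # Cosine similarity; values near 1.0 mean the stored vector matches.
    cosine = tf.reduce_sum(
        tf.nn.l2_normalize(fresh, axis=-1) * tf.nn.l2_normalize(stored, axis=-1))
    print(cosine.numpy())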
- **Citation**:

    @inproceedings{zhou2023modularity,
      title={Modularity through Attention: Efficient Training and Transfer of Language-Conditioned Policies for Robot Manipulation},
      author={Zhou, Yifan and Sonawani, Shubham and Phielipp, Mariano and Stepputtis, Simon and Amor, Heni},
      booktitle={Conference on Robot Learning},
      pages={1684--1695},
      year={2023},
      organization={PMLR}
    }
    @article{zhou2023learning,
      title={Learning modular language-conditioned robot policies through attention},
      author={Zhou, Yifan and Sonawani, Shubham and Phielipp, Mariano and Ben Amor, Heni and Stepputtis, Simon},
      journal={Autonomous Robots},
      pages={1--21},
      year={2023},
      publisher={Springer}
    }