# kuka

- **Description**:

Bin picking and rearrangement tasks

- **Homepage**:
  <https://arxiv.org/abs/1806.10293>

- **Source code**:
  [`tfds.robotics.rtx.Kuka`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)

- **Versions**:

  - **`0.1.0`** (default): Initial release.

- **Download size**: `Unknown size`

- **Dataset size**: `779.81 GiB`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  No

- **Splits**:
| Split     | Examples |
|-----------|----------|
| `'train'` | 580,392  |
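The dataset follows the standard TFDS API. A minimal loading sketch, assuming the catalog name `kuka` shown above and that the roughly 780 GiB of prepared data is available locally or on GCS:

    import tensorflow_datasets as tfds

    # Load the single 'train' split; pass data_dir= if the prepared data
    # lives in a non-default location.
    ds = tfds.load('kuka', split='train')
    print(ds.element_spec)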
- **Feature structure**:

    FeaturesDict({
        'steps': Dataset({
            'action': FeaturesDict({
                'base_displacement_vector': Tensor(shape=(2,), dtype=float32),
                'base_displacement_vertical_rotation': Tensor(shape=(1,), dtype=float32),
                'gripper_closedness_action': Tensor(shape=(1,), dtype=float32),
                'rotation_delta': Tensor(shape=(3,), dtype=float32),
                'terminate_episode': Tensor(shape=(3,), dtype=int32),
                'world_vector': Tensor(shape=(3,), dtype=float32),
            }),
            'is_first': bool,
            'is_last': bool,
            'is_terminal': bool,
            'observation': FeaturesDict({
                'clip_function_input/base_pose_tool_reached': Tensor(shape=(7,), dtype=float32),
                'clip_function_input/workspace_bounds': Tensor(shape=(3, 3), dtype=float32),
                'gripper_closed': Tensor(shape=(1,), dtype=float32),
                'height_to_bottom': Tensor(shape=(1,), dtype=float32),
                'image': Image(shape=(512, 640, 3), dtype=uint8),
                'natural_language_embedding': Tensor(shape=(512,), dtype=float32),
                'natural_language_instruction': string,
                'task_id': Tensor(shape=(1,), dtype=float32),
            }),
            'reward': Scalar(shape=(), dtype=float32),
        }),
        'success': bool,
    })
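Each top-level example is one episode: `success` is an episode-level flag, and `steps` is a nested `tf.data.Dataset` carrying the per-step `action`, `observation`, and `reward` fields listed above. A rough iteration sketch, reusing the hypothetical `ds` from the loading example:

    for episode in ds.take(1):
        print(episode['success'].numpy())
        for step in episode['steps']:
            # Per-step fields follow the structure above.
            world_vector = step['action']['world_vector']          # (3,) float32
            image = step['observation']['image']                   # (512, 640, 3) uint8
            instruction = step['observation']['natural_language_instruction']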
- **Feature documentation**:

| Feature | Class | Shape | Dtype | Description |
|--------------------------------------------------------------|--------------|---------------|---------|-------------|
| | FeaturesDict | | | |
| steps | Dataset | | | |
| steps/action | FeaturesDict | | | |
| steps/action/base_displacement_vector | Tensor | (2,) | float32 | |
| steps/action/base_displacement_vertical_rotation | Tensor | (1,) | float32 | |
| steps/action/gripper_closedness_action | Tensor | (1,) | float32 | |
| steps/action/rotation_delta | Tensor | (3,) | float32 | |
| steps/action/terminate_episode | Tensor | (3,) | int32 | |
| steps/action/world_vector | Tensor | (3,) | float32 | |
| steps/is_first | Tensor | | bool | |
| steps/is_last | Tensor | | bool | |
| steps/is_terminal | Tensor | | bool | |
| steps/observation | FeaturesDict | | | |
| steps/observation/clip_function_input/base_pose_tool_reached | Tensor | (7,) | float32 | |
| steps/observation/clip_function_input/workspace_bounds | Tensor | (3, 3) | float32 | |
| steps/observation/gripper_closed | Tensor | (1,) | float32 | |
| steps/observation/height_to_bottom | Tensor | (1,) | float32 | |
| steps/observation/image | Image | (512, 640, 3) | uint8 | |
| steps/observation/natural_language_embedding | Tensor | (512,) | float32 | |
| steps/observation/natural_language_instruction | Tensor | | string | |
| steps/observation/task_id | Tensor | (1,) | float32 | |
| steps/reward | Scalar | | float32 | |
| success | Tensor | | bool | |

- **Supervised keys** (See
  [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
  `None`

- **Figure**
  ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)):
  Not supported.
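Because `success` is stored once per episode, it can be used to keep only successful trajectories before flattening episodes into a stream of steps. A sketch under the same assumptions as the examples above:

    # Keep successful episodes, then flatten their nested 'steps' datasets.
    successful_steps = (
        ds.filter(lambda episode: episode['success'])
          .flat_map(lambda episode: episode['steps'])
    )

    for step in successful_steps.take(2):
        print(step['reward'].numpy(), step['is_last'].numpy())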
- **Citation**:

    @article{kalashnikov2018qt,
      title={Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation},
      author={Kalashnikov, Dmitry and Irpan, Alex and Pastor, Peter and Ibarz, Julian and Herzog, Alexander and Jang, Eric and Quillen, Deirdre and Holly, Ethan and Kalakrishnan, Mrinal and Vanhoucke, Vincent and others},
      journal={arXiv preprint arXiv:1806.10293},
      year={2018}
    }