# toto
- **Description**:

  Franka scooping and pouring tasks

- **Homepage**: <https://toto-benchmark.org/>

- **Source code**:
  [`tfds.robotics.rtx.Toto`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)

- **Versions**:

  - **`0.1.0`** (default): Initial release.

- **Download size**: `Unknown size`

- **Dataset size**: `Unknown size`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  Unknown

- **Feature structure**:
```
FeaturesDict({
    'steps': Dataset({
        'action': FeaturesDict({
            'open_gripper': bool,
            'rotation_delta': Tensor(shape=(3,), dtype=float32),
            'terminate_episode': float32,
            'world_vector': Tensor(shape=(3,), dtype=float32),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': FeaturesDict({
            'image': Image(shape=(480, 640, 3), dtype=uint8),
            'natural_language_embedding': Tensor(shape=(512,), dtype=float32),
            'natural_language_instruction': string,
            'state': Tensor(shape=(7,), dtype=float32, description=numpy array of shape (7,). Contains the robot joint states (as absolute joint angles) at each timestep),
        }),
        'reward': Scalar(shape=(), dtype=float32),
    }),
})
```
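Once loaded (e.g. via `tfds.load('toto')`), each element of `steps` is a nested dictionary matching the structure above. As a minimal sketch, the snippet below builds one synthetic step with placeholder values (the instruction string and zeros are illustrative, not real data) to show the keys, shapes, and dtypes a consumer should expect:

```python
import numpy as np

# One synthetic `steps` element shaped like the toto feature spec.
# All values are placeholders; real steps come from tfds.load('toto')
# but carry the same keys, shapes, and dtypes.
step = {
    'action': {
        'open_gripper': np.bool_(True),
        'rotation_delta': np.zeros((3,), dtype=np.float32),
        'terminate_episode': np.float32(0.0),
        'world_vector': np.zeros((3,), dtype=np.float32),
    },
    'is_first': np.bool_(True),
    'is_last': np.bool_(False),
    'is_terminal': np.bool_(False),
    'observation': {
        'image': np.zeros((480, 640, 3), dtype=np.uint8),
        'natural_language_embedding': np.zeros((512,), dtype=np.float32),
        'natural_language_instruction': b'placeholder instruction',
        'state': np.zeros((7,), dtype=np.float32),  # absolute joint angles
    },
    'reward': np.float32(0.0),
}

# Sanity-check against the shapes declared in the feature structure.
assert step['observation']['image'].shape == (480, 640, 3)
assert step['observation']['state'].shape == (7,)
assert step['action']['world_vector'].shape == (3,)
```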
- **Feature documentation**:

| Feature                                        | Class        | Shape         | Dtype   | Description |
|------------------------------------------------|--------------|---------------|---------|-------------|
|                                                | FeaturesDict |               |         |             |
| steps                                          | Dataset      |               |         |             |
| steps/action                                   | FeaturesDict |               |         |             |
| steps/action/open_gripper                      | Tensor       |               | bool    |             |
| steps/action/rotation_delta                    | Tensor       | (3,)          | float32 |             |
| steps/action/terminate_episode                 | Tensor       |               | float32 |             |
| steps/action/world_vector                      | Tensor       | (3,)          | float32 |             |
| steps/is_first                                 | Tensor       |               | bool    |             |
| steps/is_last                                  | Tensor       |               | bool    |             |
| steps/is_terminal                              | Tensor       |               | bool    |             |
| steps/observation                              | FeaturesDict |               |         |             |
| steps/observation/image                        | Image        | (480, 640, 3) | uint8   |             |
| steps/observation/natural_language_embedding   | Tensor       | (512,)        | float32 |             |
| steps/observation/natural_language_instruction | Tensor       |               | string  |             |
| steps/observation/state                        | Tensor       | (7,)          | float32 | numpy array of shape (7,). Contains the robot joint states (as absolute joint angles) at each timestep |
| steps/reward                                   | Scalar       |               | float32 |             |

- **Supervised keys** (See
  [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
  `None`

- **Figure**
  ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)):
  Not supported.

- **Examples**
  ([tfds.as_dataframe](https://www.tensorflow.org/datasets/api_docs/python/tfds/as_dataframe)):
  Missing.
- **Citation**:

```
@inproceedings{zhou2023train,
  author={Zhou, Gaoyue and Dean, Victoria and Srirama, Mohan Kumar and Rajeswaran, Aravind and Pari, Jyothish and Hatch, Kyle and Jain, Aryan and Yu, Tianhe and Abbeel, Pieter and Pinto, Lerrel and Finn, Chelsea and Gupta, Abhinav},
  booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
  title={Train Offline, Test Online: A Real Robot Learning Benchmark},
  year={2023},
}
```
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-09-03 UTC.