# uiuc_d3field
- **Description**: Organizing office desk, utensils etc.
- **Homepage**: <https://robopil.github.io/d3fields/>
- **Source code**: [`tfds.robotics.rtx.UiucD3field`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)
- **Versions**: `0.1.0` (default): Initial release.
- **Download size**: `Unknown size`
- **Dataset size**: `15.82 GiB`
- **Auto-cached** ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)): No
- **Splits**:

| Split     | Examples |
|-----------|----------|
| `'train'` | 192      |
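The split sizes and feature spec can also be queried programmatically. The snippet below is a minimal sketch, assuming the `uiuc_d3field` builder is available in your `tensorflow_datasets` installation and that its metadata and data can be resolved in your environment:

```python
import tensorflow_datasets as tfds

# A minimal sketch for inspecting dataset metadata. How the 15.82 GiB of
# data is obtained (download or a pre-populated data_dir) is not covered
# here and may require extra setup.
builder = tfds.builder('uiuc_d3field')
info = builder.info

print(info.splits['train'].num_examples)  # 192 episodes, per the table above
print(info.features)                      # the FeaturesDict shown below
```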
- **Feature structure**:

```
FeaturesDict({
    'episode_metadata': FeaturesDict({
        'file_path': Text(shape=(), dtype=string),
    }),
    'steps': Dataset({
        'action': Tensor(shape=(3,), dtype=float32, description=Robot displacement from last frame),
        'discount': Scalar(shape=(), dtype=float32, description=Discount if provided, default to 1.),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'language_embedding': Tensor(shape=(512,), dtype=float32, description=Kona language embedding. See https://tfhub.dev/google/universal-sentence-encoder-large/5),
        'language_instruction': Text(shape=(), dtype=string),
        'observation': FeaturesDict({
            'depth_1': Image(shape=(360, 640, 1), dtype=uint16, description=camera 1 depth observation.),
            'depth_2': Image(shape=(360, 640, 1), dtype=uint16, description=camera 2 depth observation.),
            'depth_3': Image(shape=(360, 640, 1), dtype=uint16, description=camera 3 depth observation.),
            'depth_4': Image(shape=(360, 640, 1), dtype=uint16, description=camera 4 depth observation.),
            'image_1': Image(shape=(360, 640, 3), dtype=uint8, description=camera 1 RGB observation.),
            'image_2': Image(shape=(360, 640, 3), dtype=uint8, description=camera 2 RGB observation.),
            'image_3': Image(shape=(360, 640, 3), dtype=uint8, description=camera 3 RGB observation.),
            'image_4': Image(shape=(360, 640, 3), dtype=uint8, description=camera 4 RGB observation.),
            'state': Tensor(shape=(4, 4), dtype=float32, description=Robot end-effector state),
        }),
        'reward': Scalar(shape=(), dtype=float32, description=Reward if provided, 1 on final step for demos.),
    }),
})
```
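For concreteness, here is a hedged sketch of how the episode/step structure above can be iterated once the data is available to `tfds.load` (for example via an appropriate `data_dir`); the field accesses mirror the spec exactly:

```python
import tensorflow_datasets as tfds

# A minimal sketch: load the 'train' split and walk through one episode.
ds = tfds.load('uiuc_d3field', split='train')

for episode in ds.take(1):
    print(episode['episode_metadata']['file_path'].numpy())
    # 'steps' is a nested tf.data.Dataset of per-timestep features.
    for step in episode['steps']:
        action = step['action']                 # (3,) float32 displacement from last frame
        state = step['observation']['state']    # (4, 4) float32 end-effector state
        rgb = step['observation']['image_1']    # (360, 640, 3) uint8
        depth = step['observation']['depth_1']  # (360, 640, 1) uint16
        print(step['language_instruction'].numpy(), rgb.shape, depth.shape)
        break
```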
- **Feature documentation**:

| Feature | Class | Shape | Dtype | Description |
|----------------------------|--------------|---------------|---------|---------------------------------------------------------------------------------------------|
| | FeaturesDict | | | |
| episode_metadata | FeaturesDict | | | |
| episode_metadata/file_path | Text | | string | Path to the original data file. |
| steps | Dataset | | | |
| steps/action | Tensor | (3,) | float32 | Robot displacement from last frame |
| steps/discount | Scalar | | float32 | Discount if provided, default to 1. |
| steps/is_first | Tensor | | bool | |
| steps/is_last | Tensor | | bool | |
| steps/is_terminal | Tensor | | bool | |
| steps/language_embedding | Tensor | (512,) | float32 | Kona language embedding. See https://tfhub.dev/google/universal-sentence-encoder-large/5 |
| steps/language_instruction | Text | | string | Language Instruction. |
| steps/observation | FeaturesDict | | | |
| steps/observation/depth_1 | Image | (360, 640, 1) | uint16 | camera 1 depth observation. |
| steps/observation/depth_2 | Image | (360, 640, 1) | uint16 | camera 2 depth observation. |
| steps/observation/depth_3 | Image | (360, 640, 1) | uint16 | camera 3 depth observation. |
| steps/observation/depth_4 | Image | (360, 640, 1) | uint16 | camera 4 depth observation. |
| steps/observation/image_1 | Image | (360, 640, 3) | uint8 | camera 1 RGB observation. |
| steps/observation/image_2 | Image | (360, 640, 3) | uint8 | camera 2 RGB observation. |
| steps/observation/image_3 | Image | (360, 640, 3) | uint8 | camera 3 RGB observation. |
| steps/observation/image_4 | Image | (360, 640, 3) | uint8 | camera 4 RGB observation. |
| steps/observation/state | Tensor | (4, 4) | float32 | Robot end-effector state |
| steps/reward | Scalar | | float32 | Reward if provided, 1 on final step for demos. |

- **Supervised keys** (See [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)): `None`
- **Figure** ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)): Not supported.
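The `steps/language_embedding` vectors are documented as Universal Sentence Encoder embeddings (see the link in the table above). As a hedged sketch, assuming `tensorflow_hub` is installed and the model can be fetched, a query instruction can be embedded with the same model and compared against the stored 512-d vectors:

```python
import numpy as np
import tensorflow_hub as hub

# A minimal sketch: embed an illustrative query with the Universal Sentence
# Encoder referenced by `steps/language_embedding`. The query text below is
# hypothetical, not taken from the dataset.
encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder-large/5")
query = encoder(["organize the utensils on the desk"]).numpy()[0]  # shape (512,)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compare `query` against a step['language_embedding'] vector from the dataset
# to find episodes whose instruction is semantically close to the query.
```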
- **Citation**:

```bibtex
@article{wang2023d3field,
  title={D^3Field: Dynamic 3D Descriptor Fields for Generalizable Robotic Manipulation},
  author={Wang, Yixuan and Li, Zhuoran and Zhang, Mingtong and Driggs-Campbell, Katherine and Wu, Jiajun and Fei-Fei, Li and Li, Yunzhu},
  journal={arXiv preprint arXiv:},
  year={2023},
}
```