# io_ai_tech
- **Description**:

- **Homepage**:
  <https://github.com/ioai-tech/rlds_dataset_builder>

- **Source code**:
  [`tfds.robotics.rtx.IoAiTech`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)

- **Versions**:

  - **`0.1.0`** (default): Initial release.

- **Download size**: `Unknown size`

- **Dataset size**: `89.63 GiB`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  No

- **Splits**:

| Split     | Examples |
|-----------|----------|
| `'train'` | 3,847    |
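Given the single `'train'` split above, a minimal loading sketch (assuming the dataset is registered under this page's name in your `tensorflow_datasets` installation):

```python
import tensorflow_datasets as tfds

# Load the 'train' split by its catalog name; `with_info=True` also
# returns the DatasetInfo object describing splits and features.
ds, info = tfds.load('io_ai_tech', split='train', with_info=True)

print(info.splits['train'].num_examples)  # 3847 episodes
```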
- **Feature structure**:

```
FeaturesDict({
    'episode_metadata': FeaturesDict({
        'file_path': string,
    }),
    'steps': Dataset({
        'action': Tensor(shape=(7,), dtype=float32),
        'discount': Scalar(shape=(), dtype=float32),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'language_embedding': Tensor(shape=(512,), dtype=float32),
        'language_instruction': string,
        'observation': FeaturesDict({
            'depth': Image(shape=(720, 1280, 1), dtype=uint8),
            'fisheye_camera_extrinsic': Tensor(shape=(4, 4), dtype=float32),
            'fisheye_camera_intrinsic': Tensor(shape=(3, 3), dtype=float32),
            'image': Image(shape=(360, 640, 3), dtype=uint8),
            'image_fisheye': Image(shape=(640, 800, 3), dtype=uint8),
            'image_left_side': Image(shape=(360, 640, 3), dtype=uint8),
            'image_right_side': Image(shape=(360, 640, 3), dtype=uint8),
            'left_camera_extrinsic': Tensor(shape=(4, 4), dtype=float32),
            'left_camera_intrinsic': Tensor(shape=(3, 3), dtype=float32),
            'main_camera_intrinsic': Tensor(shape=(3, 3), dtype=float32),
            'right_camera_extrinsic': Tensor(shape=(4, 4), dtype=float32),
            'right_camera_intrinsic': Tensor(shape=(3, 3), dtype=float32),
            'state': Tensor(shape=(8,), dtype=float32),
        }),
        'reward': Scalar(shape=(), dtype=float32),
    }),
})
```
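Each top-level example is one episode, and the nested `steps` field arrives as a `tf.data.Dataset` of per-timestep dicts. A sketch of walking the structure above (same naming assumption as the loading example):

```python
import tensorflow_datasets as tfds

ds = tfds.load('io_ai_tech', split='train')

for episode in ds.take(1):
    print(episode['episode_metadata']['file_path'].numpy())
    # `steps` is itself a tf.data.Dataset of per-timestep dicts.
    for step in episode['steps'].take(2):
        print(step['language_instruction'].numpy())   # task as a byte string
        print(step['action'].shape)                   # (7,)
        print(step['observation']['image'].shape)     # (360, 640, 3)
        print(step['observation']['state'].shape)     # (8,)
```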
- **Feature documentation**:

| Feature                                    | Class        | Shape          | Dtype   | Description |
|--------------------------------------------|--------------|----------------|---------|-------------|
|                                            | FeaturesDict |                |         |             |
| episode_metadata                           | FeaturesDict |                |         |             |
| episode_metadata/file_path                 | Tensor       |                | string  |             |
| steps                                      | Dataset      |                |         |             |
| steps/action                               | Tensor       | (7,)           | float32 |             |
| steps/discount                             | Scalar       |                | float32 |             |
| steps/is_first                             | Tensor       |                | bool    |             |
| steps/is_last                              | Tensor       |                | bool    |             |
| steps/is_terminal                          | Tensor       |                | bool    |             |
| steps/language_embedding                   | Tensor       | (512,)         | float32 |             |
| steps/language_instruction                 | Tensor       |                | string  |             |
| steps/observation                          | FeaturesDict |                |         |             |
| steps/observation/depth                    | Image        | (720, 1280, 1) | uint8   |             |
| steps/observation/fisheye_camera_extrinsic | Tensor       | (4, 4)         | float32 |             |
| steps/observation/fisheye_camera_intrinsic | Tensor       | (3, 3)         | float32 |             |
| steps/observation/image                    | Image        | (360, 640, 3)  | uint8   |             |
| steps/observation/image_fisheye            | Image        | (640, 800, 3)  | uint8   |             |
| steps/observation/image_left_side          | Image        | (360, 640, 3)  | uint8   |             |
| steps/observation/image_right_side         | Image        | (360, 640, 3)  | uint8   |             |
| steps/observation/left_camera_extrinsic    | Tensor       | (4, 4)         | float32 |             |
| steps/observation/left_camera_intrinsic    | Tensor       | (3, 3)         | float32 |             |
| steps/observation/main_camera_intrinsic    | Tensor       | (3, 3)         | float32 |             |
| steps/observation/right_camera_extrinsic   | Tensor       | (4, 4)         | float32 |             |
| steps/observation/right_camera_intrinsic   | Tensor       | (3, 3)         | float32 |             |
| steps/observation/state                    | Tensor       | (8,)           | float32 |             |
| steps/reward                               | Scalar       |                | float32 |             |
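The per-camera `*_intrinsic` (3, 3) and `*_extrinsic` (4, 4) matrices suggest the usual pinhole model, but this page does not document the exact conventions. The sketch below assumes a world-to-camera extrinsic and a standard intrinsic matrix; treat both as assumptions to verify against the data:

```python
import numpy as np

def project_point(point_world, extrinsic, intrinsic):
    """Pinhole projection of a 3D world point to pixel coordinates.

    Assumptions (not documented on this page): `extrinsic` is a 4x4
    world-to-camera transform and `intrinsic` is the usual 3x3 matrix
    [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    """
    p = np.append(point_world, 1.0)   # homogeneous coordinates
    cam = (extrinsic @ p)[:3]         # point in the camera frame
    uvw = intrinsic @ cam             # perspective projection
    return uvw[:2] / uvw[2]           # pixel (u, v)

# Usage with a hypothetical step dict from this dataset:
# obs = step['observation']
# uv = project_point(np.array([0.2, 0.0, 0.5]),
#                    obs['left_camera_extrinsic'].numpy(),
#                    obs['left_camera_intrinsic'].numpy())
```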
- **Supervised keys** (See
  [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
  `None`

- **Figure**
  ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)):
  Not supported.

- **Examples**
  ([tfds.as_dataframe](https://www.tensorflow.org/datasets/api_docs/python/tfds/as_dataframe)):

- **Citation**: