# lost_and_found
- **Description**:

The LostAndFound dataset addresses the problem of detecting unexpected small
obstacles on the road, often caused by lost cargo. The dataset comprises 112
stereo video sequences with 2104 annotated frames (roughly every tenth frame
of the recorded data was picked for annotation).
The dataset is designed analogously to the Cityscapes dataset. It provides:

- stereo image pairs in either 8- or 16-bit color resolution
- precomputed disparity maps
- coarse semantic labels for objects and the street

Descriptions of the labels are given at
<http://www.6d-vision.com/laf_table.pdf>.

- **Homepage**: <http://www.6d-vision.com/lostandfounddataset>
- **Source code**: [`tfds.datasets.lost_and_found.Builder`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/datasets/lost_and_found/lost_and_found_dataset_builder.py)
- **Versions**: `1.0.0` (default): no release notes.
- **Auto-cached**: No
- **Supervised keys**: `None`
- **Splits**:

| Split     | Examples |
|-----------|----------|
| `'test'`  | 1,203    |
| `'train'` | 1,036    |

- **Citation**:
@inproceedings{pinggera2016lost,
title={Lost and found: detecting small road hazards for self-driving vehicles},
author={Pinggera, Peter and Ramos, Sebastian and Gehrig, Stefan and Franke, Uwe and Rother, Carsten and Mester, Rudolf},
booktitle={2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year={2016}
}
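Any of the four configs documented below can be loaded by name with `tfds.load`. The sketch below only assembles the builder name from the config names listed on this page; the actual `tfds.load` call is shown but not executed, since it triggers a multi-GiB download on first use.

```python
# Config names as listed on this catalog page.
CONFIGS = [
    "semantic_segmentation",  # default config
    "stereo_disparity",
    "full",
    "full_16bit",
]


def builder_name(config: str) -> str:
    """Return the full TFDS name for a Lost and Found config."""
    if config not in CONFIGS:
        raise ValueError(f"unknown config: {config}")
    return f"lost_and_found/{config}"


# Example usage (not run here, requires the dataset download):
# import tensorflow_datasets as tfds
# ds = tfds.load(builder_name("semantic_segmentation"), split="train")

print(builder_name("full_16bit"))
```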
## lost_and_found/semantic_segmentation (default config)

- **Config description**: Lost and Found semantic segmentation dataset.
- **Download size**: `5.44 GiB`
- **Dataset size**: `5.42 GiB`
- **Feature structure**:
FeaturesDict({
'image_id': Text(shape=(), dtype=string),
'image_left': Image(shape=(1024, 2048, 3), dtype=uint8),
'segmentation_label': Image(shape=(1024, 2048, 1), dtype=uint8),
})
- **Feature documentation**:

| Feature            | Class        | Shape           | Dtype  | Description |
|--------------------|--------------|-----------------|--------|-------------|
|                    | FeaturesDict |                 |        |             |
| image_id           | Text         |                 | string |             |
| image_left         | Image        | (1024, 2048, 3) | uint8  |             |
| segmentation_label | Image        | (1024, 2048, 1) | uint8  |             |
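Each example in this config pairs a left camera image with a dense label map of the same spatial size. A minimal sketch of working with a decoded `segmentation_label`, here using a small synthetic NumPy array in place of a real `(1024, 2048, 1)` map (the class id used below is made up; the actual ids are defined in the label table linked above):

```python
import numpy as np


def label_fraction(segmentation_label: np.ndarray, class_id: int) -> float:
    """Fraction of pixels carrying `class_id` in a (H, W, 1) label map."""
    if segmentation_label.ndim != 3 or segmentation_label.shape[-1] != 1:
        raise ValueError("expected a (H, W, 1) label map")
    return float(np.mean(segmentation_label[..., 0] == class_id))


# Synthetic stand-in for one decoded segmentation_label.
label = np.zeros((4, 8, 1), dtype=np.uint8)
label[1:3, 2:6, 0] = 7  # pretend class 7 marks a small obstacle
print(label_fraction(label, 7))  # 8 of 32 pixels -> 0.25
```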
## lost_and_found/stereo_disparity

- **Config description**: Lost and Found stereo images and disparity maps.
- **Download size**: `12.16 GiB`
- **Dataset size**: `12.22 GiB`
- **Feature structure**:
FeaturesDict({
'disparity_map': Image(shape=(1024, 2048, 1), dtype=uint8),
'image_id': Text(shape=(), dtype=string),
'image_left': Image(shape=(1024, 2048, 3), dtype=uint8),
'image_right': Image(shape=(1024, 2048, 3), dtype=uint8),
})
- **Feature documentation**:

| Feature       | Class        | Shape           | Dtype  | Description |
|---------------|--------------|-----------------|--------|-------------|
|               | FeaturesDict |                 |        |             |
| disparity_map | Image        | (1024, 2048, 1) | uint8  |             |
| image_id      | Text         |                 | string |             |
| image_left    | Image        | (1024, 2048, 3) | uint8  |             |
| image_right   | Image        | (1024, 2048, 3) | uint8  |             |
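Because the dataset is designed analogously to Cityscapes, the precomputed disparity PNGs follow the Cityscapes encoding: a stored pixel value p > 0 maps to disparity (p - 1) / 256, and p = 0 marks an invalid measurement. A sketch of that decoding (the convention is taken from the Cityscapes documentation, not stated on this page, so treat it as an assumption):

```python
import numpy as np


def decode_disparity(raw: np.ndarray) -> np.ndarray:
    """Decode Cityscapes-style disparity: d = (p - 1) / 256, p == 0 invalid.

    Invalid pixels are returned as NaN so they are easy to mask out.
    """
    raw = raw.astype(np.float32)
    disparity = (raw - 1.0) / 256.0
    disparity[raw == 0.0] = np.nan
    return disparity


# Synthetic raw values: 0 is invalid, 257 decodes to exactly 1.0.
raw = np.array([[0, 257, 513]], dtype=np.uint16)
d = decode_disparity(raw)
```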
## lost_and_found/full

- **Config description**: Full Lost and Found dataset.
- **Download size**: `12.19 GiB`
- **Dataset size**: `12.25 GiB`
- **Feature structure**:
FeaturesDict({
'disparity_map': Image(shape=(1024, 2048, 1), dtype=uint8),
'image_id': Text(shape=(), dtype=string),
'image_left': Image(shape=(1024, 2048, 3), dtype=uint8),
'image_right': Image(shape=(1024, 2048, 3), dtype=uint8),
'instance_id': Image(shape=(1024, 2048, 1), dtype=uint8),
'segmentation_label': Image(shape=(1024, 2048, 1), dtype=uint8),
})
- **Feature documentation**:

| Feature            | Class        | Shape           | Dtype  | Description |
|--------------------|--------------|-----------------|--------|-------------|
|                    | FeaturesDict |                 |        |             |
| disparity_map      | Image        | (1024, 2048, 1) | uint8  |             |
| image_id           | Text         |                 | string |             |
| image_left         | Image        | (1024, 2048, 3) | uint8  |             |
| image_right        | Image        | (1024, 2048, 3) | uint8  |             |
| instance_id        | Image        | (1024, 2048, 1) | uint8  |             |
| segmentation_label | Image        | (1024, 2048, 1) | uint8  |             |
## lost_and_found/full_16bit

- **Config description**: Full Lost and Found dataset.
- **Download size**: `34.90 GiB`
- **Dataset size**: `35.05 GiB`
- **Feature structure**:
FeaturesDict({
'disparity_map': Image(shape=(1024, 2048, 1), dtype=uint8),
'image_id': Text(shape=(), dtype=string),
'image_left': Image(shape=(1024, 2048, 3), dtype=uint8),
'image_right': Image(shape=(1024, 2048, 3), dtype=uint8),
'instance_id': Image(shape=(1024, 2048, 1), dtype=uint8),
'segmentation_label': Image(shape=(1024, 2048, 1), dtype=uint8),
})
- **Feature documentation**:

| Feature            | Class        | Shape           | Dtype  | Description |
|--------------------|--------------|-----------------|--------|-------------|
|                    | FeaturesDict |                 |        |             |
| disparity_map      | Image        | (1024, 2048, 1) | uint8  |             |
| image_id           | Text         |                 | string |             |
| image_left         | Image        | (1024, 2048, 3) | uint8  |             |
| image_right        | Image        | (1024, 2048, 3) | uint8  |             |
| instance_id        | Image        | (1024, 2048, 1) | uint8  |             |
| segmentation_label | Image        | (1024, 2048, 1) | uint8  |             |
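The `full_16bit` config corresponds to the 16-bit color variants of the source images (the dataset provides stereo pairs in either 8- or 16-bit color resolution). For quick visualization, 16-bit images are commonly mapped down to 8 bits; a minimal sketch by keeping the high byte (a display-side convenience, not part of the dataset itself):

```python
import numpy as np


def to_uint8(image16: np.ndarray) -> np.ndarray:
    """Map a 16-bit image to 8 bits by keeping the high byte."""
    if image16.dtype != np.uint16:
        raise TypeError("expected a uint16 image")
    return (image16 >> 8).astype(np.uint8)


# Synthetic 16-bit pixel values spanning the full range.
img = np.array([[0, 256, 65535]], dtype=np.uint16)
print(to_uint8(img))  # [[  0   1 255]]
```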
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-06-01 UTC.