# nyu_depth_v2
- **Description**:

The NYU-Depth V2 dataset consists of video sequences from a variety of
indoor scenes, recorded by both the RGB and depth cameras of the Microsoft
Kinect.

- **Additional Documentation**:
  [Explore on Papers With Code](https://paperswithcode.com/dataset/nyuv2)

- **Homepage**:
  [https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html](https://cs.nyu.edu/%7Esilberman/datasets/nyu_depth_v2.html)

- **Source code**:
  [`tfds.datasets.nyu_depth_v2.Builder`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/datasets/nyu_depth_v2/nyu_depth_v2_dataset_builder.py)

- **Versions**:
  - **`0.0.1`** (default): No release notes.

- **Download size**: `31.92 GiB`

- **Dataset size**: `74.03 GiB`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  No
- **Splits**:

| Split          | Examples |
|----------------|----------|
| `'train'`      | 47,584   |
| `'validation'` | 654      |
- **Feature structure**:

    FeaturesDict({
        'depth': Tensor(shape=(480, 640), dtype=float16),
        'image': Image(shape=(480, 640, 3), dtype=uint8),
    })
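To sanity-check the structure above, here is a minimal sketch using the
standard `tfds.load` API (assuming `tensorflow_datasets` is installed and
the ~32 GiB download has completed):

    import tensorflow_datasets as tfds

    # Load the small validation split; pass split='train' for the full set.
    ds = tfds.load('nyu_depth_v2', split='validation')

    for example in ds.take(1):
        print(example['image'].shape, example['image'].dtype)  # (480, 640, 3) uint8
        print(example['depth'].shape, example['depth'].dtype)  # (480, 640) float16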
- **Feature documentation**:

| Feature | Class        | Shape         | Dtype   | Description |
|---------|--------------|---------------|---------|-------------|
|         | FeaturesDict |               |         |             |
| depth   | Tensor       | (480, 640)    | float16 |             |
| image   | Image        | (480, 640, 3) | uint8   |             |

- **Supervised keys** (see
  [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
  `('image', 'depth')`
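Since the supervised keys are `('image', 'depth')`, passing
`as_supervised=True` yields ready-made `(input, target)` tuples. Below is a
hedged sketch of a typical input pipeline; the float cast, `/ 255.0`
normalization, and batch size are illustrative choices, not part of the
dataset spec:

    import tensorflow as tf
    import tensorflow_datasets as tfds

    # as_supervised=True returns (image, depth) tuples per the supervised keys.
    ds = tfds.load('nyu_depth_v2', split='train', as_supervised=True)

    ds = (
        ds
        # Illustrative preprocessing: scale RGB to [0, 1], widen depth to float32.
        .map(
            lambda image, depth: (tf.cast(image, tf.float32) / 255.0,
                                  tf.cast(depth, tf.float32)),
            num_parallel_calls=tf.data.AUTOTUNE,
        )
        .batch(8)  # example batch size
        .prefetch(tf.data.AUTOTUNE)
    )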
- **Citation**:
    @inproceedings{Silberman:ECCV12,
      author    = {Nathan Silberman and Derek Hoiem and Pushmeet Kohli and Rob Fergus},
      title     = {Indoor Segmentation and Support Inference from RGBD Images},
      booktitle = {ECCV},
      year      = {2012}
    }
    @inproceedings{icra_2019_fastdepth,
      author    = {Wofk, Diana and Ma, Fangchang and Yang, Tien-Ju and Karaman, Sertac and Sze, Vivienne},
      title     = {FastDepth: Fast Monocular Depth Estimation on Embedded Systems},
      booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
      year      = {2019}
    }