wider_face
The WIDER FACE dataset is a face detection benchmark whose images are selected
from the publicly available WIDER dataset. 32,203 images were chosen and
393,703 faces labeled, with a high degree of variability in scale, pose, and
occlusion. The dataset is organized into 61 event classes; for each event
class, 40%/10%/50% of the data are randomly selected as the training,
validation, and testing sets. Evaluation adopts the same metric employed by
the PASCAL VOC dataset. As with the MALF and Caltech datasets, bounding-box
ground truth is not released for the test images; users are required to submit
final prediction files, which are then evaluated by the benchmark maintainers.
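The published split sizes follow from applying the 40%/10%/50% per-event-class sampling to the 32,203-image total. A quick sanity check in plain Python (counts taken from the splits table on this page):

```python
# Published WIDER FACE split sizes.
splits = {"train": 12_880, "validation": 3_226, "test": 16_097}
total = 32_203  # images selected from the WIDER dataset

# The three splits partition the full image set.
assert sum(splits.values()) == total

# Per-class 40%/10%/50% sampling yields roughly those fractions overall.
for name, count in splits.items():
    print(f"{name}: {count / total:.1%}")
```

Running this prints fractions of 40.0%, 10.0%, and 50.0%, matching the stated split ratios.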
| Split          | Examples |
|----------------|----------|
| `'test'`       | 16,097   |
| `'train'`      | 12,880   |
| `'validation'` | 3,226    |
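The dataset can be loaded by name with the standard `tfds.load` API. A minimal sketch (the `load_wider_face` helper is illustrative, not part of TFDS; the first call triggers the multi-gigabyte download):

```python
def load_wider_face(split="train"):
    """Load a WIDER FACE split via TensorFlow Datasets.

    Requires the ~3.42 GiB download on first use.
    """
    # Deferred import so this sketch stays importable without TFDS installed.
    import tensorflow_datasets as tfds

    # This dataset has no supervised keys, so `as_supervised=True` is not
    # available; examples arrive as dicts matching the feature structure.
    return tfds.load("wider_face", split=split, shuffle_files=True)
```

Typical usage would be `ds = load_wider_face("validation")`, then iterating the returned `tf.data.Dataset` of example dicts.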
```
FeaturesDict({
    'faces': Sequence({
        'bbox': BBoxFeature(shape=(4,), dtype=float32),
        'blur': uint8,
        'expression': bool,
        'illumination': bool,
        'invalid': bool,
        'occlusion': uint8,
        'pose': bool,
    }),
    'image': Image(shape=(None, None, 3), dtype=uint8),
    'image/filename': Text(shape=(), dtype=string),
})
```
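A `BBoxFeature` stores coordinates normalized to `[0, 1]` in `[ymin, xmin, ymax, xmax]` order, so drawing a box on the source image requires rescaling by the image's height and width. A small self-contained helper (the function name is illustrative):

```python
def bbox_to_pixels(bbox, height, width):
    """Convert a TFDS-normalized [ymin, xmin, ymax, xmax] box to integer
    pixel coordinates for an image of the given height and width."""
    ymin, xmin, ymax, xmax = bbox
    return (round(ymin * height), round(xmin * width),
            round(ymax * height), round(xmax * width))

# e.g. a box covering the top-left quarter of a 480x640 image:
print(bbox_to_pixels((0.0, 0.0, 0.5, 0.5), 480, 640))  # (0, 0, 240, 320)
```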
| Feature            | Class        | Shape           | Dtype   | Description |
|--------------------|--------------|-----------------|---------|-------------|
|                    | FeaturesDict |                 |         |             |
| faces              | Sequence     |                 |         |             |
| faces/bbox         | BBoxFeature  | (4,)            | float32 |             |
| faces/blur         | Tensor       |                 | uint8   |             |
| faces/expression   | Tensor       |                 | bool    |             |
| faces/illumination | Tensor       |                 | bool    |             |
| faces/invalid      | Tensor       |                 | uint8   |             |
| faces/occlusion    | Tensor       |                 | uint8   |             |
| faces/pose         | Tensor       |                 | bool    |             |
| image              | Image        | (None, None, 3) | uint8   |             |
| image/filename     | Text         |                 | string  |             |
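The TFDS schema fixes only the dtypes of the per-face attributes. The integer codes below follow the annotation conventions of the original WIDER FACE release (three levels for blur and occlusion, booleans flagging atypical conditions); these mappings are an assumption drawn from that release's annotation notes, not part of the TFDS schema itself:

```python
# Assumed human-readable labels, per the original WIDER FACE annotations.
BLUR = {0: "clear", 1: "normal blur", 2: "heavy blur"}
OCCLUSION = {0: "none", 1: "partial", 2: "heavy"}

def describe_face(face):
    """Render one entry of the 'faces' sequence as a readable dict."""
    return {
        "blur": BLUR[int(face["blur"])],
        "occlusion": OCCLUSION[int(face["occlusion"])],
        "atypical_expression": bool(face["expression"]),
        "extreme_illumination": bool(face["illumination"]),
        "atypical_pose": bool(face["pose"]),
        "invalid": bool(face["invalid"]),
    }

print(describe_face({"blur": 2, "occlusion": 1, "expression": False,
                     "illumination": False, "pose": True, "invalid": False}))
```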

@inproceedings{yang2016wider,
Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou},
Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
Title = {WIDER FACE: A Face Detection Benchmark},
Year = {2016} }
- **Homepage**: <http://shuoyang1213.me/WIDERFACE/>
- **Source code**: [`tfds.object_detection.WiderFace`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/object_detection/wider_face.py)
- **Versions**: `0.1.0` (default): No release notes.
- **Download size**: `3.42 GiB`
- **Dataset size**: `3.45 GiB`
- **Auto-cached** ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)): No
- **Supervised keys** (see [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)): `None`

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2022-12-06 UTC.