# the300w_lp
The 300W-LP dataset is expanded from 300W, which standardises multiple alignment
databases with 68 landmarks, including AFW, LFPW, HELEN, IBUG and XM2VTS.
Applying the proposed face profiling to 300W, 300W-LP generates 61,225 samples
across large poses (1,786 from IBUG, 5,207 from AFW, 16,556 from LFPW and 37,676
from HELEN; XM2VTS is not used).
The dataset can be used as a training set for the following computer vision
tasks: face attribute recognition and landmark (or facial part) localization.
- **Homepage**:
  <http://www.cbsr.ia.ac.cn/users/xiangyuzhu/projects/3DDFA/main.htm>

- **Source code**:
  [`tfds.datasets.the300w_lp.Builder`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/datasets/the300w_lp/the300w_lp_dataset_builder.py)

- **Versions**:

  - **`1.0.0`** (default): No release notes.

- **Download size**: `2.63 GiB`

- **Dataset size**: `1.33 GiB`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  No

- **Splits**:

| Split     | Examples |
|-----------|----------|
| `'train'` | 61,225   |
- **Feature structure**:

```python
FeaturesDict({
    'color_params': Tensor(shape=(7,), dtype=float32),
    'exp_params': Tensor(shape=(29,), dtype=float32),
    'illum_params': Tensor(shape=(10,), dtype=float32),
    'image': Image(shape=(450, 450, 3), dtype=uint8),
    'landmarks_2d': Tensor(shape=(68, 2), dtype=float32),
    'landmarks_3d': Tensor(shape=(68, 2), dtype=float32),
    'landmarks_origin': Tensor(shape=(68, 2), dtype=float32),
    'pose_params': Tensor(shape=(7,), dtype=float32),
    'roi': Tensor(shape=(4,), dtype=float32),
    'shape_params': Tensor(shape=(199,), dtype=float32),
    'tex_params': Tensor(shape=(199,), dtype=float32),
})
```
- **Feature documentation**:

| Feature          | Class        | Shape         | Dtype   | Description |
|------------------|--------------|---------------|---------|-------------|
|                  | FeaturesDict |               |         |             |
| color_params     | Tensor       | (7,)          | float32 |             |
| exp_params       | Tensor       | (29,)         | float32 |             |
| illum_params     | Tensor       | (10,)         | float32 |             |
| image            | Image        | (450, 450, 3) | uint8   |             |
| landmarks_2d     | Tensor       | (68, 2)       | float32 |             |
| landmarks_3d     | Tensor       | (68, 2)       | float32 |             |
| landmarks_origin | Tensor       | (68, 2)       | float32 |             |
| pose_params      | Tensor       | (7,)          | float32 |             |
| roi              | Tensor       | (4,)          | float32 |             |
| shape_params     | Tensor       | (199,)        | float32 |             |
| tex_params       | Tensor       | (199,)        | float32 |             |

- **Supervised keys** (See
  [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
  `None`
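In practice, examples with this structure come from `tfds.load('the300w_lp', split='train')`, which requires the 2.63 GiB download. The sketch below instead builds a synthetic record matching the documented shapes and dtypes (NumPy only) to show one plausible post-processing step: scaling the 2D landmarks into `[0, 1]`. The assumptions that `landmarks_2d` holds pixel coordinates of the 450×450 image in `(x, y)` order are illustrative, not stated by the dataset documentation.

```python
import numpy as np

# Synthetic example mirroring the documented feature structure; a real
# record would come from tfds.load('the300w_lp', split='train').
rng = np.random.default_rng(0)
example = {
    "image": rng.integers(0, 256, size=(450, 450, 3), dtype=np.uint8),
    "landmarks_2d": rng.uniform(0.0, 450.0, size=(68, 2)).astype(np.float32),
    "pose_params": np.zeros(7, dtype=np.float32),
    "shape_params": np.zeros(199, dtype=np.float32),
}

def normalize_landmarks(landmarks: np.ndarray, image_shape) -> np.ndarray:
    """Scale pixel-coordinate landmarks to [0, 1].

    Assumes landmarks are (x, y) pairs in the pixel space of an image
    with shape (height, width, channels).
    """
    h, w = image_shape[:2]
    return landmarks / np.array([w, h], dtype=np.float32)

norm = normalize_landmarks(example["landmarks_2d"], example["image"].shape)
assert norm.shape == (68, 2) and norm.dtype == np.float32
```

Dividing by `[w, h]` (not `[h, w]`) reflects the assumed `(x, y)` ordering; if the coordinates were `(row, col)`, the divisor would be swapped.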

- **Citation**:

```bibtex
@article{DBLP:journals/corr/ZhuLLSL15,
  author       = {Xiangyu Zhu and
                  Zhen Lei and
                  Xiaoming Liu and
                  Hailin Shi and
                  Stan Z. Li},
  title        = {Face Alignment Across Large Poses: {A} 3D Solution},
  journal      = {CoRR},
  volume       = {abs/1511.07212},
  year         = {2015},
  url          = {http://arxiv.org/abs/1511.07212},
  archivePrefix = {arXiv},
  eprint       = {1511.07212},
  timestamp    = {Mon, 13 Aug 2018 16:48:23 +0200},
  biburl       = {https://dblp.org/rec/bib/journals/corr/ZhuLLSL15},
  bibsource    = {dblp computer science bibliography, https://dblp.org}
}
```
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2022-11-23 UTC.