# imagenet_pi
- **Description**:
ImageNet-PI is a relabelled version of the standard ILSVRC2012 ImageNet dataset
in which the labels are provided by a collection of 16 deep neural networks with
different architectures pre-trained on the standard ILSVRC2012. Specifically,
the pre-trained models are downloaded from tf.keras.applications.
In addition to the new labels, ImageNet-PI also provides meta-data about the
annotation process in the form of confidences of the models on their labels and
additional information about each model.
- **Manual download instructions**: This dataset requires you to
  download the source data manually into `download_config.manual_dir`
  (defaults to `~/tensorflow_datasets/downloads/manual/`):

  `manual_dir` should contain two files: `ILSVRC2012_img_train.tar` and
  `ILSVRC2012_img_val.tar`.
  You need to register at <http://www.image-net.org/download-images> to
  get the link to download the dataset.
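Before building the dataset, it can be useful to verify that both archives are in place. A minimal sketch (the helper name is illustrative; the default path mirrors the one quoted above, and `manual_dir` can be overridden via `tfds.download.DownloadConfig`):

```python
import os

# Default manual_dir used by TFDS when no DownloadConfig override is given.
DEFAULT_MANUAL_DIR = "~/tensorflow_datasets/downloads/manual/"
REQUIRED = ("ILSVRC2012_img_train.tar", "ILSVRC2012_img_val.tar")

def missing_archives(manual_dir=DEFAULT_MANUAL_DIR):
    """Return the required ImageNet archives not yet present in manual_dir."""
    root = os.path.expanduser(manual_dir)
    return [f for f in REQUIRED if not os.path.isfile(os.path.join(root, f))]

# An empty list means both tar files were found and preparation can proceed.
print(missing_archives())
```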
For more information see:
[ImageNet-PI](https://github.com/google-research-datasets/imagenet_pi)

- **Homepage**:
  <https://github.com/google-research-datasets/imagenet_pi/>

- **Source code**:
  [`tfds.datasets.imagenet_pi.Builder`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/datasets/imagenet_pi/imagenet_pi_dataset_builder.py)

- **Versions**:

  - **`1.0.0`** (default): Initial release.

- **Download size**: `Unknown size`

- **Dataset size**: `Unknown size`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  Unknown

- **Splits**:

| Split | Examples |
|-------|----------|

- **Feature structure**:

      FeaturesDict({
          'annotator_confidences': Tensor(shape=(16,), dtype=float32),
          'annotator_labels': Tensor(shape=(16,), dtype=int64),
          'clean_label': ClassLabel(shape=(), dtype=int64, num_classes=1000),
          'file_name': Text(shape=(), dtype=string),
          'image': Image(shape=(None, None, 3), dtype=uint8),
      })

- **Feature documentation**:

| Feature               | Class        | Shape           | Dtype   | Description |
|-----------------------|--------------|-----------------|---------|-------------|
|                       | FeaturesDict |                 |         |             |
| annotator_confidences | Tensor       | (16,)           | float32 |             |
| annotator_labels      | Tensor       | (16,)           | int64   |             |
| clean_label           | ClassLabel   |                 | int64   |             |
| file_name             | Text         |                 | string  |             |
| image                 | Image        | (None, None, 3) | uint8   |             |

- **Supervised keys** (See
  [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
  `('image', 'annotator_labels')`

- **Figure**
  ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)):
  Not supported.

- **Examples**
  ([tfds.as_dataframe](https://www.tensorflow.org/datasets/api_docs/python/tfds/as_dataframe)):
  Missing.

- **Citation**:

      @inproceedings{tram,
        author    = {Mark Collier and Rodolphe Jenatton and
                     Effrosyni Kokiopoulou and Jesse Berent},
        editor    = {Kamalika Chaudhuri and Stefanie Jegelka and Le Song and
                     Csaba Szepesv{\'a}ri and Gang Niu and Sivan Sabato},
        title     = {Transfer and Marginalize: Explaining Away Label Noise with
                     Privileged Information},
        booktitle = {International Conference on Machine Learning, {ICML} 2022,
                     17-23 July 2022, Baltimore, Maryland, {USA}},
        series    = {Proceedings of Machine Learning Research},
        volume    = {162},
        pages     = {4219--4237},
        publisher = {PMLR},
        year      = {2022},
        url       = {https://proceedings.mlr.press/v162/collier22a.html},
      }

      @article{ILSVRC15,
        author  = {Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and
                   Sanjeev Satheesh and Sean Ma and Zhiheng Huang and
                   Andrej Karpathy and Aditya Khosla and Michael Bernstein and
                   Alexander C. Berg and Li Fei-Fei},
        title   = {{ImageNet Large Scale Visual Recognition Challenge}},
        year    = {2015},
        journal = {International Journal of Computer Vision (IJCV)},
        doi     = {10.1007/s11263-015-0816-y},
        volume  = {115},
        number  = {3},
        pages   = {211-252},
      }

Last updated 2023-04-06 UTC.
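The per-example meta-data lends itself to simple label-aggregation schemes. A minimal sketch (not part of the dataset or of any method in the paper) that turns the 16 annotator labels and their confidences into a confidence-weighted soft label:

```python
from collections import defaultdict

def aggregate_soft_label(annotator_labels, annotator_confidences):
    """Combine annotator (model) labels into a normalized soft-label dict.

    Each annotator votes for its predicted class, weighted by its reported
    confidence; the result maps class index -> probability mass.
    """
    weights = defaultdict(float)
    for label, conf in zip(annotator_labels, annotator_confidences):
        weights[label] += conf
    total = sum(weights.values()) or 1.0  # guard against all-zero confidences
    return {label: w / total for label, w in weights.items()}

# Toy example with 4 annotators instead of the dataset's 16:
dist = aggregate_soft_label([3, 3, 7, 3], [0.9, 0.8, 0.4, 0.7])
print(dist)  # class 3 receives most of the mass
```

With `tfds.load('imagenet_pi', split='train', as_supervised=True)`, each example is an `(image, annotator_labels)` pair per the supervised keys above; the confidences and the clean label are available when loading without `as_supervised`.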