# ucf101

- **Description**: A 101-label video classification dataset.

- **Additional Documentation**: [Explore on Papers With Code](https://paperswithcode.com/dataset/ucf101)

- **Homepage**: <https://www.crcv.ucf.edu/data-sets/ucf101/>

- **Source code**: [`tfds.video.Ucf101`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/video/ucf101.py)

- **Versions**:
  - **`2.0.0`** (default): New split API (<https://tensorflow.org/datasets/splits>)

- **Download size**: `6.48 GiB`

- **Auto-cached** ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)): No

- **Supervised keys** (See [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)): `None`

- **Figure** ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)): Not supported.

- **Citation**:
    @article{DBLP:journals/corr/abs-1212-0402,
      author       = {Khurram Soomro and
                      Amir Roshan Zamir and
                      Mubarak Shah},
      title        = {{UCF101:} {A} Dataset of 101 Human Actions Classes From Videos in
                      The Wild},
      journal      = {CoRR},
      volume       = {abs/1212.0402},
      year         = {2012},
      url          = {http://arxiv.org/abs/1212.0402},
      archivePrefix = {arXiv},
      eprint       = {1212.0402},
      timestamp    = {Mon, 13 Aug 2018 16:47:45 +0200},
      biburl       = {https://dblp.org/rec/bib/journals/corr/abs-1212-0402},
      bibsource    = {dblp computer science bibliography, https://dblp.org}
    }
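The dataset is loaded by config name through `tfds.load`. Below is a minimal sketch; the actual load call is shown in comments because it triggers the ~6.48 GiB download:

```python
# The four available builder configs; "ucf101/ucf101_1_256" is the default.
configs = ["ucf101_1_256", "ucf101_1", "ucf101_2", "ucf101_3"]
names = [f"ucf101/{c}" for c in configs]

# Loading sketch (not executed here; requires tensorflow_datasets and the
# ~6.48 GiB download on first use):
#
#   import tensorflow_datasets as tfds
#   ds = tfds.load(names[0], split="train", shuffle_files=True)
```

Note that `supervised_keys` is `None` for this dataset, so `as_supervised=True` is not available; examples come as dicts with `'video'` and `'label'` keys.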
ucf101/ucf101_1_256 (default config)
------------------------------------

- **Config description**: 256x256 UCF with the first action recognition split.

- **Dataset size**: `7.40 GiB`

- **Splits**:

| Split     | Examples |
|-----------|----------|
| `'test'`  | 3,783    |
| `'train'` | 9,537    |

- **Feature structure**:

      FeaturesDict({
          'label': ClassLabel(shape=(), dtype=int64, num_classes=101),
          'video': Video(Image(shape=(256, 256, 3), dtype=uint8)),
      })

- **Feature documentation**:

| Feature | Class        | Shape               | Dtype | Description |
|---------|--------------|---------------------|-------|-------------|
|         | FeaturesDict |                     |       |             |
| label   | ClassLabel   |                     | int64 |             |
| video   | Video(Image) | (None, 256, 256, 3) | uint8 |             |
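The `None` in the video shape is the per-clip frame count, which varies from example to example. A minimal sketch of what one decoded example of this config looks like, using a synthetic NumPy array as a stand-in for real TFDS output (the frame count of 32 is an arbitrary choice):

```python
import numpy as np

# Synthetic stand-in for one ucf101/ucf101_1_256 example. Shapes and dtypes
# follow the catalog's feature structure; only the first axis of 'video'
# (the frame count) varies per clip.
num_frames = 32
example = {
    "video": np.zeros((num_frames, 256, 256, 3), dtype=np.uint8),
    "label": np.int64(7),  # one of the 101 action classes
}

# Spatial shape and channel count are fixed in this config.
assert example["video"].shape[1:] == (256, 256, 3)
assert 0 <= int(example["label"]) < 101
```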
ucf101/ucf101_1
---------------

- **Config description**: UCF with the action recognition split #1.

- **Dataset size**: `8.48 GiB`

- **Splits**:

| Split     | Examples |
|-----------|----------|
| `'test'`  | 3,783    |
| `'train'` | 9,537    |

- **Feature structure**:

      FeaturesDict({
          'label': ClassLabel(shape=(), dtype=int64, num_classes=101),
          'video': Video(Image(shape=(None, None, 3), dtype=uint8)),
      })

- **Feature documentation**:

| Feature | Class        | Shape                 | Dtype | Description |
|---------|--------------|-----------------------|-------|-------------|
|         | FeaturesDict |                       |       |             |
| label   | ClassLabel   |                       | int64 |             |
| video   | Video(Image) | (None, None, None, 3) | uint8 |             |
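Unlike the default config, the spatial dimensions here are also variable, so clips must be resized (or cropped/padded) to a common shape before they can be stacked into a batch. A minimal sketch using a nearest-neighbour resize written in plain NumPy (a stand-in for a real resize op such as `tf.image.resize`; the 224x224 target and the synthetic clip shapes are arbitrary):

```python
import numpy as np

def resize_video_nn(video: np.ndarray, height: int, width: int) -> np.ndarray:
    """Nearest-neighbour resize of a (frames, H, W, 3) video.

    Index-samples source rows and columns; crude, but enough to give every
    clip a fixed spatial shape for batching.
    """
    _, h, w, _ = video.shape
    ys = (np.arange(height) * h) // height  # source row for each output row
    xs = (np.arange(width) * w) // width    # source column for each output column
    return video[:, ys][:, :, xs]

# Two synthetic clips with different spatial sizes, as in this config:
clip_a = np.zeros((16, 240, 320, 3), dtype=np.uint8)
clip_b = np.zeros((16, 112, 176, 3), dtype=np.uint8)
batch = np.stack([resize_video_nn(c, 224, 224) for c in (clip_a, clip_b)])
# batch now has a uniform shape of (2, 16, 224, 224, 3)
```

In practice the frame axis also varies per clip, so a real pipeline would additionally sample or pad a fixed number of frames before stacking.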
ucf101/ucf101_2
---------------

- **Config description**: UCF with the action recognition split #2.

- **Dataset size**: `8.48 GiB`

- **Splits**:

| Split     | Examples |
|-----------|----------|
| `'test'`  | 3,734    |
| `'train'` | 9,586    |

- **Feature structure**:

      FeaturesDict({
          'label': ClassLabel(shape=(), dtype=int64, num_classes=101),
          'video': Video(Image(shape=(None, None, 3), dtype=uint8)),
      })

- **Feature documentation**:

| Feature | Class        | Shape                 | Dtype | Description |
|---------|--------------|-----------------------|-------|-------------|
|         | FeaturesDict |                       |       |             |
| label   | ClassLabel   |                       | int64 |             |
| video   | Video(Image) | (None, None, None, 3) | uint8 |             |
ucf101/ucf101_3
---------------

- **Config description**: UCF with the action recognition split #3.

- **Dataset size**: `8.48 GiB`

- **Splits**:

| Split     | Examples |
|-----------|----------|
| `'test'`  | 3,696    |
| `'train'` | 9,624    |

- **Feature structure**:

      FeaturesDict({
          'label': ClassLabel(shape=(), dtype=int64, num_classes=101),
          'video': Video(Image(shape=(None, None, 3), dtype=uint8)),
      })

- **Feature documentation**:

| Feature | Class        | Shape                 | Dtype | Description |
|---------|--------------|-----------------------|-------|-------------|
|         | FeaturesDict |                       |       |             |
| label   | ClassLabel   |                       | int64 |             |
| video   | Video(Image) | (None, None, None, 3) | uint8 |             |
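The three numbered configs are the three official UCF101 train/test splits: different partitions of the same pool of clips, so each test/train pair sums to the full 13,320-video dataset. A quick sanity check using the counts from the tables above:

```python
# (test, train) example counts per official split, from the tables above.
splits = {
    "ucf101_1": (3_783, 9_537),
    "ucf101_2": (3_734, 9_586),
    "ucf101_3": (3_696, 9_624),
}

totals = {name: test + train for name, (test, train) in splits.items()}
# Every split partitions the same 13,320 clips.
assert set(totals.values()) == {13_320}
```

Published UCF101 results are conventionally reported as accuracy averaged over these three splits.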
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2022-11-23 UTC.