DataLoader for image classifier.
tflite_model_maker.image_classifier.DataLoader(
dataset, size, index_to_label
)
Args:
  dataset: A tf.data.Dataset object that contains a potentially large set of
    elements, where each element is a pair of (input_data, target). input_data
    is the raw input, such as an image or a piece of text, while target is the
    ground truth for that input, such as the image's classification label.
  size: The size of the dataset. tf.data.Dataset doesn't support a function to
    get the length directly, since it's lazy-loaded and may be infinite.
  index_to_label: A list that maps each label index to its label name.
Attributes:
  num_classes: The number of classes in the dataset.
  size: Returns the size of the dataset.
    Note that this function may return None because the exact size of the
    dataset isn't a necessary parameter to create an instance of this class,
    and tf.data.Dataset doesn't support a function to get the length directly
    since it's lazy-loaded and may be infinite.
    In most cases, however, when an instance of this class is created by helper
    functions like 'from_folder', the size of the dataset will be preprocessed,
    and this function can return an int representing the size of the dataset.
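Because a tf.data.Dataset is evaluated lazily, the size has to be carried alongside the dataset rather than computed from it. A minimal pure-Python sketch of this pattern (the MiniLoader class is a hypothetical stand-in, not the library's implementation):

```python
class MiniLoader:
    """Toy stand-in for DataLoader: pairs a lazy dataset with its known size."""

    def __init__(self, dataset, size, index_to_label):
        self._dataset = dataset          # a lazy iterable, like tf.data.Dataset
        self._size = size                # may be None if unknown
        self.index_to_label = index_to_label

    @property
    def num_classes(self):
        return len(self.index_to_label)

    @property
    def size(self):
        # May be None: a lazy dataset has no cheap, direct length.
        return self._size

    def __len__(self):
        # Fall back to exhausting the dataset when the size wasn't recorded.
        if self._size is not None:
            return self._size
        return sum(1 for _ in self._dataset)

# Pairs of (input_data, target), as described above.
pairs = [("img0", 0), ("img1", 1), ("img2", 0)]
loader = MiniLoader(iter(pairs), size=3, index_to_label=["cat", "dog"])
print(loader.num_classes, len(loader))  # 2 3
```

Helper constructors like from_folder record the size at load time, which is why size is usually an int even though the wrapped dataset cannot report its own length.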
Methods
from_folder
@classmethod
from_folder(
filename, shuffle=True
)
Loads images with labels for image classification.
Assumes that images with the same label are stored in the same subdirectory.
Args:
  filename: Name of the folder that contains the labeled image
    subdirectories.
  shuffle: boolean, if True, the data is randomly shuffled.
Returns:
  ImageDataset containing images, labels, and other related info.
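The directory-to-label convention above can be illustrated with a small, hypothetical helper (plain Python with stdlib only, not the library's implementation): each subdirectory name becomes a class label for the files inside it.

```python
import pathlib
import random
import tempfile

def load_labeled_paths(folder, shuffle=True, seed=0):
    """Hypothetical sketch of from_folder's labeling convention:
    each subdirectory name is the class label for the files inside it."""
    root = pathlib.Path(folder)
    labels = sorted(p.name for p in root.iterdir() if p.is_dir())
    pairs = [(f, label) for label in labels
             for f in sorted((root / label).iterdir()) if f.is_file()]
    if shuffle:
        random.Random(seed).shuffle(pairs)
    return pairs, labels

# Build a throwaway folder layout: cats/ and dogs/, each with one image file.
with tempfile.TemporaryDirectory() as tmp:
    for label, name in [("cats", "a.jpg"), ("dogs", "b.jpg")]:
        d = pathlib.Path(tmp) / label
        d.mkdir()
        (d / name).write_bytes(b"")
    pairs, labels = load_labeled_paths(tmp, shuffle=False)
    print(labels)                     # ['cats', 'dogs']
    print([lab for _, lab in pairs])  # ['cats', 'dogs']
```

The real from_folder additionally decodes the images and records the dataset size, but the label inference follows this same subdirectory convention.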
from_tfds
@classmethod
from_tfds(
name
)
Loads data from tensorflow_datasets.
gen_dataset
gen_dataset(
batch_size=1,
is_training=False,
shuffle=False,
input_pipeline_context=None,
preprocess=None,
drop_remainder=False
)
Generates a sharded and batched tf.data.Dataset for training/evaluation.
Args:
  batch_size: An integer; the returned dataset is batched by this size.
  is_training: A boolean; when True, the returned dataset is optionally
    shuffled and repeated as an endless dataset.
  shuffle: A boolean; when True, the returned dataset is shuffled to
    create randomness during model training.
  input_pipeline_context: An InputContext instance, used to shard the
    dataset among multiple workers when a distribution strategy is used.
  preprocess: A function taking three arguments in order: feature, label,
    and the boolean is_training.
  drop_remainder: boolean, whether the final batch drops the remainder.
Returns:
  A TF dataset ready to be consumed by a Keras model.
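The pipeline these arguments describe (shuffle, repeat when training, preprocess, then batch) can be sketched in plain Python using lists in place of a real tf.data.Dataset; the function name and the `epochs` cap are inventions of this sketch, not part of the library.

```python
import random

def gen_batches(elements, batch_size=1, is_training=False, shuffle=False,
                preprocess=None, drop_remainder=False, epochs=2):
    """Toy sketch of gen_dataset's batching semantics on a plain list.

    A real tf.data pipeline repeats endlessly when is_training=True; here
    the repetition is capped at `epochs` passes so the sketch terminates.
    """
    data = list(elements)
    if shuffle:
        random.Random(0).shuffle(data)
    if is_training:
        data = data * epochs          # stand-in for ds.repeat()
    if preprocess:
        data = [preprocess(feat, label, is_training) for feat, label in data]
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    if drop_remainder and batches and len(batches[-1]) < batch_size:
        batches.pop()                 # stand-in for batch(..., drop_remainder=True)
    return batches

pairs = [("x0", 0), ("x1", 1), ("x2", 0)]
print(len(gen_batches(pairs, batch_size=2)))                       # 2
print(len(gen_batches(pairs, batch_size=2, drop_remainder=True)))  # 1
```

drop_remainder matters when a model is compiled with a fixed batch dimension: without it, the last batch of an epoch may be smaller than batch_size.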
split
split(
fraction
)
Splits the dataset into two sub-datasets with the given fraction.
Primarily used for splitting the dataset into training and testing sets.
Args:
  fraction: float, the fraction of the original data that goes into the
    first returned sub-dataset.
Returns:
  The two split sub-datasets.
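For a dataset of known size, the fraction determines how many elements each sub-dataset receives. A minimal sketch of that arithmetic (plain Python; the helper name is an invention of this sketch, and the real method returns two dataset objects rather than sizes):

```python
def split_sizes(size, fraction):
    """Sketch of split()'s arithmetic: the first sub-dataset gets
    floor(size * fraction) elements, the second gets the rest."""
    first = int(size * fraction)
    return first, size - first

print(split_sizes(100, 0.8))  # (80, 20)
print(split_sizes(7, 0.5))    # (3, 4)
```

A typical call is `train_data, test_data = data.split(0.8)`, after which each sub-dataset carries its own size.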
__len__
__len__()