Model good at detecting environmental sounds, using YAMNet embedding.
tflite_model_maker.audio_classifier.YamNetSpec(
    model_dir: None = None,
    strategy: None = None,
    yamnet_model_handle='https://hub.tensorflow.google.cn/google/yamnet/1',
    frame_length=EXPECTED_WAVEFORM_LENGTH,
    frame_step=(EXPECTED_WAVEFORM_LENGTH // 2),
    keep_yamnet_and_custom_heads=True
)
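A minimal sketch of how this spec is typically used with the tflite_model_maker audio_classifier workflow. The dataset path `./bird_sounds` is hypothetical (one sub-folder of .wav files per class); the exact arguments to `create` and `export` may differ in your environment.

```python
# Sketch: train a custom audio classifier on top of YAMNet embeddings.
# Assumes tflite_model_maker is installed; ./bird_sounds is a hypothetical
# directory with one sub-folder of .wav files per class.
from tflite_model_maker import audio_classifier

spec = audio_classifier.YamNetSpec()  # defaults shown in the signature above

data = audio_classifier.DataLoader.from_folder(spec, './bird_sounds')
train_data, test_data = data.split(0.8)

model = audio_classifier.create(train_data, spec, epochs=10)
model.evaluate(test_data)

# Exports a .tflite model whose input is raw audio samples (see export_tflite below).
model.export(export_dir='./exported_model')
```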
| Attributes | |
|---|---|
| `target_sample_rate` | |
Methods
create_model
create_model(
num_classes, train_whole_model=False
)
create_serving_model
create_serving_model(
training_model
)
Create a model for serving.
export_tflite
export_tflite(
model,
tflite_filepath,
with_metadata=True,
export_metadata_json_file=True,
index_to_label=None,
quantization_config=None
)
Converts the retrained model to TFLite format and saves it.
This method overrides the default CustomModel._export_tflite
method and includes the spectrogram extraction in the model.
The exported model has input shape (1, number of wav samples).
| Args | |
|---|---|
| `model` | An instance of the Keras classification model to be exported. |
| `tflite_filepath` | File path to save the TFLite model. |
| `with_metadata` | Whether the output TFLite model contains metadata. |
| `export_metadata_json_file` | Whether to export metadata in a JSON file. If True, the metadata is exported to the same directory as the TFLite model. Used only if `with_metadata` is True. |
| `index_to_label` | A list that maps from index to label class name. |
| `quantization_config` | Configuration for post-training quantization. |
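In practice, `export_tflite` is usually invoked indirectly through the trained model's `export()` call rather than directly on the spec. The sketch below assumes the `model` object returned by `audio_classifier.create()` and the `QuantizationConfig` helper from `tflite_model_maker.config`; file names are illustrative.

```python
# Sketch: export with post-training quantization.
# Assumes `model` is the result of audio_classifier.create() above.
from tflite_model_maker.config import QuantizationConfig

# Dynamic-range quantization; other factory methods (e.g. for_float16) exist.
quant_config = QuantizationConfig.for_dynamic()

# export() delegates to the spec's export_tflite(), producing a .tflite file
# that bundles the spectrogram extraction and, optionally, metadata.
model.export(
    export_dir='./exported_model',
    tflite_filename='yamnet_classifier.tflite',
    quantization_config=quant_config)
```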
get_default_quantization_config
get_default_quantization_config()
Gets the default quantization configuration.
preprocess_ds
preprocess_ds(
ds, is_training=False, cache_fn=None
)
Returns a preprocessed dataset.
run_classifier
run_classifier(
model, epochs, train_ds, validation_ds, **kwargs
)
| Class Variables | |
|---|---|
| EMBEDDING_SIZE | `1024` |
| EXPECTED_WAVEFORM_LENGTH | `15600` |
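YAMNet operates on 16 kHz mono audio, so the constants above translate into frame durations as in the short sketch below (the 16 kHz rate is YAMNet's documented input rate, not a value stated in this table).

```python
# Sketch: what the class variables imply for frame timing, assuming
# YAMNet's 16 kHz mono input rate.
SAMPLE_RATE = 16000                       # YAMNet input sample rate
EXPECTED_WAVEFORM_LENGTH = 15600          # class variable above

frame_length_s = EXPECTED_WAVEFORM_LENGTH / SAMPLE_RATE        # 0.975 s per frame
frame_step_s = (EXPECTED_WAVEFORM_LENGTH // 2) / SAMPLE_RATE   # 0.4875 s (50% overlap, the default frame_step)
print(frame_length_s, frame_step_s)
```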