Add QAT support for the Keras SeparableConv1D layer.
tfmot.quantization.keras.experimental.default_n_bit.default_n_bit_transforms.SeparableConv1DQuantize(
    num_bits_weight: int = 8,
    num_bits_activation: int = 8
)
Transforms SeparableConv1D into a SeparableConv2D invocation. The Keras SeparableConv1D layer internally uses the same code as a SeparableConv2D layer; it simply expands and squeezes the tensor dimensions before and after the convolution. Applying this transform ensures the QAT handling for SeparableConv2D kicks in and handles the fake-quantization (FQ) op placement properly.
Input -> SeparableConv1D -> Output

becomes

Input -> Lambda(ExpandDims) -> SeparableConv2D -> Lambda(Squeeze) -> Output
Unlike SeparableConv2DQuantize, this does not break the layer into DepthwiseConv and Conv separately, since no DepthwiseConv1D exists.
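The expand/squeeze equivalence above can be illustrated without any TensorFlow at all. The sketch below (plain Python, not the tfmot implementation) shows why a 2-D convolution with a (1, k) kernel applied to a signal expanded to a single-row "image" reproduces the 1-D convolution:

```python
# Sketch only: demonstrates the ExpandDims -> Conv2D -> Squeeze trick on
# scalar signals with valid padding. None of this is tfmot or Keras API.

def conv1d(x, kernel):
    """Valid-padding 1-D correlation over a list of scalars."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def conv2d(x, kernel):
    """Valid-padding 2-D correlation; x and kernel are lists of rows."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(x), len(x[0])
    return [[sum(x[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

signal = [1.0, 2.0, 3.0, 4.0, 5.0]
kernel = [0.5, 1.0, -0.5]

# ExpandDims: treat the 1-D signal as a one-row image and the 1-D kernel
# as a (1, k) 2-D kernel.
expanded_result = conv2d([signal], [kernel])

# Squeeze: drop the height dimension to recover the 1-D output.
squeezed = expanded_result[0]

assert squeezed == conv1d(signal, kernel)
print(squeezed)  # -> [1.0, 2.0, 3.0]
```

The same dimension bookkeeping is what the Lambda(ExpandDims) and Lambda(Squeeze) layers perform around the SeparableConv2D call.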
custom_objects()

Dictionary of custom objects introduced by the replacement function.

A Transform may introduce custom classes and types unknown to Keras. This function should return a dictionary containing these objects in case such types are introduced. It allows model construction to serialize/deserialize these objects.

Returns: Custom objects introduced by the transform as a dictionary.
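A minimal sketch of the custom_objects() contract follows. MyQuantLayer and MyTransform are hypothetical stand-ins, not tfmot classes; the point is only the shape of the returned dictionary:

```python
# Hypothetical sketch: a transform whose replacement sub-graph uses a
# custom layer class, so custom_objects() must expose it for Keras
# serialization/deserialization. Not real tfmot API.

class MyQuantLayer:
    """Stand-in for a custom Keras layer used by the replacement."""
    pass

class MyTransform:
    def custom_objects(self):
        # Map class names to classes so model deserialization can resolve
        # types Keras does not know about.
        return {"MyQuantLayer": MyQuantLayer}

objs = MyTransform().custom_objects()
assert objs["MyQuantLayer"] is MyQuantLayer
```

In a real transform, this dictionary is merged into the custom objects used when the transformed model is reconstructed.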
pattern()

Return the LayerPattern to find in the model graph.
replacement( match_layer )
Generate a replacement sub-graph for the matched sub-graph.
The fundamental constraint of the replacement is that the replacement sub-graph should consume the same input tensors as the original sub-graph and produce a final list of tensors that match the original sub-graph's outputs in number and shape. Violating this constraint can crash model creation or introduce bugs into the new model graph.
Note: a list of input layers feeding into the sub-graph, and output layers feeding from the tip of the tree, may be added as parameters in the future; these would be needed for complex replace cases.
Args:
match_layer: Matched sub-graph based on pattern().
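The shape constraint on replacement() can be checked mechanically. The sketch below models sub-graphs as shape functions over plain tuples; the layer names in comments refer to this transform, but nothing here is tfmot API, and the filter count (16) and padding behavior are illustrative assumptions:

```python
# Toy illustration of the replacement() constraint: the replacement
# sub-graph must accept the same input and produce outputs matching the
# original in number and shape. Shapes are plain tuples, not tensors.

FILTERS = 16  # assumed filter count for the example

def original_subgraph_shape(in_shape):
    # e.g. SeparableConv1D with 'same' padding: steps preserved,
    # channels become FILTERS.
    batch, steps, _ = in_shape
    return (batch, steps, FILTERS)

def replacement_subgraph_shape(in_shape):
    batch, steps, ch = in_shape
    expanded = (batch, 1, steps, ch)            # Lambda(ExpandDims)
    conved = (batch, 1, steps, FILTERS)         # SeparableConv2D, (1, k) kernel
    squeezed = (conved[0], conved[2], conved[3])  # Lambda(Squeeze)
    return squeezed

inp = (32, 100, 3)
assert replacement_subgraph_shape(inp) == original_subgraph_shape(inp)
```

If the two shape functions ever disagreed, the swapped-in sub-graph would break downstream layers, which is exactly the failure mode the constraint above warns about.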