Creates a mask scoring layer.
tfm.vision.heads.MaskScoring(
num_classes: int,
fc_input_size: List[int],
num_convs: int = 3,
num_filters: int = 256,
use_depthwise_convolution: bool = False,
fc_dims: int = 1024,
num_fcs: int = 2,
activation: str = 'relu',
use_sync_bn: bool = False,
norm_momentum: float = 0.99,
norm_epsilon: float = 0.001,
kernel_regularizer: Optional[tf.keras.regularizers.Regularizer] = None,
bias_regularizer: Optional[tf.keras.regularizers.Regularizer] = None,
**kwargs
)
This implements the mask scoring layer from the paper:
Zhaojin Huang, Lichao Huang, Yongchao Gong, Chang Huang, Xinggang Wang. Mask Scoring R-CNN. (https://arxiv.org/pdf/1903.00241.pdf)
Args | |
---|---|
`num_classes` | An `int` for the number of classes.
`fc_input_size` | A `List` of `int` for the input size of the fully connected layers.
`num_convs` | An `int` for the number of conv layers.
`num_filters` | An `int` for the number of filters for conv layers.
`use_depthwise_convolution` | A `bool`, whether or not to use depthwise convs.
`fc_dims` | An `int` for the number of filters of each fully connected layer.
`num_fcs` | An `int` for the number of fully connected layers.
`activation` | A `str` name of the activation function.
`use_sync_bn` | A `bool`, whether or not to use sync batch normalization.
`norm_momentum` | A `float` for the momentum in BatchNorm. Defaults to 0.99.
`norm_epsilon` | A `float` for the epsilon value in BatchNorm. Defaults to 0.001.
`kernel_regularizer` | A `tf.keras.regularizers.Regularizer` object for Conv2D. Default is None.
`bias_regularizer` | A `tf.keras.regularizers.Regularizer` object for Conv2D.
`**kwargs` | Additional keyword arguments to be passed.
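A minimal construction sketch is shown below. The import path assumes the `tf-models-official` package (imported as `tensorflow_models`); the class count, FC input size, and regularizer strength are illustrative assumptions, not documented defaults.

```python
import tensorflow as tf
import tensorflow_models as tfm

# Illustrative configuration (assumed values): 8 classes, and mask features
# resized to 16x16 before flattening into the fully connected layers.
mask_scoring_head = tfm.vision.heads.MaskScoring(
    num_classes=8,
    fc_input_size=[16, 16],
    num_convs=3,
    num_filters=256,
    fc_dims=1024,
    num_fcs=2,
    activation='relu',
    kernel_regularizer=tf.keras.regularizers.L2(1e-4),
)
```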
Methods
call
call(
inputs: tf.Tensor, training: bool = None
)
Forward pass of the mask scoring head.
Args | |
---|---|
`inputs` | A `tf.Tensor` of the shape [batch_size, height, width, num_classes], representing the segmentation logits.
`training` | A `bool` indicating whether it is in training mode.
Returns | |
---|---|
`mask_scores` | A `tf.Tensor` of predicted mask scores of shape [batch_size, num_classes].
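A short usage sketch of `call`, continuing the head constructed above; the batch size and spatial dimensions of the dummy segmentation logits are illustrative assumptions.

```python
# Dummy segmentation logits (assumed shapes): batch of 2, 64x64 spatial size,
# 8 classes to match the head configured above.
segmentation_logits = tf.random.normal([2, 64, 64, 8])

# The head returns one predicted mask score per class for each example.
mask_scores = mask_scoring_head(segmentation_logits, training=False)
print(mask_scores.shape)  # (2, 8)
```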