Weighted cross-entropy loss for a sequence of logits.
```python
tfa.seq2seq.SequenceLoss(
    average_across_timesteps: bool = False,
    average_across_batch: bool = False,
    sum_over_timesteps: bool = True,
    sum_over_batch: bool = True,
    softmax_loss_function: Optional[Callable] = None,
    name: Optional[str] = None
)
```
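A minimal usage sketch (the tensor sizes and variable names are illustrative; per the usual seq2seq convention, logits have shape `[batch, time, vocab]`, while the integer targets and the per-token weight mask have shape `[batch, time]`, with the mask passed through `sample_weight`):

```python
import tensorflow as tf
import tensorflow_addons as tfa

batch_size, max_time, vocab_size = 2, 5, 10  # illustrative sizes

# Logits over the vocabulary for every timestep of every batch entry.
logits = tf.random.normal([batch_size, max_time, vocab_size])
# Integer target token ids, shape [batch, time].
targets = tf.random.uniform(
    [batch_size, max_time], maxval=vocab_size, dtype=tf.int64)
# Per-token weights, typically 1.0 for real tokens and 0.0 for padding.
weights = tf.ones([batch_size, max_time])

# With the default flags (sum over non-padding timesteps, sum over the
# batch), the call returns a scalar loss.
loss_fn = tfa.seq2seq.SequenceLoss()
loss = loss_fn(targets, logits, sample_weight=weights)
```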
| Args | |
|---|---|
| `reduction` | Type of `tf.keras.losses.Reduction` to apply to the loss. Defaults to `AUTO`, meaning the reduction option is determined by the usage context; in almost all cases this resolves to `SUM_OVER_BATCH_SIZE`. When used under a `tf.distribute.Strategy`, except via `Model.compile()` and `Model.fit()`, `AUTO` or `SUM_OVER_BATCH_SIZE` will raise an error. See the custom training tutorial for more details. |
| `name` | Optional name for the instance. |
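To illustrate the `reduction` constraint above, here is a hedged sketch of a custom-training-loop setup under `tf.distribute.Strategy`. It uses `tf.keras.losses.SparseCategoricalCrossentropy` as a stand-in, since this behavior comes from the base `tf.keras.losses.Loss` class rather than from `SequenceLoss` itself:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH_SIZE = 64  # illustrative value

with strategy.scope():
    # AUTO / SUM_OVER_BATCH_SIZE would raise here; an explicit
    # NONE (or SUM) reduction is required.
    loss_obj = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

def compute_loss(labels, logits):
    per_example = loss_obj(labels, logits)
    # Average over the *global* batch size, not the per-replica size.
    return tf.nn.compute_average_loss(
        per_example, global_batch_size=GLOBAL_BATCH_SIZE)
```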
Methods
from_config
```python
@classmethod
from_config(
    config
)
```

Instantiates a `Loss` from its config (the output of `get_config()`).
| Args | |
|---|---|
| `config` | Output of `get_config()`. |

| Returns | |
|---|---|
| A `Loss` instance. |
get_config

```python
get_config()
```

Returns the config dictionary for a `Loss` instance.
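Both methods are inherited from `tf.keras.losses.Loss`. A minimal round-trip sketch with a stock Keras loss (used here because the exact config keys of `SequenceLoss` are not spelled out on this page):

```python
import tensorflow as tf

loss_fn = tf.keras.losses.MeanSquaredError(name="mse")
config = loss_fn.get_config()   # plain dict, suitable for serialization
restored = tf.keras.losses.MeanSquaredError.from_config(config)
```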
__call__

```python
__call__(
    y_true, y_pred, sample_weight=None
)
```

Overrides the parent `__call__` to apply a customized reduce behavior.
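For instance, a sketch reusing the `logits`, `targets`, and `weights` tensors from the first example above: with every reduction flag disabled, the call returns the unreduced weighted cross-entropy per token rather than a scalar.

```python
# All four reduction flags off: the result has shape [batch, time].
per_token_loss_fn = tfa.seq2seq.SequenceLoss(
    average_across_timesteps=False,
    average_across_batch=False,
    sum_over_timesteps=False,
    sum_over_batch=False)
per_token = per_token_loss_fn(targets, logits, sample_weight=weights)
```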