tfm.nlp.models.BertSpanLabeler

Span labeler model based on a BERT-style transformer-based encoder.

This is an implementation of the network structure surrounding a transformer encoder as described in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" (https://arxiv.org/abs/1810.04805).

The BertSpanLabeler allows a user to pass in a transformer encoder and instantiates a span labeling network based on a single dense layer; a construction sketch follows the argument list below.

Args
network A transformer network. This network should output a sequence output and a classification output. Furthermore, it should expose its embedding table via a get_embedding_table method.
initializer The initializer (if any) to use in the span labeling network. Defaults to a Glorot uniform initializer.
output The output style for this network. Can be either 'logits' or 'predictions'.
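
The following is a minimal construction sketch, not the library's canonical example. It assumes the tensorflow_models pip package (imported as tfm) and uses tfm.nlp.networks.BertEncoder as the encoder; the layer sizes are illustrative placeholders.

```python
import tensorflow_models as tfm

# A small BERT-style encoder. The sizes are illustrative, not the
# standard BERT-base configuration.
encoder = tfm.nlp.networks.BertEncoder(
    vocab_size=30522,
    hidden_size=128,
    num_layers=2,
    num_attention_heads=2)

# Wrap the encoder with a span labeling head: a single dense layer that
# produces per-token start and end scores.
span_labeler = tfm.nlp.models.BertSpanLabeler(
    network=encoder, output='logits')
```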

Attributes
checkpoint_items A dictionary of sub-objects to checkpoint, such as the underlying encoder.
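
As a sketch of typical use, the dictionary can be unpacked into a tf.train.Checkpoint so the encoder weights can later be restored into a different task model. This continues the construction example above; the checkpoint path is illustrative.

```python
import tensorflow as tf

# Checkpoint the sub-objects exposed by the model (here, the encoder) so
# they can be reloaded independently of the span labeling head.
checkpoint = tf.train.Checkpoint(**span_labeler.checkpoint_items)
checkpoint.save('/tmp/bert_span_labeler/ckpt')  # illustrative path
```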

Methods

call

Calls the model on new inputs and returns the outputs as tensors.

In this case call() just reapplies all ops in the graph to the new inputs (i.e., it builds a new computation graph from the provided inputs).

Args
inputs Input tensor, or dict/list/tuple of input tensors.
training Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
mask A mask or list of masks. A mask can be either a boolean tensor or None (no mask). For more details, see the Keras guide on masking and padding.

Returns
A tensor if there is a single output, or a list of tensors if there is more than one output.
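
A short usage sketch, continuing the construction example above. The dict input keys follow the BertEncoder convention (input_word_ids, input_mask, input_type_ids), and the model is assumed to return a list of two tensors holding per-token start and end logits; both details depend on the encoder and library version.

```python
import tensorflow as tf

batch_size, seq_length = 2, 16
inputs = dict(
    input_word_ids=tf.random.uniform(
        (batch_size, seq_length), maxval=30522, dtype=tf.int32),
    input_mask=tf.ones((batch_size, seq_length), dtype=tf.int32),
    input_type_ids=tf.zeros((batch_size, seq_length), dtype=tf.int32))

# The span labeling head emits one start score and one end score per token.
start_logits, end_logits = span_labeler(inputs, training=False)
print(start_logits.shape)  # (2, 16)
```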