The TPUEmbedding mid-level API running on CPU for serving.
tf.tpu.experimental.embedding.TPUEmbeddingForServing(
    feature_config: Union[tf.tpu.experimental.embedding.FeatureConfig, Iterable],
    optimizer: Optional[tpu_embedding_v2_utils._Optimizer],
    experimental_sparsecore_restore_info: Optional[Dict[str, Any]] = None
)
You can first train your model using the TPUEmbedding class and save the checkpoint. Then use this class to restore the checkpoint and serve the model on CPU.

First, train a model and save the checkpoint.
model = model_fn(...)
strategy = tf.distribute.TPUStrategy(...)
with strategy.scope():
  embedding = tf.tpu.experimental.embedding.TPUEmbedding(
      feature_config=feature_config,
      optimizer=tf.tpu.experimental.embedding.SGD(0.1))

# Your custom training code.

checkpoint = tf.train.Checkpoint(model=model, embedding=embedding)
checkpoint.save(...)
Then restore the checkpoint and serve.

# Restore the model on CPU.
model = model_fn(...)
embedding = tf.tpu.experimental.embedding.TPUEmbeddingForServing(
    feature_config=feature_config,
    optimizer=tf.tpu.experimental.embedding.SGD(0.1))
checkpoint = tf.train.Checkpoint(model=model, embedding=embedding)
checkpoint.restore(...)

result = embedding(...)
tables = embedding.embedding_tables
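Both snippets above assume a `feature_config` has already been built. A minimal sketch of one, assuming a single table and a single feature (the sizes and the `video_table`/`watched` names are illustrative, not part of the API):

table = tf.tpu.experimental.embedding.TableConfig(
    vocabulary_size=1024,  # illustrative vocabulary size
    dim=16,                # illustrative embedding dimension
    combiner='mean',
    name='video_table')
feature_config = {
    'watched': tf.tpu.experimental.embedding.FeatureConfig(
        table=table, name='watched')}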
| Args | |
|---|---|
| `feature_config` | A nested structure of `tf.tpu.experimental.embedding.FeatureConfig` configs. |
| `optimizer` | An instance of one of `tf.tpu.experimental.embedding.SGD`, `tf.tpu.experimental.embedding.Adagrad` or `tf.tpu.experimental.embedding.Adam`. When not created under TPUStrategy, this may be set to `None` to avoid creating the optimizer slot variables; this is useful for reducing memory consumption when exporting the model for serving, where slot variables aren't needed. |
| `experimental_sparsecore_restore_info` | Information from the SparseCore training, required to restore from checkpoint for serving (such as the number of TPU devices used, `num_tpu_devices`). |
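As noted above, `optimizer` may be `None` when restoring purely for serving. A sketch under that assumption (`checkpoint_path` is a placeholder; `expect_partial()` silences warnings about the slot variables that are intentionally left unrestored):

# No optimizer: no slot variables (e.g. Adagrad accumulators) are created,
# which saves host memory when exporting for serving.
embedding = tf.tpu.experimental.embedding.TPUEmbeddingForServing(
    feature_config=feature_config,
    optimizer=None)
checkpoint = tf.train.Checkpoint(embedding=embedding)
checkpoint.restore(checkpoint_path).expect_partial()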
| Raises | |
|---|---|
| `RuntimeError` | If created under TPUStrategy. |
| Attributes | |
|---|---|
| `embedding_tables` | Returns a dict of embedding tables, keyed by `TableConfig`. |
Methods
build
build()
Create variables and slot variables for TPU embeddings.
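Calling `build()` explicitly materializes the table (and any slot) variables; a sketch of inspecting them right after building, assuming the `embedding` instance from the serving example above:

embedding.build()                    # create the table variables now
tables = embedding.embedding_tables  # dict keyed by TableConfig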
embedding_lookup
embedding_lookup(
    features: Any, weights: Optional[Any] = None
) -> Any
Apply standard lookup ops on CPU.
| Args | |
|---|---|
| `features` | A nested structure of `tf.Tensor`s, `tf.SparseTensor`s or `tf.RaggedTensor`s, with the same structure as `feature_config`. Inputs will be downcast to `tf.int32`. Only one type out of `tf.SparseTensor` or `tf.RaggedTensor` is supported per call. |
| `weights` | If not `None`, a nested structure of `tf.Tensor`s, `tf.SparseTensor`s or `tf.RaggedTensor`s, matching the above, except that the tensors should be of float type (and they will be downcast to `tf.float32`). For `tf.SparseTensor`s we assume the `indices` are the same for the parallel entries from `features`, and similarly for `tf.RaggedTensor`s we assume the `row_splits` are the same. |
| Returns |
|---|
| A nested structure of `tf.Tensor`s with the same structure as the input `features`. |
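A sketch of a weighted lookup, reusing the illustrative `watched` feature from above; note the weights reuse the ids' `indices`, as required:

# Two examples (rows); ids for the 'watched' feature as a SparseTensor.
ids = tf.sparse.SparseTensor(
    indices=[[0, 0], [1, 0], [1, 1]],
    values=tf.constant([3, 7, 5], dtype=tf.int32),
    dense_shape=[2, 2])
# Per-id weights: same indices as `ids`, float values.
weights = tf.sparse.SparseTensor(
    indices=ids.indices,
    values=tf.constant([1.0, 0.5, 0.5], dtype=tf.float32),
    dense_shape=ids.dense_shape)
activations = embedding.embedding_lookup(
    {'watched': ids}, weights={'watched': weights})
# activations['watched'] is dense with shape [2, 16] under the 'mean' combiner.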
__call__
__call__(
    features: Any, weights: Optional[Any] = None
) -> Any
Call the mid-level API to do an embedding lookup.
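Calling the instance directly performs the same lookup as `embedding_lookup`, so the two calls below should be interchangeable (continuing the `watched` example):

activations = embedding({'watched': ids})
activations = embedding.embedding_lookup({'watched': ids})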