tf.contrib.linear_optimizer.SDCAOptimizer
Wrapper class for SDCA optimizer.
View source on GitHub: https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/linear_optimizer/python/sdca_optimizer.py#L31-L278

    tf.contrib.linear_optimizer.SDCAOptimizer(
        example_id_column, num_loss_partitions=1, num_table_shards=None,
        symmetric_l1_regularization=0.0, symmetric_l2_regularization=1.0,
        adaptive=True, partitioner=None
    )
The wrapper is currently meant for use as an optimizer within a tf.learn
Estimator.
Example usage:

    real_feature_column = real_valued_column(...)
    sparse_feature_column = sparse_column_with_hash_bucket(...)
    sdca_optimizer = linear.SDCAOptimizer(example_id_column='example_id',
                                          num_loss_partitions=1,
                                          num_table_shards=1,
                                          symmetric_l2_regularization=2.0)
    classifier = tf.contrib.learn.LinearClassifier(
        feature_columns=[real_feature_column, sparse_feature_column],
        weight_column_name=...,
        optimizer=sdca_optimizer)
    classifier.fit(input_fn_train, steps=50)
    classifier.evaluate(input_fn=input_fn_eval)
Here the expectation is that the `input_fn_*` functions passed to train and evaluate return a pair (dict, label_tensor), where the dict has `example_id_column` as a key whose value is a `Tensor` of shape [batch_size] and dtype string.
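For illustration, a minimal train input_fn satisfying this contract might look as follows (the 'age' feature and the id values are hypothetical):

    import tensorflow as tf

    def input_fn_train():
      # The dict must map the example_id_column ('example_id' here) to a
      # string Tensor of shape [batch_size]; the other entries are ordinary
      # feature tensors (the 'age' column below is hypothetical).
      features = {
          'example_id': tf.constant(['id_0', 'id_1', 'id_2']),
          'age': tf.constant([[18.0], [25.0], [31.0]]),
      }
      labels = tf.constant([[1], [0], [1]])
      return features, labels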
`num_loss_partitions` defines the number of partitions of the global loss function and should be set to `(#concurrent train ops per worker) x (#workers)`. Convergence of the (global) loss is guaranteed if `num_loss_partitions` is greater than or equal to this product. Larger values for `num_loss_partitions` lead to slower convergence. The recommended value for `num_loss_partitions` in `tf.learn` (where currently there is one process per worker) is the number of workers running the train steps. It defaults to 1 (single machine).
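As a concrete (hypothetical) sizing example:

    # Hypothetical cluster: 4 workers, each running 1 concurrent train op.
    num_workers = 4
    concurrent_train_ops_per_worker = 1
    # Convergence is guaranteed when num_loss_partitions >= this product;
    # with one process per worker this equals the number of workers.
    num_loss_partitions = concurrent_train_ops_per_worker * num_workers  # 4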
`num_table_shards` defines the number of shards for the internal state table, typically set to match the number of parameter servers for large data sets. You can also specify a `partitioner` object to partition the primal weights during training (the `div` partitioning strategy will be used).
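For example, the primal weights could be split across parameter servers; a minimal sketch, assuming tf.fixed_size_partitioner and a hypothetical num_ps_replicas:

    import tensorflow as tf

    num_ps_replicas = 3  # hypothetical number of parameter servers

    sdca_optimizer = tf.contrib.linear_optimizer.SDCAOptimizer(
        example_id_column='example_id',
        num_table_shards=num_ps_replicas,
        # Split each primal weight variable into num_ps_replicas shards;
        # SDCA applies the 'div' partitioning strategy to these variables.
        partitioner=tf.fixed_size_partitioner(num_shards=num_ps_replicas))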
Attributes

- `adaptive`
- `example_id_column`
- `num_loss_partitions`
- `num_table_shards`
- `partitioner`
- `symmetric_l1_regularization`
- `symmetric_l2_regularization`
Methods

get_name

View source: https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/linear_optimizer/python/sdca_optimizer.py#L87-L88

    get_name()

get_train_step

View source: https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/linear_optimizer/python/sdca_optimizer.py#L118-L278

    get_train_step(
        columns_to_variables, weight_column_name, loss_type, features, targets,
        global_step
    )

Returns the training operation of an SdcaModel optimizer.
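get_train_step is normally invoked by the estimator's model function rather than by user code. A hedged sketch of such a call, with hypothetical placeholder arguments:

    # All argument values below are hypothetical placeholders; in practice
    # tf.contrib.learn.LinearClassifier assembles them internally.
    train_op = sdca_optimizer.get_train_step(
        columns_to_variables=columns_to_variables,  # dict: FeatureColumn -> variables
        weight_column_name=None,
        loss_type='logistic_loss',  # e.g. for binary classification
        features=features,
        targets=targets,
        global_step=tf.train.get_global_step())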