tfp.sts.build_factored_surrogate_posterior
Build a variational posterior that factors over model parameters.
    tfp.sts.build_factored_surrogate_posterior(
        model, batch_shape=(), seed=None, name=None
    )
Used in the notebooks: Structural Time Series Modeling Case Studies: Atmospheric CO2 and Electricity Demand (https://www.tensorflow.org/probability/examples/Structural_Time_Series_Modeling_Case_Studies_Atmospheric_CO2_and_Electricity_Demand)
The surrogate posterior consists of independent Normal distributions for each parameter, with trainable `loc` and `scale`, transformed using the parameter's `bijector` to the appropriate support space for that parameter.
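Roughly speaking, a single factor for a positive-valued parameter such as a scale resembles the following sketch (this illustrates the idea only; it is not the library's actual implementation):

    import tensorflow as tf
    import tensorflow_probability as tfp
    tfd = tfp.distributions
    tfb = tfp.bijectors

    # Sketch: a trainable Normal in unconstrained space, pushed through a
    # Softplus bijector onto the parameter's positive support.
    loc = tf.Variable(0., name='loc')
    scale = tfp.util.TransformedVariable(1., bijector=tfb.Softplus(),
                                         name='scale')
    one_factor = tfd.TransformedDistribution(
        tfd.Normal(loc=loc, scale=scale),
        bijector=tfb.Softplus())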
Args

- `model`: An instance of `StructuralTimeSeries` representing a time-series model. This represents a joint distribution over time series and their parameters with batch shape `[b1, ..., bN]`.
- `batch_shape`: Batch shape (Python `tuple`, `list`, or `int`) of initial states to optimize in parallel. Default value: `()` (i.e., just run a single optimization).
- `seed`: PRNG seed; see `tfp.random.sanitize_seed` for details.
- `batch_shape` example follows below.
- `name`: Python `str` name prefixed to ops created by this function. Default value: `None` (i.e., 'build_factored_surrogate_posterior').
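A nonscalar `batch_shape` initializes a batch of independent surrogates, so that several randomly initialized variational optimizations can proceed in parallel. As a hedged sketch (the shape `[10]` is purely illustrative):

    # Sketch: build ten independently initialized surrogate posteriors
    # whose optimizations will run in parallel.
    surrogate_posterior = tfp.sts.build_factored_surrogate_posterior(
        model=model, batch_shape=[10])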
Returns

- `variational_posterior`: `tfd.JointDistributionNamed` defining a trainable surrogate posterior over model parameters. Samples from this distribution are Python `dict`s with Python `str` parameter names as keys.
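Because samples are `dict`s keyed by parameter name, one quick sanity check is to draw a few samples and print their structure. A minimal sketch (the names printed depend on the model's components):

    # Sketch: each sample maps a str parameter name to a Tensor of draws.
    samples = variational_posterior.sample(3)
    for param_name, values in samples.items():
      print(param_name, values.shape)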
Examples
Assume we've built a structural time-series model:
    day_of_week = tfp.sts.Seasonal(
        num_seasons=7,
        observed_time_series=observed_time_series,
        name='day_of_week')
    local_linear_trend = tfp.sts.LocalLinearTrend(
        observed_time_series=observed_time_series,
        name='local_linear_trend')
    model = tfp.sts.Sum(components=[day_of_week, local_linear_trend],
                        observed_time_series=observed_time_series)
To fit the model to data, we define a surrogate posterior and fit it
by optimizing a variational bound:
    surrogate_posterior = tfp.sts.build_factored_surrogate_posterior(
        model=model)
    loss_curve = tfp.vi.fit_surrogate_posterior(
        target_log_prob_fn=model.joint_distribution(observed_time_series).log_prob,
        surrogate_posterior=surrogate_posterior,
        optimizer=tf.optimizers.Adam(learning_rate=0.1),
        num_steps=200)
    posterior_samples = surrogate_posterior.sample(50)

    # In graph mode, we would need to write:
    # with tf.control_dependencies([loss_curve]):
    #   posterior_samples = surrogate_posterior.sample(50)
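The returned `loss_curve` traces the variational loss (negative ELBO) over optimization steps, so plotting it is a quick convergence check. A sketch, assuming `matplotlib` is available:

    # Sketch: plot the variational loss to verify the optimization converged.
    import matplotlib.pyplot as plt
    plt.plot(loss_curve)
    plt.xlabel('step')
    plt.ylabel('negative ELBO')
    plt.show()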
For more control, we can also build and optimize a variational loss
manually:
    @tf.function(autograph=False)  # Ensure the loss is computed efficiently.
    def loss_fn():
      return tfp.vi.monte_carlo_variational_loss(
          model.joint_distribution(observed_time_series).log_prob,
          surrogate_posterior,
          sample_size=10)

    optimizer = tf.optimizers.Adam(learning_rate=0.1)
    for step in range(200):
      with tf.GradientTape() as tape:
        loss = loss_fn()
      grads = tape.gradient(loss, surrogate_posterior.trainable_variables)
      optimizer.apply_gradients(
          zip(grads, surrogate_posterior.trainable_variables))
      if step % 20 == 0:
        print('step {} loss {}'.format(step, loss))

    posterior_samples = surrogate_posterior.sample(50)
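With posterior samples in hand, a common next step (not part of this function, but standard STS usage) is to forecast future timesteps with `tfp.sts.forecast`. A sketch, reusing `model`, `observed_time_series`, and `posterior_samples` from above:

    # Sketch: forecast future steps using the fitted parameter samples;
    # the horizon of 30 steps is an illustrative choice.
    forecast_dist = tfp.sts.forecast(
        model,
        observed_time_series=observed_time_series,
        parameter_samples=posterior_samples,
        num_steps_forecast=30)
    forecast_mean = forecast_dist.mean()
    forecast_scale = forecast_dist.stddev()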