tfp.experimental.sts_gibbs.one_step_predictive
Constructs a one-step-ahead predictive distribution at every timestep.
    tfp.experimental.sts_gibbs.one_step_predictive(
        model,
        posterior_samples,
        num_forecast_steps=0,
        original_mean=0.0,
        original_scale=1.0,
        thin_every=1,
        use_zero_step_prediction=False
    )
Unlike the generic `tfp.sts.one_step_predictive`, this method uses the latent levels from Gibbs sampling to efficiently construct a predictive distribution that mixes over posterior samples. The predictive distribution may also include additional forecast steps.
This method returns the predictive distribution for each timestep given previous timesteps and sampled model parameters, `p(observed_time_series[t] | observed_time_series[:t], weights, observation_noise_scale)`. Note that the posterior values of the weights and noise scale will in general be informed by observations from all timesteps, *including the step being predicted*, so this is not a strictly kosher probabilistic quantity; in general we assume that it's close, i.e., that the step being predicted had very small individual impact on the overall parameter posterior.
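A minimal usage sketch follows. It assumes `model` was built by this module's Gibbs-sampling model builder and that `observed_time_series` already exists; the fitting call uses `tfp.experimental.sts_gibbs.fit_with_gibbs_sampling`, whose specific argument values here are illustrative rather than prescriptive.

```python
import tensorflow_probability as tfp

sts_gibbs = tfp.experimental.sts_gibbs

# `model` and `observed_time_series` are assumed to exist already; the model
# must be of the form produced by this module's model-building helper.
posterior_samples = sts_gibbs.fit_with_gibbs_sampling(
    model,
    observed_time_series,
    num_results=500,       # posterior samples to retain
    num_warmup_steps=200,  # burn-in iterations to discard
    seed=(42, 0))

# One-step-ahead predictive distribution at every observed timestep,
# mixing over every 5th posterior sample to keep the mixture small.
predictive_dist = sts_gibbs.one_step_predictive(
    model,
    posterior_samples,
    thin_every=5)
```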
| Args | |
| :--- | :--- |
| `model` | A `tfp.sts.StructuralTimeSeries` model instance. This must be of the form constructed by `build_model_for_gibbs_fitting`. |
| `posterior_samples` | A `GibbsSamplerState` instance in which each element is a `Tensor` with initial dimension of size `num_samples`. |
| `num_forecast_steps` | Python `int` number of additional forecast steps to append. Default value: `0`. |
| `original_mean` | Optional scalar float `Tensor`, added to the predictive distribution to undo the effect of input normalization (see the sketch following this table). Default value: `0.`. |
| `original_scale` | Optional scalar float `Tensor`, used to rescale the predictive distribution to undo the effect of input normalization (see the sketch following this table). Default value: `1.`. |
| `thin_every` | Optional Python `int` factor by which to thin the posterior samples, to reduce the complexity of the predictive distribution. For example, if `thin_every=10`, every 10th sample will be used. Default value: `1`. |
| `use_zero_step_prediction` | If true, instead of using the local level and trend from the previous timestep, use the local level from the same timestep. Default value: `False`. |
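The normalization arguments are easiest to see in context. Below is a hedged sketch, assuming the raw series was standardized before model building and Gibbs sampling; the variable names (`raw_observations`, `series_mean`, `series_scale`) are illustrative, not part of the API.

```python
import tensorflow as tf
import tensorflow_probability as tfp

# Standardize the raw series before building and fitting the model on it.
raw_series = tf.convert_to_tensor(raw_observations, dtype=tf.float32)
series_mean = tf.reduce_mean(raw_series)
series_scale = tf.math.reduce_std(raw_series)
normalized_series = (raw_series - series_mean) / series_scale

# ... build `model` on `normalized_series` and run Gibbs sampling to obtain
# `posterior_samples` ...

# Passing the original mean and scale shifts and rescales the predictive
# distribution back into the units of the raw series.
predictive_dist = tfp.experimental.sts_gibbs.one_step_predictive(
    model,
    posterior_samples,
    original_mean=series_mean,
    original_scale=series_scale)
```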
| Returns | |
| :--- | :--- |
| `predictive_dist` | A `tfd.MixtureSameFamily` instance of event shape `[num_timesteps + num_forecast_steps]` representing the predictive distribution of each timestep given previous timesteps. |
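Since the result is an ordinary `tfd.MixtureSameFamily` distribution, its standard methods can be used directly. For example, a brief sketch of forming pointwise two-sigma intervals around the one-step predictions:

```python
# Pointwise predictive means and standard deviations over all
# num_timesteps + num_forecast_steps steps.
predictive_means = predictive_dist.mean()
predictive_stddevs = predictive_dist.stddev()

# Approximate pointwise two-sigma predictive intervals.
lower = predictive_means - 2. * predictive_stddevs
upper = predictive_means + 2. * predictive_stddevs
```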