Constructs a one-step-ahead predictive distribution at every timestep.
```python
tfp.experimental.sts_gibbs.one_step_predictive(
    model,
    posterior_samples,
    num_forecast_steps=0,
    original_mean=0.0,
    original_scale=1.0,
    thin_every=1,
    use_zero_step_prediction=False
)
```
Unlike the generic `tfp.sts.one_step_predictive`, this method uses the latent levels from Gibbs sampling to efficiently construct a predictive distribution that mixes over posterior samples. The predictive distribution may also include additional forecast steps.
This method returns the predictive distribution for each timestep given previous timesteps and sampled model parameters, `p(observed_time_series[t] | observed_time_series[:t], weights, observation_noise_scale)`. Note that the posterior values of the weights and noise scale will in general be informed by observations from all timesteps, including the step being predicted, so this is not a strictly valid predictive distribution. In practice we assume it is close, i.e., that any single predicted step has only a very small influence on the overall parameter posterior.
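To make the mixture construction concrete, here is a minimal pure-Python sketch (not the library's implementation) of the quantity being built: at each timestep, the predictive distribution is an equal-weight mixture of Normals, one component per posterior draw, centered at that draw's sampled latent level with that draw's sampled observation noise scale. The helper below computes the mixture's per-timestep mean and variance via the law of total variance; the function name and list-of-lists layout are illustrative assumptions.

```python
from statistics import fmean, pvariance


def mixture_predictive_moments(levels, noise_scales):
    """Per-timestep moments of an equal-weight Normal mixture over posterior draws.

    Args:
      levels: levels[s][t] is the sampled latent level for posterior draw s
        at timestep t (list of lists of floats).
      noise_scales: noise_scales[s] is the sampled observation noise scale
        for posterior draw s.

    Returns:
      (means, variances) for the mixture
      p(y[t]) = (1/S) * sum_s Normal(levels[s][t], noise_scales[s]).
    """
    num_samples = len(levels)
    num_timesteps = len(levels[0])
    # Average component variance is shared across timesteps here, since each
    # draw's noise scale applies to every step of that draw's trajectory.
    avg_component_var = fmean(scale**2 for scale in noise_scales)
    means, variances = [], []
    for t in range(num_timesteps):
        component_means = [levels[s][t] for s in range(num_samples)]
        mean_t = fmean(component_means)
        # Law of total variance: E[component var] + Var[component means].
        var_t = avg_component_var + pvariance(component_means, mu=mean_t)
        means.append(mean_t)
        variances.append(var_t)
    return means, variances
```

The actual method returns the full `tfd.MixtureSameFamily` distribution rather than just its moments, so quantiles and log probabilities are available as well.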
| Returns | |
|---|---|
| `predictive_dist` | A `tfd.MixtureSameFamily` instance of event shape `[num_timesteps + num_forecast_steps]` representing the predictive distribution of each timestep given previous timesteps. |