Sequential Monte Carlo transition kernel.
Inherits From: TransitionKernel
tfp.experimental.mcmc.SequentialMonteCarlo(
propose_and_update_log_weights_fn,
resample_fn=tfp.experimental.mcmc.resample_systematic,
resample_criterion_fn=tfp.experimental.mcmc.ess_below_threshold,
unbiased_gradients=True,
name=None
)
Sequential Monte Carlo maintains a population of weighted particles
representing samples from a sequence of target distributions. It is
not a calibrated MCMC kernel: the transitions step through a sequence of
target distributions, rather than trying to maintain a stationary
distribution.
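For example, here is a minimal bootstrap-filter-style sketch on a toy one-dimensional latent random walk with Gaussian observations. The model, constants, seeds, and step-indexing convention are illustrative assumptions rather than part of this API (in practice, tfp.experimental.mcmc.particle_filter constructs the proposal callable and the stepping loop for you), and WeightedParticles is assumed to expose particles and log_weights fields as described below.

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Toy model (illustrative): latent Gaussian random walk observed with noise.
num_particles = 1024
observations = tf.constant([1.0, 1.3, 0.9, 1.7, 1.2])

def propose_and_update_log_weights_fn(step, weighted_particles, seed=None):
  # Bootstrap proposal: sample x[step + 1] from the transition model, so the
  # transition and proposal densities cancel and the incremental log-weight
  # is just the observation log-likelihood p(y[step + 1] | x[step + 1]).
  proposed = tfd.Normal(loc=weighted_particles.particles, scale=0.5).sample(
      seed=seed)
  incremental_log_weights = tfd.Normal(loc=proposed, scale=1.).log_prob(
      tf.gather(observations, step + 1))
  return tfp.experimental.mcmc.WeightedParticles(
      particles=proposed,
      log_weights=weighted_particles.log_weights + incremental_log_weights)

kernel = tfp.experimental.mcmc.SequentialMonteCarlo(
    propose_and_update_log_weights_fn=propose_and_update_log_weights_fn)

# Initial particles from the prior, weighted by the first observation and
# normalized, since the next proposal call expects normalized log-weights.
initial_particles = tfd.Normal(0., 1.).sample(num_particles, seed=1)
state = tfp.experimental.mcmc.WeightedParticles(
    particles=initial_particles,
    log_weights=tf.nn.log_softmax(
        tfd.Normal(loc=initial_particles, scale=1.).log_prob(observations[0])))

results = kernel.bootstrap_results(state)
for t in range(observations.shape[0] - 1):
  state, results = kernel.one_step(state, results, seed=t)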
Args |
propose_and_update_log_weights_fn
|
Python callable with signature
new_weighted_particles = propose_and_update_log_weights_fn(step,
weighted_particles, seed=None). Its input is a
tfp.experimental.mcmc.WeightedParticles structure representing
weighted samples (with normalized weights) from the step-th
target distribution, and it returns another such structure representing
unnormalized weighted samples from the next ((step + 1)-th) target
distribution. This will typically include particles
sampled from a proposal distribution q(x[step + 1] | x[step]), and
weights that account for some or all of: the proposal density,
a transition density p(x[step + 1] | x[step]),
observation weights p(y[step + 1] | x[step + 1]), and/or a backwards
or 'L'-kernel L(x[step] | x[step + 1]). The (log) normalization
constant of the weights is interpreted as the incremental (log) marginal
likelihood. (A sketch of this weight bookkeeping appears after this table.)
|
resample_fn
|
Resampling scheme, specified as a callable with signature
indices = resample_fn(log_probs, event_size, sample_shape, seed),
where log_probs is a Tensor of the same shape as state.log_weights
containing a normalized log-probability for every current
particle, event_size is the number of new particle indices to
generate, sample_shape is the number of independent index sets to
return, and the return value indices is an int Tensor of shape
concat([sample_shape, [event_size, B1, ..., BN]]). Typically one of
tfp.experimental.mcmc.resample_deterministic_minimum_error,
tfp.experimental.mcmc.resample_independent,
tfp.experimental.mcmc.resample_stratified, or
tfp.experimental.mcmc.resample_systematic. (See the resampling sketch
after this table.)
Default value: tfp.experimental.mcmc.resample_systematic.
|
resample_criterion_fn
|
optional Python callable with signature
do_resample = resample_criterion_fn(weighted_particles),
passed an instance of tfp.experimental.mcmc.WeightedParticles. The
return value do_resample determines whether particles are resampled
at the current step. The default behavior is to resample particles
when the effective sample size falls below half of the total number
of particles.
Default value: tfp.experimental.mcmc.ess_below_threshold.
|
unbiased_gradients
|
If True, use the stop-gradient
resampling trick of Scibior, Masrani, and Wood (2021) to
correct for gradient bias introduced by the discrete resampling step.
This will generally increase the variance of stochastic gradients.
Default value: True.
|
name
|
Python str name for ops created by this kernel.
|
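When the proposal is not simply the transition model, the incremental log-weight must also divide out the proposal density, as described for propose_and_update_log_weights_fn above. A minimal sketch of that bookkeeping, reusing the illustrative random-walk/Gaussian-observation model from the example near the top of this page (the 'guided' proposal is a made-up choice, not a library feature):

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
observations = tf.constant([1.0, 1.3, 0.9, 1.7, 1.2])

def propose_and_update_log_weights_fn(step, weighted_particles, seed=None):
  x = weighted_particles.particles
  y_next = tf.gather(observations, step + 1)
  # Illustrative 'guided' proposal q(x[step + 1] | x[step]): nudge particles
  # toward the next observation.
  proposal = tfd.Normal(loc=x + 0.5 * (y_next - x), scale=0.5)
  x_next = proposal.sample(seed=seed)
  incremental_log_weights = (
      tfd.Normal(loc=x, scale=0.5).log_prob(x_next)         # transition p(x'|x)
      + tfd.Normal(loc=x_next, scale=1.).log_prob(y_next)   # observation p(y'|x')
      - proposal.log_prob(x_next))                          # proposal q(x'|x)
  # The returned weights are unnormalized; the kernel interprets their (log)
  # normalization constant as the incremental (log) marginal likelihood.
  return tfp.experimental.mcmc.WeightedParticles(
      particles=x_next,
      log_weights=weighted_particles.log_weights + incremental_log_weights)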
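Resampling can be customized independently of the proposal. A brief sketch, assuming the resample_fn signature documented above and reusing propose_and_update_log_weights_fn from the sketch above; the always-resample criterion is a deliberately simple illustration:

import tensorflow as tf
import tensorflow_probability as tfp

# Drawing ancestor indices directly, per the documented resample_fn signature:
# eight equally weighted particles, eight new indices, no extra sample dims.
log_probs = tf.nn.log_softmax(tf.zeros([8]))
indices = tfp.experimental.mcmc.resample_systematic(
    log_probs, 8, sample_shape=(), seed=3)

def always_resample(weighted_particles):
  # Ignore the effective sample size and resample at every step.
  return tf.constant(True)

kernel = tfp.experimental.mcmc.SequentialMonteCarlo(
    propose_and_update_log_weights_fn=propose_and_update_log_weights_fn,
    resample_fn=tfp.experimental.mcmc.resample_stratified,
    resample_criterion_fn=always_resample)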
Attributes |
experimental_shard_axis_names
|
The shard axis names for members of the state.
|
is_calibrated
|
Returns True if Markov chain converges to specified distribution.
TransitionKernels which are "uncalibrated" are often calibrated by
composing them with the tfp.mcmc.MetropolisHastings TransitionKernel.
|
name
|
|
propose_and_update_log_weights_fn
|
|
resample_criterion_fn
|
|
resample_fn
|
|
unbiased_gradients
|
|
Methods
bootstrap_results
bootstrap_results(
init_state
)
Returns an object with the same type as returned by one_step(...)[1].
| Args |
init_state
|
Tensor or Python list of Tensors representing the
initial state(s) of the Markov chain(s).
|
| Returns |
kernel_results
|
A (possibly nested) tuple, namedtuple or list of
Tensors representing internal calculations made within this function.
|
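Continuing the sketches above, a hedged example of seeding the results structure from an initial set of equally weighted particles (the field layout of the returned results may vary across TFP versions):

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

num_particles = 1024
initial_particles = tfd.Normal(0., 1.).sample(num_particles, seed=1)
initial_state = tfp.experimental.mcmc.WeightedParticles(
    particles=initial_particles,
    log_weights=tf.fill([num_particles],
                        -tf.math.log(tf.cast(num_particles, tf.float32))))
# `kernel` as constructed in the earlier sketches.
results = kernel.bootstrap_results(initial_state)
# `results` is then threaded through successive calls to `one_step`.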
copy
copy(
**override_parameter_kwargs
)
Non-destructively creates a deep copy of the kernel.
| Args |
**override_parameter_kwargs
|
Python String/value dictionary of
initialization arguments to override with new values.
|
| Returns |
new_kernel
|
TransitionKernel object of same type as self,
initialized with the union of self.parameters and
override_parameter_kwargs, with any shared keys overridden by the
value of override_parameter_kwargs, i.e.,
dict(self.parameters, **override_parameter_kwargs).
|
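For example, continuing the sketches above, a copy that differs only in its resampling scheme:

stratified_kernel = kernel.copy(
    resample_fn=tfp.experimental.mcmc.resample_stratified)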
experimental_with_shard_axes
experimental_with_shard_axes(
shard_axis_names
)
Returns a copy of the kernel with the provided shard axis names.
| Args |
shard_axis_names
|
a structure of strings indicating the shard axis names
for each component of this kernel's state.
|
| Returns |
|
A copy of the current kernel with the shard axis information.
|
one_step
one_step(
state, kernel_results, seed=None
)
Takes one Sequential Monte Carlo inference step.
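Continuing the sketches above, a single step consumes the current weighted particles and kernel results and returns updated versions of both; the seed value here is illustrative:

next_state, next_results = kernel.one_step(state, results, seed=0)
# Per the weight convention described at the top of this page, the returned
# log-weights are normalized, and their normalization constant is interpreted
# as the incremental log marginal likelihood.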