Runs one step of the No U-Turn Sampler.
Inherits From: TransitionKernel
```python
tfp.experimental.mcmc.NoUTurnSampler(
    target_log_prob_fn, step_size, max_tree_depth=10, unrolled_leapfrog_steps=1,
    num_trajectories_per_step=1, use_auto_batching=True, stackless=False,
    backend=None, seed=None, name=None
)
```
The No U-Turn Sampler (NUTS) is an adaptive variant of the Hamiltonian Monte
Carlo (HMC) method for MCMC. NUTS adapts the distance traveled in response to
the curvature of the target density. Conceptually, one proposal consists of
reversibly evolving a trajectory through the sample space, continuing until
that trajectory turns back on itself (hence the name, "No U-Turn"). This
class implements one random NUTS step from a given `current_state`.
Mathematical details and derivations can be found in
[Hoffman, Gelman (2011)][1].
The `one_step` function can update multiple chains in parallel. It assumes
that a prefix of leftmost dimensions of `current_state` index independent
chain states (and are therefore updated independently). The output of
`target_log_prob_fn(*current_state)` should sum log-probabilities across all
event dimensions. Slices along the rightmost dimensions may have different
target distributions; for example, `current_state[0][0, ...]` could have a
different target distribution from `current_state[0][1, ...]`. These
semantics are governed by `target_log_prob_fn(*current_state)`. (The number of
independent chains is `tf.size(target_log_prob_fn(*current_state))`.)
When using this sampler, it is important to pick sensible step sizes, or to implement step size adaptation, or both.
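
As a minimal sketch of typical usage (not from the original page; the target distribution, step size, chain count, and the use of `tfp.mcmc.sample_chain` are assumptions for illustration):

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Target: each chain samples a scalar standard Normal. The event is scalar,
# so there are no event dimensions to sum over.
def target_log_prob_fn(x):
  return tfd.Normal(loc=0., scale=1.).log_prob(x)

kernel = tfp.experimental.mcmc.NoUTurnSampler(
    target_log_prob_fn=target_log_prob_fn,
    step_size=0.3)  # hand-picked; see the note on step sizes above

samples = tfp.mcmc.sample_chain(
    num_results=500,
    num_burnin_steps=100,
    current_state=tf.zeros([4]),  # 4 independent chains
    kernel=kernel,
    trace_fn=None)
```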
References
[1] Matthew D. Hoffman, Andrew Gelman. The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo. 2011. https://arxiv.org/pdf/1111.4246.pdf
Args | |
---|---|
`target_log_prob_fn` | Python callable which takes an argument like `current_state` (or `*current_state` if it's a list) and returns its (possibly unnormalized) log-density under the target distribution. Due to limitations of the underlying auto-batching system, `target_log_prob_fn` may be invoked with junk data at some batch indexes, which it must process without crashing. (The results at those indexes are ignored.)
`step_size` | `Tensor` or Python list of `Tensor`s representing the step size for the leapfrog integrator. Must broadcast with the shape of `current_state`. Larger step sizes lead to faster progress, but too-large step sizes make rejection exponentially more likely. When possible, it's often helpful to match per-variable step sizes to the standard deviations of the target distribution in each variable (see the sketch after this table).
`max_tree_depth` | Maximum depth of the tree implicitly built by NUTS. The maximum number of leapfrog steps is bounded by `2**max_tree_depth - 1`, i.e., the number of nodes in a binary tree `max_tree_depth` levels deep. The default setting of 10 takes up to 1023 leapfrog steps.
`unrolled_leapfrog_steps` | The number of leapfrog steps to unroll per tree expansion step. Applies a direct linear multiplier to the maximum trajectory length implied by `max_tree_depth`. Defaults to 1. This parameter can be useful for amortizing the auto-batching control flow overhead.
`num_trajectories_per_step` | Python `int` giving the number of NUTS trajectories to run as "one" step. Setting this higher than 1 may be favorable for performance by giving the auto-batching system the opportunity to batch gradients across consecutive trajectories. The intermediate samples are thinned: only the last sample from the run (in each batch member) is returned.
`use_auto_batching` | Boolean. If `False`, do not invoke the auto-batching system; operate on batch size 1 only.
`stackless` | Boolean. If `True`, invoke the stackless version of the auto-batching system. Only works in Eager mode.
`backend` | Auto-batching backend object. Falls back to a default `TensorFlowBackend()`.
`seed` | Python integer to seed the random number generator.
`name` | Python `str` name prefixed to Ops created by this function. Default value: `None` (i.e., 'nuts_kernel').
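
As a sketch of the per-variable step size advice above (the target, the scale values, and the 0.5 multiplier are illustrative assumptions, not from the docs):

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Marginal standard deviations differ by two orders of magnitude.
scales = tf.constant([0.1, 1.0, 10.0])

def target_log_prob_fn(x):
  # Independent Normal components; sum log-probs over the event dimension.
  return tf.reduce_sum(tfd.Normal(loc=0., scale=scales).log_prob(x), axis=-1)

kernel = tfp.experimental.mcmc.NoUTurnSampler(
    target_log_prob_fn=target_log_prob_fn,
    # One step size per variable, proportional to its marginal std deviation;
    # this broadcasts against the rightmost dimension of the state.
    step_size=0.5 * scales)
```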
Attributes | |
---|---|
`is_calibrated` | Returns `True` if the Markov chain converges to the specified distribution.
`parameters` | Return `dict` of `__init__` arguments and their values.
Methods
bootstrap_results
```python
bootstrap_results(
    init_state
)
```
Creates initial `previous_kernel_results` using a supplied `state`.
copy
```python
copy(
    **override_parameter_kwargs
)
```
Non-destructively creates a deep copy of the kernel.
Args | |
---|---|
`**override_parameter_kwargs` | Python String/value dictionary of initialization arguments to override with new values.

Returns | |
---|---|
`new_kernel` | `TransitionKernel` object of same type as `self`, initialized with the union of `self.parameters` and `override_parameter_kwargs`, with any shared keys overridden by the value of `override_parameter_kwargs`, i.e., `dict(self.parameters, **override_parameter_kwargs)`.
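
A small sketch of `copy` (the kernel and step size values are illustrative assumptions): overriding `step_size` yields a new kernel and leaves the original unchanged.

```python
import tensorflow_probability as tfp

kernel = tfp.experimental.mcmc.NoUTurnSampler(
    target_log_prob_fn=lambda x: -0.5 * x**2,  # unnormalized standard Normal
    step_size=0.5)

# New kernel with a smaller step size; `kernel` itself is not modified.
smaller_step_kernel = kernel.copy(step_size=0.1)
```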
one_step
```python
one_step(
    current_state, previous_kernel_results
)
```
Runs one iteration of the No U-Turn Sampler.
Args | |
---|---|
`current_state` | `Tensor` or Python list of `Tensor`s representing the current state(s) of the Markov chain(s). The first `r` dimensions index independent chains, `r = tf.rank(target_log_prob_fn(*current_state))`.
`previous_kernel_results` | `collections.namedtuple` containing `Tensor`s representing values from previous calls to this function (or from the `bootstrap_results` function).

Returns | |
---|---|
`next_state` | `Tensor` or Python list of `Tensor`s representing the state(s) of the Markov chain(s) after taking `self.num_trajectories_per_step` steps. Has same type and shape as `current_state`.
`kernel_results` | `collections.namedtuple` of internal calculations used to advance the chain.
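
As a sketch of driving the kernel by hand rather than through `tfp.mcmc.sample_chain` (the target, state shapes, step size, and loop length are illustrative assumptions): `bootstrap_results` produces the initial kernel results, and each `one_step` call advances every chain.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

def target_log_prob_fn(x):
  return tfd.Normal(loc=0., scale=1.).log_prob(x)

kernel = tfp.experimental.mcmc.NoUTurnSampler(
    target_log_prob_fn=target_log_prob_fn, step_size=0.3)

state = tf.zeros([4])                             # 4 independent chains
kernel_results = kernel.bootstrap_results(state)  # initial kernel results

states = []
for _ in range(100):
  state, kernel_results = kernel.one_step(state, kernel_results)
  states.append(state)
```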