
oryx.distributions.PoissonLogNormalQuadratureCompound

PoissonLogNormalQuadratureCompound distribution.

Inherits From: Distribution

The PoissonLogNormalQuadratureCompound is an approximation to a Poisson-LogNormal compound distribution, i.e.,

```none
p(k | loc, scale)
= int_{R_+} dl LogNormal(l | loc, scale) Poisson(k | l)
approx= sum{ prob[d] Poisson(k | lambda(grid[d])) : d=0, ..., deg-1 }
```

By default, the grid is chosen as quantiles of the LogNormal distribution parameterized by `loc` and `scale`, and the `prob` vector is `[1. / quadrature_size] * quadrature_size`.

In the non-approximation case, a draw from the LogNormal prior represents the Poisson rate parameter. Unfortunately, the non-approximate distribution lacks an analytical probability density function (pdf). Therefore the PoissonLogNormalQuadratureCompound class implements an approximation based on quadrature.

Mathematical Details

The PoissonLogNormalQuadratureCompound approximates a Poisson-LogNormal compound distribution. Using variable substitution and numerical quadrature (default: based on LogNormal quantiles), we can redefine the distribution to be a parameter-less convex combination of `deg` different Poisson samples.

That is, defined over positive integers, this distribution is parameterized by a (batch of) loc and scale scalars.

The probability density function (pdf) is,

```none
pdf(k | loc, scale, deg)
  = sum{ prob[d] Poisson(k | lambda=exp(grid[d]))
        : d=0, ..., deg-1 }
```
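
To make the quadrature rule above concrete, here is a minimal sketch of the approximation (not the library's internal code): it places a uniform-weight grid at LogNormal quantile midpoints and averages the resulting Poisson pmfs. The midpoint placement is an illustrative assumption; the default `quadrature_scheme_lognormal_quantiles` may choose the grid differently.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

loc, scale, deg = 0., 1., 10
# Illustrative grid: LogNormal quantiles at the midpoints (d + 0.5) / deg.
edges = (tf.range(deg, dtype=tf.float32) + 0.5) / deg
rates = tfd.LogNormal(loc, scale).quantile(edges)   # lambda(grid[d])
probs = tf.fill([deg], 1. / deg)                     # uniform weights prob[d]

# Approximate pmf at k = 3: a convex combination of Poisson pmfs.
k = tf.constant(3.)
approx_pmf = tf.reduce_sum(probs * tfd.Poisson(rate=rates).prob(k))
```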

Examples

```python
import tensorflow_probability as tfp

tfd = tfp.distributions

# Create two batches of PoissonLogNormalQuadratureCompounds, one with
# prior `loc = 0.` and another with `loc = -0.5`. In both cases `scale = 1.`
pln = tfd.PoissonLogNormalQuadratureCompound(
    loc=[0., -0.5],
    scale=1.,
    quadrature_size=10,
    validate_args=True)
```
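
A brief, illustrative continuation of the example (TF backend, arbitrary seed): draw samples for each batch member and score them under the quadrature approximation.

```python
samples = pln.sample(4, seed=42)     # shape: [4, 2]
log_probs = pln.log_prob(samples)    # shape: [4, 2]
```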

<!-- Tabular view -->
 <table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Args</h2></th></tr>

<tr>
<td>
`loc`
</td>
<td>
`float`-like (batch of) scalar `Tensor`; the location parameter of
the LogNormal prior.
</td>
</tr><tr>
<td>
`scale`
</td>
<td>
`float`-like (batch of) scalar `Tensor`; the scale parameter of
the LogNormal prior.
</td>
</tr><tr>
<td>
`quadrature_size`
</td>
<td>
Python `int` scalar representing the number of quadrature
points.
</td>
</tr><tr>
<td>
`quadrature_fn`
</td>
<td>
Python callable taking `loc`, `scale`,
`quadrature_size`, `validate_args` and returning `tuple(grid, probs)`
representing the LogNormal grid and corresponding normalized weight.
Default value: `quadrature_scheme_lognormal_quantiles`.
</td>
</tr><tr>
<td>
`validate_args`
</td>
<td>
Python `bool`, default `False`. When `True` distribution
parameters are checked for validity despite possibly degrading runtime
performance. When `False` invalid inputs may silently render incorrect
outputs.
</td>
</tr><tr>
<td>
`allow_nan_stats`
</td>
<td>
Python `bool`, default `True`. When `True`,
statistics (e.g., mean, mode, variance) use the value '`NaN`' to
indicate the result is undefined. When `False`, an exception is raised
if one or more of the statistic's batch members are undefined.
</td>
</tr><tr>
<td>
`name`
</td>
<td>
Python `str` name prefixed to Ops created by this class.
</td>
</tr>
</table>



<!-- Tabular view -->
 <table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Raises</h2></th></tr>

<tr>
<td>
`TypeError`
</td>
<td>
if `quadrature_grid` and `quadrature_probs` have different base
`dtype`.
</td>
</tr>
</table>





<!-- Tabular view -->
 <table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2"><h2 class="add-link">Attributes</h2></th></tr>

<tr>
<td>
`allow_nan_stats`
</td>
<td>
Python `bool` describing behavior when a stat is undefined.

Stats return +/- infinity when it makes sense. E.g., the variance of a
Cauchy distribution is infinity. However, sometimes the statistic is
undefined, e.g., if a distribution's pdf does not achieve a maximum within
the support of the distribution, the mode is undefined. If the mean is
undefined, then by definition the variance is undefined. E.g. the mean for
Student's T for df = 1 is undefined (no clear way to say it is either + or -
infinity), so the variance = E[(X - mean)**2] is also undefined.
</td>
</tr><tr>
<td>
`batch_shape`
</td>
<td>
Shape of a single sample from a single event index as a `TensorShape`.

May be partially defined or unknown.

The batch dimensions are indexes into independent, non-identical
parameterizations of this distribution.
</td>
</tr><tr>
<td>
`dtype`
</td>
<td>
The `DType` of `Tensor`s handled by this `Distribution`.
</td>
</tr><tr>
<td>
`event_shape`
</td>
<td>
Shape of a single sample from a single batch as a `TensorShape`.

May be partially defined or unknown.
</td>
</tr><tr>
<td>
`experimental_shard_axis_names`
</td>
<td>
The list or structure of lists of active shard axis names.
</td>
</tr><tr>
<td>
`loc`
</td>
<td>
Location parameter of the LogNormal prior.
</td>
</tr><tr>
<td>
`name`
</td>
<td>
Name prepended to all ops created by this `Distribution`.
</td>
</tr><tr>
<td>
`parameters`
</td>
<td>
Dictionary of parameters used to instantiate this `Distribution`.
</td>
</tr><tr>
<td>
`quadrature_size`
</td>
<td>
Number of quadrature points used by the quadrature-compound approximation.
</td>
</tr><tr>
<td>
`reparameterization_type`
</td>
<td>
Describes how samples from the distribution are reparameterized.

Currently this is one of the static instances
`tfd.FULLY_REPARAMETERIZED` or `tfd.NOT_REPARAMETERIZED`.
</td>
</tr><tr>
<td>
`scale`
</td>
<td>
Scale parameter of the LogNormal prior.
</td>
</tr><tr>
<td>
`trainable_variables`
</td>
<td>

</td>
</tr><tr>
<td>
`validate_args`
</td>
<td>
Python `bool` indicating possibly expensive checks are enabled.
</td>
</tr><tr>
<td>
`variables`
</td>
<td>

</td>
</tr>
</table>



## Methods

<h3 id="batch_shape_tensor"><code>batch_shape_tensor</code></h3>

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>batch_shape_tensor(
    name=&#x27;batch_shape_tensor&#x27;
)
</code></pre>

Shape of a single sample from a single event index as a 1-D `Tensor`.

The batch dimensions are indexes into independent, non-identical
parameterizations of this distribution.

<!-- Tabular view -->
 <table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Args</th></tr>

<tr>
<td>
`name`
</td>
<td>
name to give to the op
</td>
</tr>
</table>



<!-- Tabular view -->
 <table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Returns</th></tr>

<tr>
<td>
`batch_shape`
</td>
<td>
`Tensor`.
</td>
</tr>
</table>



<h3 id="cdf"><code>cdf</code></h3>

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>cdf(
    value, name=&#x27;cdf&#x27;, **kwargs
)
</code></pre>

Cumulative distribution function.

Given random variable `X`, the cumulative distribution function `cdf` is:

```none
cdf(x) := P[X <= x]
```

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
cdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

<h3 id="copy"><code>copy</code></h3>

Creates a deep copy of the distribution.

Args
**override_parameters_kwargs String/value dictionary of initialization arguments to override with new values.

Returns
distribution A new instance of type(self) initialized from the union of self.parameters and override_parameters_kwargs, i.e., dict(self.parameters, **override_parameters_kwargs).
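
For instance, a hypothetical copy of the example distribution above, keeping the quadrature settings but overriding the location prior:

```python
pln_shifted = pln.copy(loc=[1., 0.5])
```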

<h3 id="covariance"><code>covariance</code></h3>

Covariance.

Covariance is (possibly) defined only for non-scalar-event distributions.

For example, for a length-k, vector-valued distribution, it is calculated as,

Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]

where Cov is a (batch of) k x k matrix, 0 <= (i, j) < k, and E denotes expectation.

Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), Covariance shall return a (batch of) matrices under some vectorization of the events, i.e.,

Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]

where Cov is a (batch of) k' x k' matrices, 0 <= (i, j) < k' = reduce_prod(event_shape), and Vec is some function mapping indices of this distribution's event dimensions to indices of a length-k' vector.

Args
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
covariance Floating-point Tensor with shape [B1, ..., Bn, k', k'] where the first n dimensions are batch coordinates and k' = reduce_prod(self.event_shape).

<h3 id="cross_entropy"><code>cross_entropy</code></h3>

Computes the (Shannon) cross entropy.

Denote this distribution (self) by P and the other distribution by Q. Assuming P, Q are absolutely continuous with respect to one another and permit densities p(x) dr(x) and q(x) dr(x), (Shannon) cross entropy is defined as:

H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x)

where F denotes the support of the random variable X ~ P.

Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.

Returns
cross_entropy self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of (Shannon) cross entropy.

<h3 id="entropy"><code>entropy</code></h3>

Shannon entropy in nats.

<h3 id="event_shape_tensor"><code>event_shape_tensor</code></h3>

Shape of a single sample from a single batch as a 1-D int32 Tensor.

Args
name name to give to the op

Returns
event_shape Tensor.

<h3 id="experimental_default_event_space_bijector"><code>experimental_default_event_space_bijector</code></h3>

Bijector mapping the reals (R**n) to the event space of the distribution.

Distributions with continuous support may implement _default_event_space_bijector which returns a subclass of tfp.bijectors.Bijector that maps R**n to the distribution's event space. For example, the default bijector for the Beta distribution is tfp.bijectors.Sigmoid(), which maps the real line to [0, 1], the support of the Beta distribution. The default bijector for the CholeskyLKJ distribution is tfp.bijectors.CorrelationCholesky, which maps R^(k * (k-1) // 2) to the submanifold of k x k lower triangular matrices with ones along the diagonal.

The purpose of experimental_default_event_space_bijector is to enable gradient descent in an unconstrained space for Variational Inference and Hamiltonian Monte Carlo methods. Some effort has been made to choose bijectors such that the tails of the distribution in the unconstrained space are between Gaussian and Exponential.

For distributions with discrete event space, or for which TFP currently lacks a suitable bijector, this function returns None.

Args
*args Passed to implementation _default_event_space_bijector.
**kwargs Passed to implementation _default_event_space_bijector.

Returns
event_space_bijector Bijector instance or None.

<h3 id="experimental_sample_and_log_prob"><code>experimental_sample_and_log_prob</code></h3>

Samples from this distribution and returns the log density of the sample.

The default implementation simply calls sample and log_prob:

```python
def _sample_and_log_prob(self, sample_shape, seed, **kwargs):
  x = self.sample(sample_shape=sample_shape, seed=seed, **kwargs)
  return x, self.log_prob(x, **kwargs)
```

However, some subclasses may provide more efficient and/or numerically stable implementations.

Args
sample_shape integer Tensor desired shape of samples to draw. Default value: ().
seed PRNG seed; see tfp.random.sanitize_seed for details. Default value: None.
name name to give to the op. Default value: 'sample_and_log_prob'.
**kwargs Named arguments forwarded to subclass implementation.

Returns
samples a Tensor, or structure of Tensors, with prepended dimensions sample_shape.
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.
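
A hedged usage sketch, continuing the earlier example: draw three samples per batch member and obtain their log densities in a single call.

```python
x, lp = pln.experimental_sample_and_log_prob([3], seed=42)
# x.shape == [3, 2]; lp.shape == [3, 2]
```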

<h3 id="is_scalar_batch"><code>is_scalar_batch</code></h3>

Indicates that batch_shape == [].

Args
name Python str prepended to names of ops created by this function.

Returns
is_scalar_batch bool scalar Tensor.

<h3 id="is_scalar_event"><code>is_scalar_event</code></h3>

Indicates that event_shape == [].

Args
name Python str prepended to names of ops created by this function.

Returns
is_scalar_event bool scalar Tensor.

<h3 id="kl_divergence"><code>kl_divergence</code></h3>

Computes the Kullback--Leibler divergence.

Denote this distribution (self) by p and the other distribution by q. Assuming p, q are absolutely continuous with respect to reference measure r, the KL divergence is defined as:

```none
KL[p, q] = E_p[log(p(X)/q(X))]
         = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x)
         = H[p, q] - H[p]
```

where F denotes the support of the random variable X ~ p, H[., .] denotes (Shannon) cross entropy, and H[.] denotes (Shannon) entropy.

Args
other tfp.distributions.Distribution instance.
name Python str prepended to names of ops created by this function.

Returns
kl_divergence self.dtype Tensor with shape [B1, ..., Bn] representing n different calculations of the Kullback-Leibler divergence.

<h3 id="log_cdf"><code>log_cdf</code></h3>

Log cumulative distribution function.

Given random variable X, the cumulative distribution function cdf is:

log_cdf(x) := Log[ P[X <= x] ]

Often, a numerical approximation can be used for log_cdf(x) that yields a more accurate answer than simply taking the logarithm of the cdf when x << -1.

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
logcdf a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

<h3 id="log_prob"><code>log_prob</code></h3>

Log probability density/mass function.

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

<h3 id="log_survival_function"><code>log_survival_function</code></h3>

Log survival function.

Given random variable X, the survival function is defined:

```none
log_survival_function(x) = Log[ P[X > x] ]
                         = Log[ 1 - P[X <= x] ]
                         = Log[ 1 - cdf(x) ]
```

Typically, different numerical approximations can be used for the log survival function, which are more accurate than 1 - cdf(x) when x >> 1.

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

<h3 id="mean"><code>mean</code></h3>

Mean.

<h3 id="mode"><code>mode</code></h3>

Mode.

<h3 id="param_shapes"><code>param_shapes</code></h3>

Shapes of parameters given the desired shape of a call to sample().

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample().

Subclasses should override class method _param_shapes.

Args
sample_shape Tensor or python list/tuple. Desired shape of a call to sample().
name name to prepend ops with.

Returns
dict of parameter name to Tensor shapes.

<h3 id="param_static_shapes"><code>param_static_shapes</code></h3>

param_shapes with static (i.e. TensorShape) shapes.

This is a class method that describes what key/value arguments are required to instantiate the given Distribution so that a particular shape is returned for that instance's call to sample(). Assumes that the sample's shape is known statically.

Subclasses should override class method _param_shapes to return constant-valued tensors when constant values are fed.

Args
sample_shape TensorShape or python list/tuple. Desired shape of a call to sample().

Returns
dict of parameter name to TensorShape.

Raises
ValueError if sample_shape is a TensorShape and is not fully defined.

<h3 id="parameter_properties"><code>parameter_properties</code></h3>

Returns a dict mapping constructor arg names to property annotations.

This dict should include an entry for each of the distribution's Tensor-valued constructor arguments.

Distribution subclasses are not required to implement _parameter_properties, so this method may raise NotImplementedError. Providing a _parameter_properties implementation enables several advanced features, including:

  • Distribution batch slicing (sliced_distribution = distribution[i:j]).
  • Automatic inference of _batch_shape and _batch_shape_tensor, which must otherwise be computed explicitly.
  • Automatic instantiation of the distribution within TFP's internal property tests.
  • Automatic construction of 'trainable' instances of the distribution using appropriate bijectors to avoid violating parameter constraints. This enables the distribution family to be used easily as a surrogate posterior in variational inference.

In the future, parameter property annotations may enable additional functionality; for example, returning Distribution instances from tf.vectorized_map.

Args
dtype Optional float dtype to assume for continuous-valued parameters. Some constraining bijectors require advance knowledge of the dtype because certain constants (e.g., tfb.Softplus.low) must be instantiated with the same dtype as the values to be transformed.
num_classes Optional int Tensor number of classes to assume when inferring the shape of parameters for categorical-like distributions. Otherwise ignored.

Returns
parameter_properties A `str -> tfp.python.internal.parameter_properties.ParameterProperties` dict mapping constructor argument names to `ParameterProperties` instances.

Raises
NotImplementedError if the distribution class does not implement _parameter_properties.

<h3 id="poisson_and_mixture_distributions"><code>poisson_and_mixture_distributions</code></h3>

Returns the Poisson and Mixture distribution parameterized by the quadrature grid and weights.
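
Assuming, as the description suggests, that the method returns the pair of component distributions, a hypothetical inspection of the quadrature components backing the approximation might look like:

```python
# Hypothetical: unpack the Poisson (parameterized at the grid points) and the
# mixture distribution (carrying the normalized quadrature weights).
poisson_dist, mixture_dist = pln.poisson_and_mixture_distributions()
```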

<h3 id="prob"><code>prob</code></h3>

Probability density/mass function.

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

<h3 id="quantile"><code>quantile</code></h3>

Quantile function. Aka 'inverse cdf' or 'percent point function'.

Given random variable X and p in [0, 1], the quantile is:

quantile(p) := x such that P[X <= x] == p

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
quantile a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

<h3 id="sample"><code>sample</code></h3>

Generate samples of the specified shape.

Note that a call to sample() without arguments will generate a single sample.

Args
sample_shape 0D or 1D int32 Tensor. Shape of the generated samples.
seed PRNG seed; see tfp.random.sanitize_seed for details.
name name to give to the op.
**kwargs Named arguments forwarded to subclass implementation.

Returns
samples a Tensor with prepended dimensions sample_shape.

<h3 id="stddev"><code>stddev</code></h3>

Standard deviation.

Standard deviation is defined as,

stddev = E[(X - E[X])**2]**0.5

where X is the random variable associated with this distribution, E denotes expectation, and stddev.shape = batch_shape + event_shape.

Args
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
stddev Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().

<h3 id="survival_function"><code>survival_function</code></h3>

Survival function.

Given random variable X, the survival function is defined:

```none
survival_function(x) = P[X > x]
                     = 1 - P[X <= x]
                     = 1 - cdf(x).
```

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

<h3 id="unnormalized_log_prob"><code>unnormalized_log_prob</code></h3>

Potentially unnormalized log probability density/mass function.

This function is similar to log_prob, but does not require that the return value be normalized. (Normalization here refers to the total integral of probability being one, as it should be by definition for any probability distribution.) This is useful, for example, for distributions where the normalization constant is difficult or expensive to compute. By default, this simply calls log_prob.

Args
value float or double Tensor.
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
unnormalized_log_prob a Tensor of shape sample_shape(x) + self.batch_shape with values of type self.dtype.

<h3 id="variance"><code>variance</code></h3>

Variance.

Variance is defined as,

Var = E[(X - E[X])**2]

where X is the random variable associated with this distribution, E denotes expectation, and Var.shape = batch_shape + event_shape.

Args
name Python str prepended to names of ops created by this function.
**kwargs Named arguments forwarded to subclass implementation.

Returns
variance Floating-point Tensor with shape identical to batch_shape + event_shape, i.e., the same shape as self.mean().

<h3 id="__getitem__"><code>__getitem__</code></h3>

Slices the batch axes of this distribution, returning a new instance.

```python
b = tfd.Bernoulli(logits=tf.zeros([3, 5, 7, 9]))
b.batch_shape  # => [3, 5, 7, 9]
b2 = b[:, tf.newaxis, ..., -2:, 1::2]
b2.batch_shape  # => [3, 1, 5, 2, 4]

x = tf.random.stateless_normal([5, 3, 2, 2], seed=[1, 2])
cov = tf.matmul(x, x, transpose_b=True)
chol = tf.linalg.cholesky(cov)
loc = tf.random.stateless_normal([4, 1, 3, 1], seed=[2, 3])
mvn = tfd.MultivariateNormalTriL(loc, chol)
mvn.batch_shape  # => [4, 5, 3]
mvn.event_shape  # => [2]
mvn2 = mvn[:, 3:, ..., ::-1, tf.newaxis]
mvn2.batch_shape  # => [4, 2, 3, 1]
mvn2.event_shape  # => [2]
```

Args
slices slices from the [] operator

Returns
dist A new tfd.Distribution instance with sliced parameters.

<h3 id="__iter__"><code>__iter__</code></h3>