tfp.distributions.Weibull

<!-- Stable --> <table class="tfo-notebook-buttons tfo-api nocontent" align="left"> <td> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/weibull.py#L37-L219"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub </a> </td> </table>

The Weibull distribution with `concentration` and `scale` parameters.

Inherits From: [`TransformedDistribution`](../../tfp/distributions/TransformedDistribution), [`Distribution`](../../tfp/distributions/Distribution)

<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>tfp.distributions.Weibull( concentration, scale, validate_args=False, allow_nan_stats=True, name='Weibull' ) </code></pre>

<!-- Placeholder for "Used in" -->

#### Mathematical details

The probability density function (pdf) of this distribution is,

```none
pdf(x; lambda, k) = k / lambda * (x / lambda) ** (k - 1) * exp(-(x / lambda) ** k)
```

where `concentration = k` and `scale = lambda`.

The cumulative distribution function (cdf) of this distribution is,

```none
cdf(x; lambda, k) = 1 - exp(-(x / lambda) ** k)
```

The Weibull distribution includes the Exponential and Rayleigh distributions as special cases:

```none
Exponential(rate) = Weibull(concentration=1., scale=1. / rate)
Rayleigh(scale) = Weibull(concentration=2., scale=sqrt(2.) * scale)
```

#### Examples

Example of initializing a single distribution.

```python
tfd = tfp.distributions

# Define a single scalar Weibull distribution.
dist = tfd.Weibull(concentration=1., scale=3.)

# Evaluate the cdf at 1, returning a scalar.
dist.cdf(1.)
```

Example of initializing a 3-batch of distributions with varying concentrations and scales.

```python
tfd = tfp.distributions

# Define a 3-batch of Weibull distributions.
scale = [1., 3., 45.]
concentration = [2.5, 22., 7.]
dist = tfd.Weibull(concentration=concentration, scale=scale)

# Evaluate the cdfs at 1.
dist.cdf(1.)  # shape: [3]
```

<!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2"><h2 class="add-link">Args</h2></th></tr> <tr> <td> `concentration` </td> <td> Positive floating-point `Tensor`, the concentration parameter of the distribution. Must contain only positive values. </td> </tr><tr> <td> `scale` </td> <td> Positive floating-point `Tensor`, the scale parameter of the distribution. Must contain only positive values. </td> </tr><tr> <td> `validate_args` </td> <td> Python `bool`, default `False`. When `True`, distribution parameters are checked for validity at runtime, possibly at some performance cost; when `False`, invalid inputs may silently produce incorrect outputs. </td> </tr><tr> <td> `allow_nan_stats` </td> <td> Python `bool`, default `True`. When `True`, statistics (e.g., mean, variance) use the value `NaN` to indicate the result is undefined; when `False`, an exception is raised if one or more of a statistic's batch members are undefined. </td> </tr><tr> <td> `name` </td> <td> Python `str` name given to ops managed by this class. Default value: `'Weibull'`. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2"><h2 class="add-link">Raises</h2></th></tr> <tr> <td> `TypeError` </td> <td> if `concentration` and `scale` have different dtypes. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2"><h2 class="add-link">Attributes</h2></th></tr> <tr> <td> `allow_nan_stats` </td> <td> Python `bool` describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity.
However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined. </td> </tr><tr> <td> `batch_shape` </td> <td> Shape of a single sample from a single event index as a `TensorShape`. May be partially defined or unknown. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. </td> </tr><tr> <td> `bijector` </td> <td> Function transforming x => y. </td> </tr><tr> <td> `concentration` </td> <td> Distribution parameter for the concentration. </td> </tr><tr> <td> `distribution` </td> <td> Base distribution, p(x). </td> </tr><tr> <td> `dtype` </td> <td> The `DType` of `Tensor`s handled by this `Distribution`. </td> </tr><tr> <td> `event_shape` </td> <td> Shape of a single sample from a single batch as a `TensorShape`. May be partially defined or unknown. </td> </tr><tr> <td> `name` </td> <td> Name prepended to all ops created by this `Distribution`. </td> </tr><tr> <td> `name_scope` </td> <td> Returns a <a href="https://www.tensorflow.org/api_docs/python/tf/name_scope"><code>tf.name_scope</code></a> instance for this class. </td> </tr><tr> <td> `parameters` </td> <td> Dictionary of parameters used to instantiate this `Distribution`. </td> </tr><tr> <td> `reparameterization_type` </td> <td> Describes how samples from the distribution are reparameterized. Currently this is one of the static instances `tfd.FULLY_REPARAMETERIZED` or `tfd.NOT_REPARAMETERIZED`. </td> </tr><tr> <td> `scale` </td> <td> Distribution parameter for scale. </td> </tr><tr> <td> `submodules` </td> <td> Sequence of all sub-modules. Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on). <pre class="devsite-click-to-copy prettyprint lang-py"> <code class="devsite-terminal" data-terminal-prefix="&gt;&gt;&gt;">a = tf.Module()</code> <code class="devsite-terminal" data-terminal-prefix="&gt;&gt;&gt;">b = tf.Module()</code> <code class="devsite-terminal" data-terminal-prefix="&gt;&gt;&gt;">c = tf.Module()</code> <code class="devsite-terminal" data-terminal-prefix="&gt;&gt;&gt;">a.b = b</code> <code class="devsite-terminal" data-terminal-prefix="&gt;&gt;&gt;">b.c = c</code> <code class="devsite-terminal" data-terminal-prefix="&gt;&gt;&gt;">list(a.submodules) == [b, c]</code> <code class="no-select nocode">True</code> <code class="devsite-terminal" data-terminal-prefix="&gt;&gt;&gt;">list(b.submodules) == [c]</code> <code class="no-select nocode">True</code> <code class="devsite-terminal" data-terminal-prefix="&gt;&gt;&gt;">list(c.submodules) == []</code> <code class="no-select nocode">True</code> </pre> </td> </tr><tr> <td> `trainable_variables` </td> <td> Sequence of trainable variables owned by this module and its submodules. Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don't expect the return value to change. </td> </tr><tr> <td> `validate_args` </td> <td> Python `bool` indicating possibly expensive checks are enabled. </td> </tr><tr> <td> `variables` </td> <td> Sequence of variables owned by this module and its submodules. 
Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don't expect the return value to change. </td> </tr> </table> ## Methods <h3 id="batch_shape_tensor"><code>batch_shape_tensor</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L772-L805">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>batch_shape_tensor( name='batch_shape_tensor' ) </code></pre> Shape of a single sample from a single event index as a 1-D `Tensor`. The batch dimensions are indexes into independent, non-identical parameterizations of this distribution. <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `name` </td> <td> name to give to the op </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `batch_shape` </td> <td> `Tensor`. </td> </tr> </table> <h3 id="cdf"><code>cdf</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L1037-L1055">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>cdf( value, name='cdf', **kwargs ) </code></pre> Cumulative distribution function. Given random variable `X`, the cumulative distribution function `cdf` is: ```none cdf(x) := P[X <= x] ``` <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `value` </td> <td> `float` or `double` `Tensor`. </td> </tr><tr> <td> `name` </td> <td> Python `str` prepended to names of ops created by this function. </td> </tr><tr> <td> `**kwargs` </td> <td> Named arguments forwarded to subclass implementation. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `cdf` </td> <td> a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. </td> </tr> </table> <h3 id="copy"><code>copy</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L738-L766">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>copy( **override_parameters_kwargs ) </code></pre> Creates a deep copy of the distribution. Note: the copy distribution may continue to depend on the original initialization arguments. <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `**override_parameters_kwargs` </td> <td> String/value dictionary of initialization arguments to override with new values. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `distribution` </td> <td> A new instance of `type(self)` initialized from the union of self.parameters and override_parameters_kwargs, i.e., `dict(self.parameters, **override_parameters_kwargs)`. 
</td> </tr> </table> <h3 id="covariance"><code>covariance</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L1266-L1304">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>covariance( name='covariance', **kwargs ) </code></pre> Covariance. Covariance is (possibly) defined only for non-scalar-event distributions. For example, for a length-`k`, vector-valued distribution, it is calculated as, ```none Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])] ``` where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` denotes expectation. Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices under some vectorization of the events, i.e., ```none Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above] ``` where `Cov` is a (batch of) `k' x k'` matrices, `0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function mapping indices of this distribution's event dimensions to indices of a length-`k'` vector. <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `name` </td> <td> Python `str` prepended to names of ops created by this function. </td> </tr><tr> <td> `**kwargs` </td> <td> Named arguments forwarded to subclass implementation. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `covariance` </td> <td> Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` where the first `n` dimensions are batch coordinates and `k' = reduce_prod(self.event_shape)`. </td> </tr> </table> <h3 id="cross_entropy"><code>cross_entropy</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L1319-L1342">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>cross_entropy( other, name='cross_entropy' ) </code></pre> Computes the (Shannon) cross entropy. Denote this distribution (`self`) by `P` and the `other` distribution by `Q`. Assuming `P, Q` are absolutely continuous with respect to one another and permit densities `p(x) dr(x)` and `q(x) dr(x)`, (Shannon) cross entropy is defined as: ```none H[P, Q] = E_p[-log q(X)] = -int_F p(x) log q(x) dr(x) ``` where `F` denotes the support of the random variable `X ~ P`. `other` types with built-in registrations: `Weibull` <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `other` </td> <td> <a href="../../tfp/distributions/Distribution"><code>tfp.distributions.Distribution</code></a> instance. </td> </tr><tr> <td> `name` </td> <td> Python `str` prepended to names of ops created by this function. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `cross_entropy` </td> <td> `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of (Shannon) cross entropy. 
</td> </tr> </table> <h3 id="entropy"><code>entropy</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L1144-L1147">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>entropy( name='entropy', **kwargs ) </code></pre> Shannon entropy in nats. <h3 id="event_shape_tensor"><code>event_shape_tensor</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L837-L859">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>event_shape_tensor( name='event_shape_tensor' ) </code></pre> Shape of a single sample from a single batch as a 1-D int32 `Tensor`. <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `name` </td> <td> name to give to the op </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `event_shape` </td> <td> `Tensor`. </td> </tr> </table> <h3 id="is_scalar_batch"><code>is_scalar_batch</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L890-L902">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>is_scalar_batch( name='is_scalar_batch' ) </code></pre> Indicates that `batch_shape == []`. <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `name` </td> <td> Python `str` prepended to names of ops created by this function. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `is_scalar_batch` </td> <td> `bool` scalar `Tensor`. </td> </tr> </table> <h3 id="is_scalar_event"><code>is_scalar_event</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L876-L888">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>is_scalar_event( name='is_scalar_event' ) </code></pre> Indicates that `event_shape == []`. <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `name` </td> <td> Python `str` prepended to names of ops created by this function. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `is_scalar_event` </td> <td> `bool` scalar `Tensor`. </td> </tr> </table> <h3 id="kl_divergence"><code>kl_divergence</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L1348-L1379">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>kl_divergence( other, name='kl_divergence' ) </code></pre> Computes the Kullback--Leibler divergence. Denote this distribution (`self`) by `p` and the `other` distribution by `q`. 
Assuming `p, q` are absolutely continuous with respect to reference measure `r`, the KL divergence is defined as: ```none KL[p, q] = E_p[log(p(X)/q(X))] = -int_F p(x) log q(x) dr(x) + int_F p(x) log p(x) dr(x) = H[p, q] - H[p] ``` where `F` denotes the support of the random variable `X ~ p`, `H[., .]` denotes (Shannon) cross entropy, and `H[.]` denotes (Shannon) entropy. `other` types with built-in registrations: `Weibull` <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `other` </td> <td> <a href="../../tfp/distributions/Distribution"><code>tfp.distributions.Distribution</code></a> instance. </td> </tr><tr> <td> `name` </td> <td> Python `str` prepended to names of ops created by this function. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `kl_divergence` </td> <td> `self.dtype` `Tensor` with shape `[B1, ..., Bn]` representing `n` different calculations of the Kullback-Leibler divergence. </td> </tr> </table> <h3 id="log_cdf"><code>log_cdf</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L1002-L1024">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>log_cdf( value, name='log_cdf', **kwargs ) </code></pre> Log cumulative distribution function. Given random variable `X`, the cumulative distribution function `cdf` is: ```none log_cdf(x) := Log[ P[X <= x] ] ``` Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`. <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `value` </td> <td> `float` or `double` `Tensor`. </td> </tr><tr> <td> `name` </td> <td> Python `str` prepended to names of ops created by this function. </td> </tr><tr> <td> `**kwargs` </td> <td> Named arguments forwarded to subclass implementation. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `logcdf` </td> <td> a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. </td> </tr> </table> <h3 id="log_prob"><code>log_prob</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L952-L964">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>log_prob( value, name='log_prob', **kwargs ) </code></pre> Log probability density/mass function. <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `value` </td> <td> `float` or `double` `Tensor`. </td> </tr><tr> <td> `name` </td> <td> Python `str` prepended to names of ops created by this function. </td> </tr><tr> <td> `**kwargs` </td> <td> Named arguments forwarded to subclass implementation. 
</td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `log_prob` </td> <td> a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. </td> </tr> </table> <h3 id="log_survival_function"><code>log_survival_function</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L1075-L1099">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>log_survival_function( value, name='log_survival_function', **kwargs ) </code></pre> Log survival function. Given random variable `X`, the survival function is defined: ```none log_survival_function(x) = Log[ P[X > x] ] = Log[ 1 - P[X <= x] ] = Log[ 1 - cdf(x) ] ``` Typically, different numerical approximations can be used for the log survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`. <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `value` </td> <td> `float` or `double` `Tensor`. </td> </tr><tr> <td> `name` </td> <td> Python `str` prepended to names of ops created by this function. </td> </tr><tr> <td> `**kwargs` </td> <td> Named arguments forwarded to subclass implementation. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr class="alt"> <td colspan="2"> `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. </td> </tr> </table> <h3 id="mean"><code>mean</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L1153-L1156">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>mean( name='mean', **kwargs ) </code></pre> Mean. <h3 id="mode"><code>mode</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L1310-L1313">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>mode( name='mode', **kwargs ) </code></pre> Mode. <h3 id="param_shapes"><code>param_shapes</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L558-L577">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>@classmethod</code> <code>param_shapes( sample_shape, name='DistributionParamShapes' ) </code></pre> Shapes of parameters given the desired shape of a call to `sample()`. This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Subclasses should override class method `_param_shapes`. <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `sample_shape` </td> <td> `Tensor` or python list/tuple. Desired shape of a call to `sample()`. </td> </tr><tr> <td> `name` </td> <td> name to prepend ops with. 
</td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr class="alt"> <td colspan="2"> `dict` of parameter name to `Tensor` shapes. </td> </tr> </table> <h3 id="param_static_shapes"><code>param_static_shapes</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L579-L616">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>@classmethod</code> <code>param_static_shapes( sample_shape ) </code></pre> param_shapes with static (i.e. `TensorShape`) shapes. This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Assumes that the sample's shape is known statically. Subclasses should override class method `_param_shapes` to return constant-valued tensors when constant values are fed. <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `sample_shape` </td> <td> `TensorShape` or python list/tuple. Desired shape of a call to `sample()`. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr class="alt"> <td colspan="2"> `dict` of parameter name to `TensorShape`. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Raises</th></tr> <tr> <td> `ValueError` </td> <td> if `sample_shape` is a `TensorShape` and is not fully defined. </td> </tr> </table> <h3 id="prob"><code>prob</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L977-L989">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>prob( value, name='prob', **kwargs ) </code></pre> Probability density/mass function. <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `value` </td> <td> `float` or `double` `Tensor`. </td> </tr><tr> <td> `name` </td> <td> Python `str` prepended to names of ops created by this function. </td> </tr><tr> <td> `**kwargs` </td> <td> Named arguments forwarded to subclass implementation. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `prob` </td> <td> a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. </td> </tr> </table> <h3 id="quantile"><code>quantile</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L1175-L1193">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>quantile( value, name='quantile', **kwargs ) </code></pre> Quantile function. Aka 'inverse cdf' or 'percent point function'. 
Given random variable `X` and `p in [0, 1]`, the `quantile` is: ```none quantile(p) := x such that P[X <= x] == p ``` <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `value` </td> <td> `float` or `double` `Tensor`. </td> </tr><tr> <td> `name` </td> <td> Python `str` prepended to names of ops created by this function. </td> </tr><tr> <td> `**kwargs` </td> <td> Named arguments forwarded to subclass implementation. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `quantile` </td> <td> a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. </td> </tr> </table> <h3 id="sample"><code>sample</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L924-L939">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>sample( sample_shape=(), seed=None, name='sample', **kwargs ) </code></pre> Generate samples of the specified shape. Note that a call to `sample()` without arguments will generate a single sample. <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `sample_shape` </td> <td> 0D or 1D `int32` `Tensor`. Shape of the generated samples. </td> </tr><tr> <td> `seed` </td> <td> Python integer or <a href="../../tfp/util/SeedStream"><code>tfp.util.SeedStream</code></a> instance, for seeding PRNG. </td> </tr><tr> <td> `name` </td> <td> name to give to the op. </td> </tr><tr> <td> `**kwargs` </td> <td> Named arguments forwarded to subclass implementation. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `samples` </td> <td> a `Tensor` with prepended dimensions `sample_shape`. </td> </tr> </table> <h3 id="stddev"><code>stddev</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L1232-L1260">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>stddev( name='stddev', **kwargs ) </code></pre> Standard deviation. Standard deviation is defined as, ```none stddev = E[(X - E[X])**2]**0.5 ``` where `X` is the random variable associated with this distribution, `E` denotes expectation, and `stddev.shape = batch_shape + event_shape`. <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `name` </td> <td> Python `str` prepended to names of ops created by this function. </td> </tr><tr> <td> `**kwargs` </td> <td> Named arguments forwarded to subclass implementation. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `stddev` </td> <td> Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. 
</td> </tr> </table> <h3 id="survival_function"><code>survival_function</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L1118-L1138">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>survival_function( value, name='survival_function', **kwargs ) </code></pre> Survival function. Given random variable `X`, the survival function is defined: ```none survival_function(x) = P[X > x] = 1 - P[X <= x] = 1 - cdf(x). ``` <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `value` </td> <td> `float` or `double` `Tensor`. </td> </tr><tr> <td> `name` </td> <td> Python `str` prepended to names of ops created by this function. </td> </tr><tr> <td> `**kwargs` </td> <td> Named arguments forwarded to subclass implementation. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr class="alt"> <td colspan="2"> `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`. </td> </tr> </table> <h3 id="variance"><code>variance</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L1199-L1226">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>variance( name='variance', **kwargs ) </code></pre> Variance. Variance is defined as, ```none Var = E[(X - E[X])**2] ``` where `X` is the random variable associated with this distribution, `E` denotes expectation, and `Var.shape = batch_shape + event_shape`. <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `name` </td> <td> Python `str` prepended to names of ops created by this function. </td> </tr><tr> <td> `**kwargs` </td> <td> Named arguments forwarded to subclass implementation. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `variance` </td> <td> Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`. </td> </tr> </table> <h3 id="with_name_scope"><code>with_name_scope</code></h3> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>@classmethod</code> <code>with_name_scope( method ) </code></pre> Decorator to automatically enter the module name scope. 
<pre class="devsite-click-to-copy prettyprint lang-py"> <code class="devsite-terminal" data-terminal-prefix="&gt;&gt;&gt;">class MyModule(tf.Module):</code> <code class="devsite-terminal" data-terminal-prefix="..."> @tf.Module.with_name_scope</code> <code class="devsite-terminal" data-terminal-prefix="..."> def __call__(self, x):</code> <code class="devsite-terminal" data-terminal-prefix="..."> if not hasattr(self, &#x27;w&#x27;):</code> <code class="devsite-terminal" data-terminal-prefix="..."> self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))</code> <code class="devsite-terminal" data-terminal-prefix="..."> return tf.matmul(x, self.w)</code> </pre> Using the above module would produce <a href="https://www.tensorflow.org/api_docs/python/tf/Variable"><code>tf.Variable</code></a>s and <a href="https://www.tensorflow.org/api_docs/python/tf/Tensor"><code>tf.Tensor</code></a>s whose names included the module name: <pre class="devsite-click-to-copy prettyprint lang-py"> <code class="devsite-terminal" data-terminal-prefix="&gt;&gt;&gt;">mod = MyModule()</code> <code class="devsite-terminal" data-terminal-prefix="&gt;&gt;&gt;">mod(tf.ones([1, 2]))</code> <code class="no-select nocode">&lt;tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)&gt;</code> <code class="devsite-terminal" data-terminal-prefix="&gt;&gt;&gt;">mod.w</code> <code class="no-select nocode">&lt;tf.Variable &#x27;my_module/Variable:0&#x27; shape=(2, 3) dtype=float32,</code> <code class="no-select nocode">numpy=..., dtype=float32)&gt;</code> </pre> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `method` </td> <td> The method to wrap. </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr class="alt"> <td colspan="2"> The original method wrapped such that it enters the module's name scope. </td> </tr> </table> <h3 id="__getitem__"><code>__getitem__</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/transformed_distribution.py#L288-L305">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>__getitem__( slices ) </code></pre> Slices the batch axes of this distribution, returning a new instance. ```python b = tfd.Bernoulli(logits=tf.zeros([3, 5, 7, 9])) b.batch_shape # => [3, 5, 7, 9] b2 = b[:, tf.newaxis, ..., -2:, 1::2] b2.batch_shape # => [3, 1, 5, 2, 4] x = tf.random.normal([5, 3, 2, 2]) cov = tf.matmul(x, x, transpose_b=True) chol = tf.cholesky(cov) loc = tf.random.normal([4, 1, 3, 1]) mvn = tfd.MultivariateNormalTriL(loc, chol) mvn.batch_shape # => [4, 5, 3] mvn.event_shape # => [2] mvn2 = mvn[:, 3:, ..., ::-1, tf.newaxis] mvn2.batch_shape # => [4, 2, 3, 1] mvn2.event_shape # => [2] ``` <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Args</th></tr> <tr> <td> `slices` </td> <td> slices from the [] operator </td> </tr> </table> <!-- Tabular view --> <table class="responsive fixed orange"> <colgroup><col width="214px"><col></colgroup> <tr><th colspan="2">Returns</th></tr> <tr> <td> `dist` </td> <td> A new `tfd.Distribution` instance with sliced parameters. 
</td> </tr> </table> <h3 id="__iter__"><code>__iter__</code></h3> <a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.11.1/tensorflow_probability/python/distributions/distribution.py#L701-L702">View source</a> <pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link"> <code>__iter__() </code></pre>
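
A minimal end-to-end sketch exercising several of the methods documented above (`sample`, `log_prob`, `cdf`) and checking the Exponential special case from the Mathematical details section. The particular parameter values are illustrative only, and the snippet assumes `tensorflow` and `tensorflow_probability` are importable.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# A 2-batch of Weibull distributions (illustrative parameter values).
dist = tfd.Weibull(concentration=[1.5, 3.], scale=[2., 4.])

samples = dist.sample(5, seed=42)   # shape: [5, 2]
log_probs = dist.log_prob(samples)  # shape: [5, 2]
cdf_at_one = dist.cdf(1.)           # shape: [2]

# Special case noted above: Weibull(concentration=1., scale=1. / rate)
# coincides with Exponential(rate), so their log-densities should match.
rate = 2.
weibull = tfd.Weibull(concentration=1., scale=1. / rate)
exponential = tfd.Exponential(rate=rate)
x = tf.constant([0.5, 1., 2.])
tf.debugging.assert_near(weibull.log_prob(x), exponential.log_prob(x))
```

The batch semantics mirror the 3-batch example in the Examples section: each method broadcasts its result over the batch shape `[2]`.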