tfp.glm.Poisson

Poisson(rate=mean) where mean = exp(X @ weights).

Inherits From: ExponentialFamily
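
As a minimal illustrative sketch (the synthetic data and the pairing with tfp.glm.fit below are assumptions for demonstration, not part of this reference entry), the family is typically used with a GLM fitter so that observed counts are modeled as Poisson(rate=exp(X @ weights)):

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

# Synthetic design matrix and Poisson counts generated from known weights.
rng = np.random.default_rng(0)
x = rng.normal(size=(500, 3)).astype(np.float32)
true_w = np.array([0.4, -0.3, 0.2], dtype=np.float32)
y = rng.poisson(np.exp(x @ true_w)).astype(np.float32)

# Fit the GLM; returns coefficients, fitted linear response,
# a convergence flag, and the number of iterations.
w_hat, linear_response, is_converged, num_iter = tfp.glm.fit(
    model_matrix=tf.constant(x),
    response=tf.constant(y),
    model=tfp.glm.Poisson())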

Args
name Python str used as TF namescope for ops created by member functions. Default value: None (i.e., the subclass name).

Attributes
name Returns the name of this module as passed or determined in the ctor.

name_scope Returns a tf.name_scope instance for this class.
non_trainable_variables Sequence of non-trainable variables owned by this module and its submodules.
submodules Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

a = tf.Module()
b = tf.Module()
c = tf.Module()
a.b = b
b.c = c
list(a.submodules) == [b, c]
True
list(b.submodules) == [c]
True
list(c.submodules) == []
True

trainable_variables Sequence of trainable variables owned by this module and its submodules.

variables Sequence of variables owned by this module and its submodules.

Methods

as_distribution

Builds a mean parameterized TFP Distribution from linear response.

Example:

# Assumes x (model matrix) and w (model coefficients) are already-defined Tensors.
model = tfp.glm.Bernoulli()
r = tfp.glm.compute_predicted_linear_response(x, w)
yhat = model.as_distribution(r)

Args
predicted_linear_response response-shaped Tensor representing linear predictions based on new model_coefficients, i.e., tfp.glm.compute_predicted_linear_response(model_matrix, model_coefficients, offset).
name Python str used as TF namescope for ops created by member functions. Default value: None (i.e., 'log_prob').

Returns
model tfp.distributions.Distribution-like object with mean parameterized by predicted_linear_response.
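
The returned object supports the usual tfp.distributions.Distribution methods; for this Poisson family its mean is exp(predicted_linear_response). A short follow-up sketch, with r as in the example above and observed_counts an assumed Tensor of counts:

poisson = tfp.glm.Poisson()
yhat = poisson.as_distribution(r)        # Distribution-like with mean exp(r)
expected_counts = yhat.mean()            # elementwise exp(r)
per_example_ll = yhat.log_prob(observed_counts)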

log_prob

Computes D(param=mean(r)).log_prob(response) for linear response, r.

Args
response float-like Tensor representing observed ("actual") responses.
predicted_linear_response float-like Tensor corresponding to tf.linalg.matmul(model_matrix, weights).
name Python str used as TF namescope for ops created by member functions. Default value: None (i.e., 'log_prob').

Returns
log_prob Tensor with shape and dtype of predicted_linear_response representing the distribution-prescribed log-probability of the observed responses.
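
A minimal sketch of evaluating the Poisson log-likelihood of observed counts under given coefficients (the data and coefficient values here are made-up placeholders):

import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

model = tfp.glm.Poisson()
x = tf.constant(np.random.randn(100, 3), dtype=tf.float32)            # model matrix
w = tf.constant([0.5, -0.25, 1.0])                                     # coefficients
y = tf.constant(np.random.poisson(1.0, size=100).astype(np.float32))   # observed counts

r = tf.linalg.matvec(x, w)              # predicted linear response, shape [100]
per_example_lp = model.log_prob(y, r)   # elementwise Poisson log-likelihood
total_log_likelihood = tf.reduce_sum(per_example_lp)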

with_name_scope

Decorator to automatically enter the module name scope.

class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)

Using the above module would produce tf.Variables and tf.Tensors whose names included the module name:

mod = MyModule()
mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>

Args
method The method to wrap.

Returns
The original method wrapped such that it enters the module's name scope.

__call__

Computes mean(r), var(mean), d/dr mean(r) for linear response, r.

Here mean and var are the mean and variance of the sufficient statistic, which may not be the same as the mean and variance of the random variable itself. If the distribution's density has the form

p_Y(y) = h(y) Exp[dot(theta, T(y)) - A]

where theta and A are constants and h and T are known functions, then mean and var are the mean and variance of T(Y). In practice, often T(Y) := Y and in that case the distinction doesn't matter.
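
As a concrete instance, the Poisson pmf rate**y * exp(-rate) / y! can be rewritten as (1/y!) Exp[theta * y - exp(theta)] with theta = log(rate), so here T(y) = y, h(y) = 1/y!, and A = exp(theta). With the canonical log link, theta equals the linear response r, which gives mean(r) = var(r) = d/dr mean(r) = exp(r) for this family.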

Args
predicted_linear_response float-like Tensor corresponding to tf.linalg.matmul(model_matrix, weights).
name Python str used as TF namescope for ops created by member functions. Default value: None (i.e., 'call').

Returns
mean Tensor with shape and dtype of predicted_linear_response representing the distribution-prescribed mean, given the prescribed linear-response-to-mean mapping.
variance Tensor with shape and dtype of predicted_linear_response representing the distribution-prescribed variance, given the prescribed linear-response-to-mean mapping.
grad_mean Tensor with shape and dtype of predicted_linear_response representing the gradient of the mean with respect to the linear response, given the prescribed linear-response-to-mean mapping.
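
A brief illustration (values chosen arbitrarily): for the Poisson family with its canonical log link, all three returned Tensors coincide with exp(r).

import tensorflow as tf
import tensorflow_probability as tfp

model = tfp.glm.Poisson()
r = tf.constant([-1.0, 0.0, 2.0])         # predicted linear response
mean, variance, grad_mean = model(r)      # each equals exp(r) for this family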