LogNormal(loc=log(mean) - log(2) / 2, scale=sqrt(log(2))) where mean = softplus(X @ weights).
Inherits From: ExponentialFamily
tfp.substrates.numpy.glm.LogNormalSoftplus(
    scale=np.sqrt(np.log(2.0)), name=None
)
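As a quick sanity check of this parameterization, plain NumPy (no TFP needed) confirms that the LogNormal with `loc = log(m) - log(2)/2` and `scale = sqrt(log(2))` has mean exactly `m = softplus(r)`; the linear-response values below are made up for illustration:

```python
import numpy as np

# If Y ~ LogNormal(loc, scale) then E[Y] = exp(loc + scale**2 / 2).
# With loc = log(m) - log(2)/2 and scale**2 = log(2), E[Y] = m.
r = np.array([-1.0, 0.5, 2.0])      # hypothetical linear responses X @ weights
m = np.log1p(np.exp(r))             # softplus(r)
loc = np.log(m) - np.log(2.0) / 2.0
scale = np.sqrt(np.log(2.0))
lognormal_mean = np.exp(loc + scale ** 2 / 2.0)
```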
| Args |  |
|---|---|
| `name` | Python `str` used as TF namescope for ops created by member functions. Default value: `None` (i.e., the subclass name). |
| Attributes |  |
|---|---|
| `name` |  |
| `trainable_variables` |  |
| `variables` |  |
Methods
as_distribution
View source
as_distribution(
    predicted_linear_response, name=None
)
Builds a mean parameterized TFP Distribution from linear response.
Example:
model = tfp.glm.Bernoulli()
r = tfp.glm.compute_predicted_linear_response(x, w)
yhat = model.as_distribution(r)
| Args |  |
|---|---|
| `predicted_linear_response` | `response`-shaped `Tensor` representing linear predictions based on new `model_coefficients`, i.e., `tfp.glm.compute_predicted_linear_response(model_matrix, model_coefficients, offset)`. |
| `name` | Python `str` used as TF namescope for ops created by member functions. Default value: `None` (i.e., 'as_distribution'). |
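For this family, the distribution returned by `as_distribution` can be approximated with `scipy.stats.lognorm`; this is a sketch under the parameterization stated at the top of this page, and the helper name `as_distribution_sketch` is hypothetical, not part of TFP:

```python
import numpy as np
from scipy import stats

def as_distribution_sketch(predicted_linear_response):
    # Hypothetical stand-in for model.as_distribution(r).
    # scipy's lognorm: s is the underlying normal's scale,
    # and its `scale` argument is exp(loc).
    r = np.asarray(predicted_linear_response, dtype=float)
    mean = np.log1p(np.exp(r))                  # softplus link
    loc = np.log(mean) - np.log(2.0) / 2.0
    return stats.lognorm(s=np.sqrt(np.log(2.0)), scale=np.exp(loc))

dist = as_distribution_sketch([0.3])
```

By construction, `dist.mean()` recovers `softplus(0.3)`, matching the mean parameterization.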
log_prob
View source
log_prob(
    response, predicted_linear_response, name=None
)
Computes D(param=mean(r)).log_prob(response) for linear response, r.
| Args |  |
|---|---|
| `response` | float-like `Tensor` representing observed ("actual") responses. |
| `predicted_linear_response` | float-like `Tensor` corresponding to `tf.linalg.matmul(model_matrix, weights)`. |
| `name` | Python `str` used as TF namescope for ops created by member functions. Default value: `None` (i.e., 'log_prob'). |

| Returns |  |
|---|---|
| `log_prob` | `Tensor` with shape and dtype of `predicted_linear_response` representing the distribution prescribed log-probability of the observed `response`s. |
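The quantity `D(param=mean(r)).log_prob(response)` can be sketched by writing the LogNormal log-density out directly under the parameterization stated above; `log_prob_sketch` below is a hypothetical helper, not TFP's implementation:

```python
import numpy as np

def log_prob_sketch(response, predicted_linear_response):
    # LogNormal log-density with loc = log(softplus(r)) - log(2)/2
    # and scale**2 = log(2):
    #   log p(y) = -log(y) - 0.5*log(2*pi*scale**2)
    #              - (log(y) - loc)**2 / (2*scale**2)
    y = np.asarray(response, dtype=float)
    r = np.asarray(predicted_linear_response, dtype=float)
    mean = np.log1p(np.exp(r))                  # softplus(r)
    loc = np.log(mean) - np.log(2.0) / 2.0
    scale2 = np.log(2.0)                        # scale**2
    return (-np.log(y)
            - 0.5 * np.log(2.0 * np.pi * scale2)
            - (np.log(y) - loc) ** 2 / (2.0 * scale2))
```

A numerical check that the implied density integrates to one over the positive reals is a cheap way to validate the formula.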
__call__
View source
__call__(
    predicted_linear_response, name=None
)
Computes mean(r), var(mean), d/dr mean(r) for linear response, r.
Here mean and var are the mean and variance of the sufficient statistic,
which may not be the same as the mean and variance of the random variable
itself.  If the distribution's density has the form
p_Y(y) = h(y) Exp[dot(theta, T(y)) - A]
where theta and A are constants and h and T are known functions,
then mean and var are the mean and variance of T(Y).  In practice,
often T(Y) := Y and in that case the distinction doesn't matter.
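Assuming the mean mapping `mean(r) = softplus(r)` with fixed `scale = sqrt(log(2))`, the three returned quantities can be re-derived by hand: the LogNormal variance `(exp(scale**2) - 1) * exp(2*loc + scale**2)` collapses to `mean**2`, and `d/dr softplus(r) = sigmoid(r)`. This is a sketch of that derivation, not TFP's code, and `call_sketch` is a hypothetical name:

```python
import numpy as np

def call_sketch(predicted_linear_response):
    # Hypothetical re-derivation of model(r) for this family.
    r = np.asarray(predicted_linear_response, dtype=float)
    mean = np.log1p(np.exp(r))            # softplus(r)
    variance = mean ** 2                  # (exp(log 2) - 1) * mean**2
    grad_mean = 1.0 / (1.0 + np.exp(-r))  # sigmoid(r) = d/dr softplus(r)
    return mean, variance, grad_mean
```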
| Args |  |
|---|---|
| `predicted_linear_response` | float-like `Tensor` corresponding to `tf.linalg.matmul(model_matrix, weights)`. |
| `name` | Python `str` used as TF namescope for ops created by member functions. Default value: `None` (i.e., 'call'). |

| Returns |  |
|---|---|
| `mean` | `Tensor` with shape and dtype of `predicted_linear_response` representing the distribution prescribed mean, given the prescribed linear-response to mean mapping. |
| `variance` | `Tensor` with shape and dtype of `predicted_linear_response` representing the distribution prescribed variance, given the prescribed linear-response to mean mapping. |
| `grad_mean` | `Tensor` with shape and dtype of `predicted_linear_response` representing the gradient of the mean with respect to the linear-response and given the prescribed linear-response to mean mapping. |