tf.random.stateless_gamma

Outputs deterministic pseudorandom values from a gamma distribution.

    tf.random.stateless_gamma(
        shape,
        seed,
        alpha,
        beta=None,
        dtype=tf.float32,
        name=None
    )

The generated values follow a gamma distribution with specified concentration
(alpha) and inverse scale (beta) parameters.
This is a stateless version of tf.random.gamma: if run twice with the same
seeds and shapes, it will produce the same pseudorandom numbers. The output is
consistent across multiple runs on the same hardware (and between CPU and
GPU), but may change between versions of TensorFlow or on non-CPU/GPU
hardware.
A slight difference exists in the interpretation of the shape parameter
between stateless_gamma and gamma: in gamma, the shape is always
prepended to the shape of the broadcast of alpha with beta; whereas in
stateless_gamma the shape parameter must always encompass the shapes of
each of alpha and beta (which must broadcast together to match the
trailing dimensions of shape).

Note: Because internal calculations are done using float64 and casting has
floor semantics, zero outcomes are manually mapped to the smallest possible
positive floating-point value, i.e., np.finfo(dtype).tiny. This means that
np.finfo(dtype).tiny occurs more frequently than it otherwise should. This
bias can only happen for small values of alpha (alpha << 1) or large values
of beta (beta >> 1).
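A short sketch contrasting the two shape conventions (seed values here are arbitrary): tf.random.gamma prepends its shape argument to the broadcast of alpha with beta, whereas tf.random.stateless_gamma expects the full output shape up front.

```python
import tensorflow as tf

alpha = tf.constant([[1.], [3.], [5.]])  # shape [3, 1]
beta = tf.constant([[3., 4.]])           # shape [1, 2]; broadcasts with alpha to [3, 2]

# tf.random.gamma: `shape` is prepended to the broadcast shape [3, 2].
s1 = tf.random.gamma([10], alpha=alpha, beta=beta, seed=42)
assert s1.shape == [10, 3, 2]

# tf.random.stateless_gamma: `shape` must already encompass alpha and beta,
# which broadcast together to match its trailing dimensions.
s2 = tf.random.stateless_gamma([10, 3, 2], seed=[12, 34], alpha=alpha, beta=beta)
assert s2.shape == [10, 3, 2]
```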
The samples are differentiable w.r.t. alpha and beta.
The derivatives are computed using the approach described in
(Figurnov et al., 2018).
Example:
    samples = tf.random.stateless_gamma([10, 2], seed=[12, 34], alpha=[0.5, 1.5])
    # samples has shape [10, 2], where each slice [:, 0] and [:, 1] represents
    # the samples drawn from each distribution

    samples = tf.random.stateless_gamma([7, 5, 2], seed=[12, 34], alpha=[.5, 1.5])
    # samples has shape [7, 5, 2], where each slice [:, :, 0] and [:, :, 1]
    # represents the 7x5 samples drawn from each of the two distributions

    alpha = tf.constant([[1.], [3.], [5.]])
    beta = tf.constant([[3., 4.]])
    samples = tf.random.stateless_gamma(
        [30, 3, 2], seed=[12, 34], alpha=alpha, beta=beta)
    # samples has shape [30, 3, 2], with 30 samples each of 3x2 distributions.

    with tf.GradientTape() as tape:
      tape.watch([alpha, beta])
      loss = tf.reduce_mean(tf.square(tf.random.stateless_gamma(
          [30, 3, 2], seed=[12, 34], alpha=alpha, beta=beta)))
    dloss_dalpha, dloss_dbeta = tape.gradient(loss, [alpha, beta])
    # unbiased stochastic derivatives of the loss function
    alpha.shape == dloss_dalpha.shape  # True
    beta.shape == dloss_dbeta.shape  # True
Args
shape
A 1-D integer Tensor or Python array. The shape of the output tensor.
seed
A shape [2] Tensor, the seed to the random number generator. Must have
dtype int32 or int64. (When using XLA, only int32 is allowed.)
alpha
Tensor. The concentration parameter of the gamma distribution. Must
be broadcastable with beta, and broadcastable with the rightmost
dimensions of shape.
beta
Tensor. The inverse scale parameter of the gamma distribution. Must be
broadcastable with alpha and broadcastable with the rightmost dimensions
of shape.
dtype
Floating point dtype of alpha, beta, and the output.
name
A name for the operation (optional).
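A brief sketch of the dtype and beta defaults, assuming (per the signature, where beta=None) that omitting beta samples a standard gamma with rate 1:

```python
import tensorflow as tf

# Omitting beta is assumed to default the inverse scale to 1; dtype controls
# the precision of the output samples.
s = tf.random.stateless_gamma([4], seed=[7, 8], alpha=0.5, dtype=tf.float64)
assert s.dtype == tf.float64
assert s.shape == [4]
```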
Returns
samples
A Tensor of the specified shape filled with random gamma values.
For each i, samples[..., i] is an independent draw from the gamma
distribution with concentration alpha[i] and inverse scale beta[i].
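Because beta is an inverse scale (rate), the expected value of each draw is alpha / beta. A quick statistical check (sample count and tolerance chosen arbitrarily for illustration):

```python
import tensorflow as tf

alpha, beta = 2.0, 4.0
s = tf.random.stateless_gamma([100000], seed=[1, 2], alpha=alpha, beta=beta)

# E[X] = alpha / beta = 0.5; the sample mean should be close to it.
mean = float(tf.reduce_mean(s))
assert abs(mean - alpha / beta) < 0.01
```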
Last updated 2023-10-06 UTC.