Exponentiated Sine Squared Kernel.

Inherits From: `PositiveSemidefiniteKernel`

```python
tfp.math.psd_kernels.ExpSinSquared(
    amplitude=None, length_scale=None, period=None, feature_ndims=1,
    validate_args=False, name='ExpSinSquared'
)
```
Also known as the "Periodic" kernel, this kernel, when parameterizing a Gaussian Process, results in random functions that are periodic almost everywhere (with the same period for every dimension).
```none
k(x, y) = amplitude**2 * exp(
  -2 / length_scale**2 * sum_k sin(pi * |x_k - y_k| / period)**2)
```

This kernel acts over the space `S = R^(D1 x D2 x ... x Dd)`.
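As a minimal usage sketch (the parameter values here are arbitrary, chosen only for illustration):

```python
import numpy as np
import tensorflow_probability as tfp

# Periodic kernel over scalar features (feature_ndims=1 by default).
kernel = tfp.math.psd_kernels.ExpSinSquared(
    amplitude=1., length_scale=0.5, period=2.)

# Five 1-D feature vectors.
x = np.linspace(0., 4., 5, dtype=np.float32)[..., np.newaxis]
kernel.matrix(x, x).shape
# ==> [5, 5]
```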
| Args | |
|---|---|
| `amplitude` | Positive floating point `Tensor` that controls the maximum value of the kernel. Must be broadcastable with `period`, `length_scale` and inputs to `apply` and `matrix` methods. A value of `None` is treated like 1. |
| `length_scale` | Positive floating point `Tensor` that controls how sharp or wide the kernel shape is. This provides a characteristic "unit" of length against which \|x - y\| can be compared for scale. Must be broadcastable with `amplitude`, `period` and inputs to `apply` and `matrix` methods. A value of `None` is treated like 1. |
| `period` | Positive floating point `Tensor` that controls the period of the kernel. Must be broadcastable with `amplitude`, `length_scale` and inputs to `apply` and `matrix` methods. A value of `None` is treated like 1. |
| `feature_ndims` | Python `int` number of rightmost dims to include in kernel computation. |
| `validate_args` | If `True`, parameters are checked for validity despite possibly degrading runtime performance. |
| `name` | Python `str` name prefixed to Ops created by this class. |
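Since the parameters need only be broadcastable with one another, batched kernels follow directly; a short sketch (values illustrative):

```python
import tensorflow_probability as tfp

# Two amplitudes broadcast against a shared scalar length_scale and
# period, giving a batch of two kernels.
batch_kernel = tfp.math.psd_kernels.ExpSinSquared(
    amplitude=[1., 2.], length_scale=0.5, period=3.)
batch_kernel.batch_shape
# ==> [2]
```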
| Attributes | |
|---|---|
| `amplitude` | Amplitude parameter. |
| `batch_shape` | The `batch_shape` property of a `PositiveSemidefiniteKernel`. This property describes the fully broadcast shape of all kernel parameters. For example, the `batch_shape` of an `ExponentiatedQuadratic` kernel, which is parameterized by an `amplitude` and a `length_scale`, is derived from broadcasting the shapes of `amplitude` and `length_scale` together. Note that this property defers to the private `_batch_shape` method, which concrete implementation sub-classes are obliged to provide. |
| `dtype` | `DType` over which the kernel operates. |
| `feature_ndims` | The number of feature dimensions. Kernel functions generally act on pairs of inputs from some space like `R^(d1 x ... x dD)`, or, in words: rank-`D` real tensors of shape `[d1, ..., dD]`. |
| `length_scale` | Length scale parameter. |
| `name` | Name prepended to all ops created by this class. |
| `name_scope` | Returns a `tf.name_scope` instance for this class. |
| `period` | Period parameter. |
| `submodules` | Sequence of all sub-modules. Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on). |
| `trainable_variables` | Sequence of trainable variables owned by this module and its submodules. |
| `validate_args` | Python `bool` indicating possibly expensive checks are enabled. |
| `variables` | Sequence of variables owned by this module and its submodules. |
## Methods

### apply

```python
apply(
    x1, x2, example_ndims=0, name='apply'
)
```

Apply the kernel function to pairs of inputs.
| Args | |
|---|---|
| `x1` | `Tensor` input to the kernel, of shape `B1 + E1 + F`, where `B1` and `E1` may be empty (ie, no batch/example dims, resp.) and `F` (the feature shape) must have rank equal to the kernel's `feature_ndims` property. Batch shape must broadcast with the batch shape of `x2` and with the kernel's batch shape. Example shape must broadcast with example shape of `x2`. `x1` and `x2` must have the same number of example dims (ie, same rank). |
| `x2` | `Tensor` input to the kernel, of shape `B2 + E2 + F`, where `B2` and `E2` may be empty (ie, no batch/example dims, resp.) and `F` (the feature shape) must have rank equal to the kernel's `feature_ndims` property. Batch shape must broadcast with the batch shape of `x1` and with the kernel's batch shape. Example shape must broadcast with example shape of `x1`. `x1` and `x2` must have the same number of example dims (ie, same rank). |
| `example_ndims` | A python integer, the number of example dims in the inputs. In essence, this parameter controls how broadcasting of the kernel's batch shape with input batch shapes works. The kernel batch shape will be broadcast against everything to the left of the combined example and feature dimensions in the input shapes. |
| `name` | name to give to the op |

| Returns |
|---|
| `Tensor` containing the results of applying the kernel function to inputs `x1` and `x2`. If the kernel parameters' batch shape is `Bk` then the shape of the `Tensor` resulting from this method call is `broadcast(Bk, B1, B2) + broadcast(E1, E2)`. |
Given an index set `S`, a kernel function is mathematically defined as a real- or complex-valued function on `S` satisfying the positive semi-definiteness constraint:

```none
sum_i sum_j (c[i]*) c[j] k(x[i], x[j]) >= 0
```

for any finite collections `{x[1], ..., x[N]}` in `S` and `{c[1], ..., c[N]}` in the reals (or the complex plane). `'*'` is the complex conjugate, in the complex case.

This method most closely resembles the function described in the mathematical definition of a kernel. Given a `PositiveSemidefiniteKernel` `k` with scalar parameters and inputs `x` and `y` in `S`, `apply(x, y)` yields a single scalar value.
#### Examples

```python
import numpy as np
import tensorflow_probability as tfp

# Suppose `SomeKernel` acts on vectors (rank-1 tensors)
scalar_kernel = tfp.math.psd_kernels.SomeKernel(param=.5)
scalar_kernel.batch_shape
# ==> []

# `x` and `y` are batches of five 3-D vectors:
x = np.ones([5, 3], np.float32)
y = np.ones([5, 3], np.float32)
scalar_kernel.apply(x, y).shape
# ==> [5]
```

The above output is the result of vectorized computation of the five values

```none
[k(x[0], y[0]), k(x[1], y[1]), ..., k(x[4], y[4])]
```
Now we can consider a kernel with batched parameters:

```python
batch_kernel = tfp.math.psd_kernels.SomeKernel(param=[.2, .5])
batch_kernel.batch_shape
# ==> [2]
batch_kernel.apply(x, y).shape
# ==> Error! [2] and [5] can't broadcast.
```

The parameter batch shape of `[2]` and the input batch shape of `[5]` can't be broadcast together. We can fix this in either of two ways:

- Give the parameter a shape of `[2, 1]` which will correctly broadcast with `[5]` to yield `[2, 5]`:

  ```python
  batch_kernel = tfp.math.psd_kernels.SomeKernel(
      param=[[.2], [.5]])
  batch_kernel.batch_shape
  # ==> [2, 1]
  batch_kernel.apply(x, y).shape
  # ==> [2, 5]
  ```

- By specifying `example_ndims`, which tells the kernel to treat the `5` in the input shape as part of the "example shape", and "pushing" the kernel batch shape to the left:

  ```python
  batch_kernel = tfp.math.psd_kernels.SomeKernel(param=[.2, .5])
  batch_kernel.batch_shape
  # ==> [2]
  batch_kernel.apply(x, y, example_ndims=1).shape
  # ==> [2, 5]
  ```
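The shape semantics above carry over to the concrete `ExpSinSquared` kernel. As an additional sketch (values illustrative), inputs exactly one period apart attain the kernel's maximum value of `amplitude**2`:

```python
import numpy as np
import tensorflow_probability as tfp

kernel = tfp.math.psd_kernels.ExpSinSquared(
    amplitude=1., length_scale=1., period=2.)

# Three pairs of scalar features, each pair separated by one full period.
x = np.zeros([3, 1], np.float32)
y = np.full([3, 1], 2., np.float32)
kernel.apply(x, y)
# ==> [1., 1., 1.]  (sin(pi * |x - y| / period) == 0, so k == amplitude**2)
```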
<h3 id="batch_shape_tensor"><code>batch_shape_tensor</code></h3>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.12.1/tensorflow_probability/python/math/psd_kernels/positive_semidefinite_kernel.py#L306-L318">View source</a>
<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>batch_shape_tensor()
</code></pre>
The batch_shape property of a PositiveSemidefiniteKernel as a `Tensor`.
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Returns</th></tr>
<tr class="alt">
<td colspan="2">
`Tensor` which evaluates to a vector of integers which are the
fully-broadcast shapes of the kernel parameters.
</td>
</tr>
</table>
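For example (a sketch with illustrative values), a kernel with a batched `amplitude` parameter:

```python
import tensorflow_probability as tfp

kernel = tfp.math.psd_kernels.ExpSinSquared(
    amplitude=[1., 2.], length_scale=0.5, period=3.)
kernel.batch_shape_tensor()
# ==> [2]
```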
<h3 id="matrix"><code>matrix</code></h3>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/v0.12.1/tensorflow_probability/python/math/psd_kernels/positive_semidefinite_kernel.py#L506-L671">View source</a>
<pre class="devsite-click-to-copy prettyprint lang-py tfo-signature-link">
<code>matrix(
x1, x2, name='matrix'
)
</code></pre>
Construct (batched) matrices from (batches of) collections of inputs.
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Args</th></tr>
<tr>
<td>
`x1`
</td>
<td>
`Tensor` input to the first positional parameter of the kernel, of
shape `B1 + [e1] + F`, where `B1` may be empty (ie, no batch dims,
resp.), `e1` is a single integer (ie, `x1` has example ndims exactly 1),
and `F` (the feature shape) must have rank equal to the kernel's
`feature_ndims` property. Batch shape must broadcast with the batch
shape of `x2` and with the kernel's batch shape.
</td>
</tr><tr>
<td>
`x2`
</td>
<td>
`Tensor` input to the second positional parameter of the kernel,
shape `B2 + [e2] + F`, where `B2` may be empty (ie, no batch dims,
resp.), `e2` is a single integer (ie, `x2` has example ndims exactly 1),
and `F` (the feature shape) must have rank equal to the kernel's
`feature_ndims` property. Batch shape must broadcast with the batch
shape of `x1` and with the kernel's batch shape.
</td>
</tr><tr>
<td>
`name`
</td>
<td>
name to give to the op
</td>
</tr>
</table>
<!-- Tabular view -->
<table class="responsive fixed orange">
<colgroup><col width="214px"><col></colgroup>
<tr><th colspan="2">Returns</th></tr>
<tr class="alt">
<td colspan="2">
`Tensor` containing the matrix (possibly batched) of kernel applications
to pairs from inputs `x1` and `x2`. If the kernel parameters' batch shape
is `Bk` then the shape of the `Tensor` resulting from this method call is
`broadcast(Bk, B1, B2) + [e1, e2]` (note this differs from `apply`: the
example dimensions are concatenated, whereas in `apply` the example dims
are broadcast together).
</td>
</tr>
</table>
Given inputs `x1` and `x2` of shapes

```none
[b1, ..., bB, e1, f1, ..., fF]
```

and

```none
[c1, ..., cC, e2, f1, ..., fF]
```

this method computes the batch of `e1 x e2` matrices resulting from applying the kernel function to all pairs of inputs from `x1` and `x2`. The shape of the batch of matrices is the result of broadcasting the batch shapes of `x1`, `x2`, and the kernel parameters (see examples below). As such, it's required that these shapes all be broadcast compatible. However, the kernel parameter batch shapes need not broadcast against the 'example shapes' (`e1` and `e2` above).
When the two inputs are the (batches of) identical collections, the resulting matrix is the so-called Gram (or Gramian) matrix (https://en.wikipedia.org/wiki/Gramian_matrix).
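For instance, a quick sketch of a Gram matrix with `ExpSinSquared` (values illustrative):

```python
import numpy as np
import tensorflow_probability as tfp

kernel = tfp.math.psd_kernels.ExpSinSquared(
    amplitude=1., length_scale=0.5, period=2.)
x = np.random.uniform(size=[4, 1]).astype(np.float32)

# Gram matrix of `x` with itself: symmetric, positive semidefinite, and
# with amplitude**2 == 1 on the diagonal since k(x, x) == amplitude**2.
gram = kernel.matrix(x, x)
gram.shape
# ==> [4, 4]
```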
#### Examples

First, consider a kernel with a single scalar parameter.

```python
import numpy as np
import tensorflow_probability as tfp

scalar_kernel = tfp.math.psd_kernels.SomeKernel(param=.5)
scalar_kernel.batch_shape
# ==> []

# Our inputs are two lists of 3-D vectors
x = np.ones([5, 3], np.float32)
y = np.ones([4, 3], np.float32)
scalar_kernel.matrix(x, y).shape
# ==> [5, 4]
```

The result comes from applying the kernel to the entries in `x` and `y` pairwise, across all pairs:

```none
| k(x[0], y[0])  k(x[0], y[1])  ...  k(x[0], y[3]) |
| k(x[1], y[0])  k(x[1], y[1])  ...  k(x[1], y[3]) |
|      ...            ...                ...       |
| k(x[4], y[0])  k(x[4], y[1])  ...  k(x[4], y[3]) |
```
Now consider a kernel with batched parameters with the same inputs:

```python
batch_kernel = tfp.math.psd_kernels.SomeKernel(param=[1., .5])
batch_kernel.batch_shape
# ==> [2]
batch_kernel.matrix(x, y).shape
# ==> [2, 5, 4]
```

This results in a batch of 2 matrices, one computed from the kernel with `param = 1.` and the other with `param = .5`.
We also support batching of the inputs. First, let's look at that with the scalar kernel again.

```python
# Batch of 10 lists of 5 vectors of dimension 3
x = np.ones([10, 5, 3], np.float32)

# Batch of 10 lists of 4 vectors of dimension 3
y = np.ones([10, 4, 3], np.float32)

scalar_kernel.matrix(x, y).shape
# ==> [10, 5, 4]
```

The result is a batch of 10 matrices built from the batch of 10 lists of input vectors. These batch shapes have to be broadcastable. The following will not work:

```python
x = np.ones([10, 5, 3], np.float32)
y = np.ones([20, 4, 3], np.float32)
scalar_kernel.matrix(x, y).shape
# ==> Error! [10] and [20] can't broadcast.
```
Now let's consider batches of inputs in conjunction with batches of kernel parameters. We require that the input batch shapes be broadcastable with the kernel parameter batch shapes, otherwise we get an error:

```python
x = np.ones([10, 5, 3], np.float32)
y = np.ones([10, 4, 3], np.float32)

batch_kernel = tfp.math.psd_kernels.SomeKernel(param=[1., .5])
batch_kernel.batch_shape
# ==> [2]
batch_kernel.matrix(x, y).shape
# ==> Error! [2] and [10] can't broadcast.
```
The fix is to make the kernel parameter shape broadcastable with `[10]` (or reshape the inputs to be broadcastable!):

```python
x = np.ones([10, 5, 3], np.float32)
y = np.ones([10, 4, 3], np.float32)

batch_kernel = tfp.math.psd_kernels.SomeKernel(
    param=[[1.], [.5]])
batch_kernel.batch_shape
# ==> [2, 1]
batch_kernel.matrix(x, y).shape
# ==> [2, 10, 5, 4]

# Or, make the inputs broadcastable:
x = np.ones([10, 1, 5, 3], np.float32)
y = np.ones([10, 1, 4, 3], np.float32)

batch_kernel = tfp.math.psd_kernels.SomeKernel(
    param=[1., .5])
batch_kernel.batch_shape
# ==> [2]
batch_kernel.matrix(x, y).shape
# ==> [10, 2, 5, 4]
```

Here, we have the result of applying the kernel, with 2 different parameters, to each of a batch of 10 pairs of input lists.
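The same pattern applies verbatim to the concrete `ExpSinSquared` kernel, whose parameters play the role of `param` above; a sketch (values illustrative):

```python
import numpy as np
import tensorflow_probability as tfp

# A [2, 1]-shaped length_scale broadcasts against a batch of 10 input
# collections, yielding a [2, 10] batch of 5x4 matrices.
batch_kernel = tfp.math.psd_kernels.ExpSinSquared(
    amplitude=1., length_scale=[[0.5], [2.]], period=3.)
x = np.ones([10, 5, 3], np.float32)
y = np.ones([10, 4, 3], np.float32)
batch_kernel.matrix(x, y).shape
# ==> [2, 10, 5, 4]
```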
### tensor

```python
tensor(
    x1, x2, x1_example_ndims, x2_example_ndims, name='tensor'
)
```

Construct (batched) tensors from (batches of) collections of inputs.
| Args | |
|---|---|
| `x1` | `Tensor` input to the first positional parameter of the kernel, of shape `B1 + E1 + F`, where `B1` and `E1` are arbitrary shapes which may be empty (ie, no batch/example dims, resp.), and `F` (the feature shape) must have rank equal to the kernel's `feature_ndims` property. Batch shape must broadcast with the batch shape of `x2` and with the kernel's batch shape. |
| `x2` | `Tensor` input to the second positional parameter of the kernel, shape `B2 + E2 + F`, where `B2` and `E2` are arbitrary shapes which may be empty (ie, no batch/example dims, resp.), and `F` (the feature shape) must have rank equal to the kernel's `feature_ndims` property. Batch shape must broadcast with the batch shape of `x1` and with the kernel's batch shape. |
| `x1_example_ndims` | A python integer greater than or equal to 0, the number of example dims in the first input. This affects both the alignment of batch shapes and the shape of the final output of the function. Everything left of the feature shape and the example dims in `x1` is considered "batch shape", and must broadcast as specified above. |
| `x2_example_ndims` | A python integer greater than or equal to 0, the number of example dims in the second input. This affects both the alignment of batch shapes and the shape of the final output of the function. Everything left of the feature shape and the example dims in `x2` is considered "batch shape", and must broadcast as specified above. |
| `name` | name to give to the op |

| Returns |
|---|
| `Tensor` containing (possibly batched) kernel applications to pairs from inputs `x1` and `x2`. If the kernel parameters' batch shape is `Bk` then the shape of the `Tensor` resulting from this method call is `broadcast(Bk, B1, B2) + E1 + E2`. Note this differs from `apply`: the example dimensions are concatenated, whereas in `apply` the example dims are broadcast together. It also differs from `matrix`: the example shapes are arbitrary here, and the result accrues a rank equal to the sum of the ranks of the input example shapes. |
#### Examples

First, consider a kernel with a single scalar parameter.

```python
import numpy as np
import tensorflow_probability as tfp

scalar_kernel = tfp.math.psd_kernels.SomeKernel(param=.5)
scalar_kernel.batch_shape
# ==> []

# Our inputs are two rank-2 collections of 3-D vectors
x = np.ones([5, 6, 3], np.float32)
y = np.ones([7, 8, 3], np.float32)
scalar_kernel.tensor(x, y, x1_example_ndims=2, x2_example_ndims=2).shape
# ==> [5, 6, 7, 8]

# Empty example shapes work too!
x = np.ones([3], np.float32)
y = np.ones([5, 3], np.float32)
scalar_kernel.tensor(x, y, x1_example_ndims=0, x2_example_ndims=1).shape
# ==> [5]
```
The last result comes from applying the kernel to the single vector `x` paired with each of the 5 vectors in `y`:

```none
[ k(x, y[0]), k(x, y[1]), ..., k(x, y[4]) ]
```
Now consider a kernel with batched parameters.

```python
batch_kernel = tfp.math.psd_kernels.SomeKernel(param=[1., .5])
batch_kernel.batch_shape
# ==> [2]

# Inputs are two rank-2 collections of 3-D vectors
x = np.ones([5, 6, 3], np.float32)
y = np.ones([7, 8, 3], np.float32)
batch_kernel.tensor(x, y, x1_example_ndims=2, x2_example_ndims=2).shape
# ==> [2, 5, 6, 7, 8]
```
We also support batching of the inputs. First, let's look at that with the scalar kernel again.

```python
# Batch of 10 lists of 5x6 collections of dimension 3
x = np.ones([10, 5, 6, 3], np.float32)

# Batch of 10 lists of 7x8 collections of dimension 3
y = np.ones([10, 7, 8, 3], np.float32)

scalar_kernel.tensor(x, y, x1_example_ndims=2, x2_example_ndims=2).shape
# ==> [10, 5, 6, 7, 8]
```

The result is a batch of 10 tensors built from the batch of 10 rank-2 collections of input vectors. The batch shapes have to be broadcastable. The following will not work:

```python
x = np.ones([10, 5, 3], np.float32)
y = np.ones([20, 4, 3], np.float32)
scalar_kernel.tensor(x, y, x1_example_ndims=1, x2_example_ndims=1).shape
# ==> Error! [10] and [20] can't broadcast.
```
Now let's consider batches of inputs in conjunction with batches of kernel parameters. We require that the input batch shapes be broadcastable with the kernel parameter batch shapes, otherwise we get an error:

```python
x = np.ones([10, 5, 6, 3], np.float32)
y = np.ones([10, 7, 8, 3], np.float32)

batch_kernel = tfp.math.psd_kernels.SomeKernel(param=[1., .5])
batch_kernel.batch_shape
# ==> [2]
batch_kernel.tensor(x, y, x1_example_ndims=2, x2_example_ndims=2).shape
# ==> Error! [2] and [10] can't broadcast.
```
The fix is to make the kernel parameter shape broadcastable with `[10]` (or reshape the inputs to be broadcastable!):

```python
x = np.ones([10, 5, 6, 3], np.float32)
y = np.ones([10, 7, 8, 3], np.float32)

batch_kernel = tfp.math.psd_kernels.SomeKernel(
    param=[[1.], [.5]])
batch_kernel.batch_shape
# ==> [2, 1]
batch_kernel.tensor(x, y, x1_example_ndims=2, x2_example_ndims=2).shape
# ==> [2, 10, 5, 6, 7, 8]

# Or, make the inputs broadcastable:
x = np.ones([10, 1, 5, 6, 3], np.float32)
y = np.ones([10, 1, 7, 8, 3], np.float32)

batch_kernel = tfp.math.psd_kernels.SomeKernel(
    param=[1., .5])
batch_kernel.batch_shape
# ==> [2]
batch_kernel.tensor(x, y, x1_example_ndims=2, x2_example_ndims=2).shape
# ==> [10, 2, 5, 6, 7, 8]
```
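The concrete `ExpSinSquared` kernel follows the same rules; a brief sketch (values illustrative):

```python
import numpy as np
import tensorflow_probability as tfp

kernel = tfp.math.psd_kernels.ExpSinSquared(
    amplitude=1., length_scale=0.5, period=2.)

# Rank-2 example shapes [5, 6] and [7, 8] are concatenated in the result.
x = np.ones([5, 6, 3], np.float32)
y = np.ones([7, 8, 3], np.float32)
kernel.tensor(x, y, x1_example_ndims=2, x2_example_ndims=2).shape
# ==> [5, 6, 7, 8]
```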
### with_name_scope

```python
@classmethod
with_name_scope(
    method
)
```

Decorator to automatically enter the module name scope.

```python
class MyModule(tf.Module):

  @tf.Module.with_name_scope
  def __call__(self, x):
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
    return tf.matmul(x, self.w)
```

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

```python
mod = MyModule()
mod(tf.ones([1, 2]))
# ==> <tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
mod.w
# ==> <tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
#      numpy=..., dtype=float32)>
```
| Args | |
|---|---|
| `method` | The method to wrap. |

| Returns |
|---|
| The original method wrapped such that it enters the module's name scope. |
### `__add__`

```python
__add__(
    k
)
```

### `__mul__`

```python
__mul__(
    k
)
```
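These operators combine kernels pointwise, and since sums and products of positive semidefinite kernels are again positive semidefinite, the results are valid kernels. A minimal sketch (the `ExponentiatedQuadratic` factor here makes the product a locally-periodic kernel, a common modeling choice):

```python
import tensorflow_probability as tfp

periodic = tfp.math.psd_kernels.ExpSinSquared(
    amplitude=1., length_scale=0.5, period=2.)
smooth = tfp.math.psd_kernels.ExponentiatedQuadratic(length_scale=1.)

sum_kernel = periodic + smooth      # k(x, y) = periodic(x, y) + smooth(x, y)
product_kernel = periodic * smooth  # k(x, y) = periodic(x, y) * smooth(x, y)
```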