tf.keras.backend.batch_dot
Batchwise dot product.
tf.keras.backend.batch_dot(x, y, axes=None)
batch_dot is used to compute the dot product of x and y when x and y are data in batches, i.e. in a shape of (batch_size, :). batch_dot results in a tensor or variable with fewer dimensions than the input. If the number of dimensions is reduced to 1, we use expand_dims to make sure that ndim is at least 2.
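As a sketch of these semantics (using NumPy rather than the Keras backend, so the helper name here is illustrative, not part of the API), batch_dot can be modeled as a per-sample tensordot over the given axes, with the trailing axis restored when the result collapses to rank 1:

```python
import numpy as np

def batch_dot_sketch(x, y, axes):
    # For each sample b in the batch, contract x[b] along axes[0]-1
    # and y[b] along axes[1]-1 (axis 0 of x and y is the batch axis,
    # so per-sample axes shift down by one).
    out = np.stack([
        np.tensordot(x[b], y[b], axes=(axes[0] - 1, axes[1] - 1))
        for b in range(x.shape[0])
    ])
    # If the result collapsed to rank 1, add a trailing axis so ndim
    # is at least 2, mirroring the expand_dims behavior described above.
    if out.ndim == 1:
        out = out[:, None]
    return out

x = np.ones((100, 20))
y = np.ones((100, 30, 20))
print(batch_dot_sketch(x, y, axes=(1, 2)).shape)  # (100, 30)
```

Note how the batch dimension of y is consumed by the per-sample loop rather than contracted, which is why it never appears in the output shape.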
Arguments:
x: Keras tensor or variable with ndim >= 2.
y: Keras tensor or variable with ndim >= 2.
axes: Tuple or list of integers with target dimensions, or single integer. The sizes of x.shape[axes[0]] and y.shape[axes[1]] should be equal.
Returns:
A tensor with shape equal to the concatenation of x's shape (less the dimension that was summed over) and y's shape (less the batch dimension and the dimension that was summed over). If the final rank is 1, we reshape it to (batch_size, 1).
Examples:
x_batch = tf.keras.backend.ones(shape=(32, 20, 1))
y_batch = tf.keras.backend.ones(shape=(32, 30, 20))
xy_batch_dot = tf.keras.backend.batch_dot(x_batch, y_batch, axes=(1, 2))
tf.keras.backend.int_shape(xy_batch_dot)
(32, 1, 30)
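The same result can be re-derived without the Keras backend; this NumPy einsum sketch (illustrative only, not the library implementation) contracts axis 1 of x against axis 2 of y per batch element:

```python
import numpy as np

x_batch = np.ones((32, 20, 1))
y_batch = np.ones((32, 30, 20))

# einsum indices: b = batch, k = contracted axis (size 20),
# i = x's leftover axis (size 1), j = y's leftover axis (size 30).
xy = np.einsum('bki,bjk->bij', x_batch, y_batch)
print(xy.shape)  # (32, 1, 30)
```

Each output entry is the sum of 20 ones, so the tensor is filled with the value 20.0.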
Shape inference:
Let x's shape be (100, 20) and y's shape be (100, 30, 20). If axes is (1, 2), to find the output shape of the resultant tensor, loop through each dimension in x's shape and y's shape:

- x.shape[0] : 100 : append to output shape
- x.shape[1] : 20 : do not append to output shape, dimension 1 of x has been summed over (dot_axes[0] = 1)
- y.shape[0] : 100 : do not append to output shape, always ignore the first dimension of y
- y.shape[1] : 30 : append to output shape
- y.shape[2] : 20 : do not append to output shape, dimension 2 of y has been summed over (dot_axes[1] = 2)

output_shape = (100, 30)
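The walk-through above can be sketched as a small shape-inference helper (a sketch of the rule, not the library's own code; the function name is made up):

```python
def infer_batch_dot_shape(x_shape, y_shape, axes):
    out = []
    for i, dim in enumerate(x_shape):
        # Keep every dimension of x (including the batch dimension at
        # index 0) except the axis that is summed over.
        if i != axes[0]:
            out.append(dim)
    for i, dim in enumerate(y_shape):
        # Always drop y's batch dimension, and drop the summed axis.
        if i != 0 and i != axes[1]:
            out.append(dim)
    # If only the batch dimension remains, pad to rank 2, matching
    # the (batch_size, 1) reshape described in the Returns section.
    if len(out) == 1:
        out.append(1)
    return tuple(out)

print(infer_batch_dot_shape((100, 20), (100, 30, 20), (1, 2)))  # (100, 30)
```

Applying the same helper to the earlier example, infer_batch_dot_shape((32, 20, 1), (32, 30, 20), (1, 2)) gives (32, 1, 30), since x's leftover dimensions are appended before y's.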
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.