tf.xla.experimental.jit_scope

Enable or disable JIT compilation of operators within the scope.

View source on GitHub: https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/python/compiler/xla/jit.py#L36-L156
Compat alias for migration: tf.compat.v1.xla.experimental.jit_scope

@contextlib.contextmanager
tf.xla.experimental.jit_scope(
    compile_ops=True, separate_compiled_gradients=False
)

Note: This is an experimental feature.

The compilation is a hint and only supported on a best-effort basis.
Example usage
with tf.xla.experimental.jit_scope():
  c = tf.matmul(a, b)  # compiled

with tf.xla.experimental.jit_scope(compile_ops=False):
  d = tf.matmul(a, c)  # not compiled

with tf.xla.experimental.jit_scope(
    compile_ops=lambda node_def: 'matmul' in node_def.op.lower()):
  e = tf.matmul(a, b) + d  # matmul is compiled, the addition is not.
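The snippet above assumes the tensors a and b already exist and that the graph is built in graph mode. A minimal, self-contained sketch under those assumptions (the input values are illustrative only):

import tensorflow as tf

# jit_scope raises RuntimeError under eager execution, so build a v1-style graph.
tf.compat.v1.disable_eager_execution()

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

with tf.xla.experimental.jit_scope():
  c = tf.matmul(a, b)  # compilation of this matmul is requested (best effort)

with tf.compat.v1.Session() as sess:
  print(sess.run(c))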
Example of separate_compiled_gradients:
# In the example below, the computations for f, g and h will all be compiled
# in separate scopes.
with tf.xla.experimental.jit_scope(
    separate_compiled_gradients=True):
  f = tf.matmul(a, b)
  g = tf.gradients([f], [a, b], name='mygrads1')
  h = tf.gradients([f], [a, b], name='mygrads2')
Ops outside the scope may be clustered and compiled together with ops inside a
scope that has compile_ops=True, while ops inside a scope with
compile_ops=False will never be compiled.
For example
# In the example below, x and loss may be clustered and compiled together,
# while y will not be compiled.
with tf.xla.experimental.jit_scope():
  x = tf.matmul(a, b)
with tf.xla.experimental.jit_scope(compile_ops=False):
  y = tf.matmul(c, d)
loss = x + y
If you want to compile only the ops inside a scope with compile_ops=True,
consider adding an outer jit_scope(compile_ops=False):
# In the example below, only x will be compiled.
with tf.xla.experimental.jit_scope(compile_ops=False):
  with tf.xla.experimental.jit_scope():
    x = tf.matmul(a, b)
  y = tf.matmul(c, d)
  loss = x + y
Args
compile_ops
Whether to enable or disable compilation in the scope. Either a Python bool,
or a callable that accepts the parameter node_def and returns a Python bool
(a sketch follows this Args section).
separate_compiled_gradients
If true, put each gradient subgraph into a separate compilation scope. This
gives fine-grained control over which portions of the graph will be compiled
as a single unit. Compiling gradients separately may yield better performance
for some graphs. The scope is named based on the scope of the forward
computation as well as the name of the gradients. As a result, the gradients
will be compiled in a scope that is separate from both the forward
computation and other gradients.
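To illustrate the callable form of compile_ops, the sketch below uses a predicate that inspects node_def.op (the node_def argument is the op's NodeDef proto); the helper name compile_matmuls_only is hypothetical:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # jit_scope is not supported eagerly

def compile_matmuls_only(node_def):  # hypothetical predicate name
  # node_def is a NodeDef proto; node_def.op is the registered op type.
  return node_def.op in ('MatMul', 'BatchMatMulV2')

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

with tf.xla.experimental.jit_scope(compile_ops=compile_matmuls_only):
  e = tf.matmul(a, b) + 1.0  # the MatMul may be compiled, the Add is not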
Raises
RuntimeError
if called when eager execution is enabled.
Yields
The current scope, enabling or disabling compilation.
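Since the scope raises RuntimeError when eager execution is enabled, one option in TF 2.x, assuming the check applies only to eagerly executed calls, is to apply the scope while tracing a tf.function, where graph construction is not eager. A hedged sketch of that pattern:

import tensorflow as tf  # assumes a fresh process with eager execution enabled

@tf.function
def compiled_matmul(a, b):
  # Applied during tracing (graph construction), so no RuntimeError is raised;
  # compilation of the matmul remains a best-effort hint.
  with tf.xla.experimental.jit_scope():
    return tf.matmul(a, b)

print(compiled_matmul(tf.ones((2, 2)), tf.ones((2, 2))))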
[null,null,["Last updated 2024-04-26 UTC."],[],[],null,["# tf.xla.experimental.jit_scope\n\n\u003cbr /\u003e\n\n|-------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/python/compiler/xla/jit.py#L36-L156) |\n\nEnable or disable JIT compilation of operators within the scope.\n\n#### View aliases\n\n\n**Compat aliases for migration**\n\nSee\n[Migration guide](https://www.tensorflow.org/guide/migrate) for\nmore details.\n\n[`tf.compat.v1.xla.experimental.jit_scope`](https://www.tensorflow.org/api_docs/python/tf/xla/experimental/jit_scope)\n\n\u003cbr /\u003e\n\n @contextlib.contextmanager\n tf.xla.experimental.jit_scope(\n compile_ops=True, separate_compiled_gradients=False\n )\n\n| **Note:** This is an experimental feature.\n\nThe compilation is a hint and only supported on a best-effort basis.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Example usage ------------- ||\n|---|---|\n| \u003cbr /\u003e with tf.xla.experimental.jit_scope(): c = tf.matmul(a, b) # compiled with tf.xla.experimental.jit_scope(compile_ops=False): d = tf.matmul(a, c) # not compiled with tf.xla.experimental.jit_scope( compile_ops=lambda node_def: 'matmul' in node_def.op.lower()): e = tf.matmul(a, b) + d # matmul is compiled, the addition is not. \u003cbr /\u003e ||\n\n\u003cbr /\u003e\n\nExample of `separate_compiled_gradients`: \n\n # In the example below, the computations for f, g and h will all be compiled\n # in separate scopes.\n with tf.xla.experimental.jit_scope(\n separate_compiled_gradients=True):\n f = tf.matmul(a, b)\n g = tf.gradients([f], [a, b], name='mygrads1')\n h = tf.gradients([f], [a, b], name='mygrads2')\n\nOps that are not in the scope may be clustered and compiled with ops in\nthe scope with `compile_ops=True`, while the ops in the scope with\n`compile_ops=False` will never be compiled.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| For example ----------- ||\n|---|---|\n| \u003cbr /\u003e # In the example below, x and loss may be clustered and compiled together, # while y will not be compiled. with tf.xla.experimental.jit_scope(): x = tf.matmul(a, b) with tf.xla.experimental.jit_scope(compile_ops=False): y = tf.matmul(c, d) loss = x + y \u003cbr /\u003e ||\n\n\u003cbr /\u003e\n\nIf you want to only compile the ops in the scope with `compile_ops=True`,\nconsider adding an outer `jit_scope(compile_ops=False)`: \n\n # In the example below, only x will be compiled.\n with tf.xla.experimental.jit_scope(compile_ops=False):\n with tf.xla.experimental.jit_scope():\n x = tf.matmul(a, b)\n y = tf.matmul(c, d)\n loss = x + y\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|-------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `compile_ops` | Whether to enable or disable compilation in the scope. Either a Python bool, or a callable that accepts the parameter `node_def` and returns a python bool. 
|\n| `separate_compiled_gradients` | If true put each gradient subgraph into a separate compilation scope. This gives fine-grained control over which portions of the graph will be compiled as a single unit. Compiling gradients separately may yield better performance for some graphs. The scope is named based on the scope of the forward computation as well as the name of the gradients. As a result, the gradients will be compiled in a scope that is separate from both the forward computation, and from other gradients. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Raises ------ ||\n|----------------|--------------------------------------------|\n| `RuntimeError` | if called when eager execution is enabled. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Yields ------ ||\n|---|---|\n| The current scope, enabling or disabling compilation. ||\n\n\u003cbr /\u003e"]]