TensorFlow graph optimization with Grappler



TensorFlow uses both graph and eager execution to execute computations. A tf.Graph contains a set of tf.Operation objects (ops), which represent units of computation, and tf.Tensor objects, which represent the units of data that flow between ops.

Grappler is the default graph optimization system in the TensorFlow runtime. Grappler applies optimizations in graph mode (within tf.function) to improve the performance of your TensorFlow computations through graph simplifications and other high-level optimizations such as inlining function bodies to enable inter-procedural optimizations. Optimizing the tf.Graph also reduces the device peak memory usage and improves hardware utilization by optimizing the mapping of graph nodes to compute resources.

Use tf.config.optimizer.set_experimental_options() for finer control over your tf.Graph optimizations.
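For example, the optimizer configuration can be read back and updated as a dictionary of boolean flags (a minimal sketch; the exact set of available keys depends on your TensorFlow version):

```python
import tensorflow as tf

# Read the current Grappler configuration (an empty dict until options are set).
print(tf.config.optimizer.get_experimental_options())

# Toggle individual optimizers by name.
tf.config.optimizer.set_experimental_options({'constant_folding': True,
                                              'debug_stripper': False})
print(tf.config.optimizer.get_experimental_options())
```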

Available graph optimizers

Grappler performs graph optimizations through a top-level driver called the MetaOptimizer. The following graph optimizers are available with TensorFlow:

  • Constant folding optimizer - Statically infers the value of tensors when possible by folding constant nodes in the graph and materializes the result using constants.
  • Arithmetic optimizer - Simplifies arithmetic operations by eliminating common subexpressions and simplifying arithmetic statements.
  • Layout optimizer - Optimizes tensor layouts to execute data format dependent operations such as convolutions more efficiently.
  • Remapper optimizer - Remaps subgraphs onto more efficient implementations by replacing commonly occurring subgraphs with optimized fused monolithic kernels.
  • Memory optimizer - Analyzes the graph to inspect the peak memory usage for each operation and inserts CPU-GPU memory copy operations for swapping GPU memory to CPU to reduce the peak memory usage.
  • Dependency optimizer - Removes or rearranges control dependencies to shorten the critical path for a model step or enables other optimizations. Also removes nodes that are effectively no-ops such as Identity.
  • Pruning optimizer - Prunes nodes that have no effect on the output from the graph. It is usually run first to reduce the size of the graph and speed up processing in other Grappler passes.
  • Function optimizer - Optimizes the function library of a TensorFlow program and inlines function bodies to enable other inter-procedural optimizations.
  • Shape optimizer - Optimizes subgraphs that operate on shape and shape related information.
  • Autoparallel optimizer - Automatically parallelizes graphs by splitting along the batch dimension. This optimizer is turned OFF by default.
  • Loop optimizer - Optimizes the graph control flow by hoisting loop-invariant subgraphs out of loops and by removing redundant stack operations in loops. Also optimizes loops with statically known trip counts and removes statically known dead branches in conditionals.
  • Scoped allocator optimizer - Introduces scoped allocators to reduce data movement and to consolidate some operations.
  • Pin to host optimizer - Swaps small operations onto the CPU. This optimizer is turned OFF by default.
  • Auto mixed precision optimizer - Converts data types to float16 where applicable to improve performance. Currently applies only to GPUs.
  • Debug stripper - Strips nodes related to debugging operations such as tf.debugging.Assert, tf.debugging.check_numerics, and tf.print from the graph. This optimizer is turned OFF by default.
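Most of the optimizers above map to boolean keys accepted by tf.config.optimizer.set_experimental_options. The key names below are the ones used by recent TensorFlow releases; check your version's API reference if a key appears to have no effect:

```python
import tensorflow as tf

# Turn selected Grappler passes on or off by key.
tf.config.optimizer.set_experimental_options({
    'arithmetic_optimization': True,    # Arithmetic optimizer
    'loop_optimization': True,          # Loop optimizer
    'auto_mixed_precision': False,      # Auto mixed precision optimizer
    'pin_to_host_optimization': False,  # Pin to host optimizer
})
print(tf.config.optimizer.get_experimental_options())
```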


import numpy as np
import timeit
import traceback
import contextlib

import tensorflow as tf

Create a context manager to easily toggle optimizer states.

@contextlib.contextmanager
def options(options):
  old_opts = tf.config.optimizer.get_experimental_options()
  tf.config.optimizer.set_experimental_options(options)
  try:
    yield
  finally:
    tf.config.optimizer.set_experimental_options(old_opts)

Compare execution performance with and without Grappler

TensorFlow 2 executes eagerly by default. Use tf.function to switch the default execution to graph mode. Grappler runs automatically in the background to apply the graph optimizations above and improve execution performance.
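As a small illustration of switching to graph mode (a sketch; the function and values are arbitrary):

```python
import tensorflow as tf

@tf.function  # Compiles the Python function into an optimizable tf.Graph.
def scaled_sum(x, y):
  return tf.reduce_sum(x * 2.0 + y)

# The first call traces the function; Grappler optimizes the resulting graph
# before it runs.
result = scaled_sum(tf.constant([1.0, 2.0]), tf.constant([3.0, 4.0]))
print(float(result))  # → 13.0
```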

Constant folding optimizer

As a preliminary example, consider a function which performs operations on constants and returns an output.

def test_function_1():
  @tf.function
  def simple_function(input_arg):
    a = tf.constant(np.random.randn(2000,2000), dtype = tf.float32)
    c = a
    for n in range(50):
      c = c@a
    return tf.reduce_mean(c+input_arg)

  return simple_function

Turn off the constant folding optimizer and execute the function:

with options({'constant_folding': False}):
  print(tf.config.optimizer.get_experimental_options())
  simple_function = test_function_1()
  # Trace once
  x = tf.constant(2.2)
  simple_function(x)
  print("Vanilla execution:", timeit.timeit(lambda: simple_function(x), number = 1), "s")
{'constant_folding': False, 'disable_model_pruning': False, 'disable_meta_optimizer': False}
Vanilla execution: 0.0018392090000816097 s

Enable the constant folding optimizer and execute the function again to observe a speed-up in function execution.

with options({'constant_folding': True}):
  print(tf.config.optimizer.get_experimental_options())
  simple_function = test_function_1()
  # Trace once
  x = tf.constant(2.2)
  simple_function(x)
  print("Constant folded execution:", timeit.timeit(lambda: simple_function(x), number = 1), "s")
{'constant_folding': True, 'disable_model_pruning': False, 'disable_meta_optimizer': False}
Constant folded execution: 0.0006749789999958011 s

Debug stripper optimizer

Consider a simple function that checks the numeric value of its input argument and returns it.

def test_function_2():
  @tf.function
  def simple_func(input_arg):
    output = input_arg
    tf.debugging.check_numerics(output, "Bad!")
    return output
  return simple_func

First, execute the function with the debug stripper optimizer turned off.

test_func = test_function_2()
p1 = tf.constant(float('inf'))
try:
  test_func(p1)
except tf.errors.InvalidArgumentError as e:
  traceback.print_exc(limit=2)
2021-09-22 20:34:55.871238: E tensorflow/core/kernels/check_numerics_op.cc:292] abnormal_detected_host @0x7f4878e00100 = {0, 1} Bad!
Traceback (most recent call last):
  File "/tmp/ipykernel_22954/3616845043.py", line 4, in <module>
  File "/tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 885, in __call__
    result = self._call(*args, **kwds)
tensorflow.python.framework.errors_impl.InvalidArgumentError:  Bad! : Tensor had Inf values
     [[node CheckNumerics (defined at tmp/ipykernel_22954/2241890286.py:5) ]] [Op:__inference_simple_func_131]

Errors may have originated from an input operation.
Input Source operations connected to node CheckNumerics:
 input_arg (defined at tmp/ipykernel_22954/3616845043.py:4)

Function call stack:

tf.debugging.check_numerics raises an invalid argument error because of the Inf argument to test_func.

Enable the debug stripper optimizer and execute the function again.

with options({'debug_stripper': True}):
  test_func2 = test_function_2()
  p1 = tf.constant(float('inf'))
  try:
    test_func2(p1)
  except tf.errors.InvalidArgumentError as e:
    traceback.print_exc(limit=2)

The debug stripper optimizer strips the tf.debugging.check_numerics node from the graph and executes the function without raising any errors.


Summary

The TensorFlow runtime uses Grappler to optimize graphs automatically before execution. Use tf.config.optimizer.set_experimental_options to enable or disable the various graph optimizers.

For more information on Grappler, see TensorFlow Graph Optimizations.