tf.config.optimizer.set_experimental_options
View source on GitHub: https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/python/framework/config.py#L214-L254

Set experimental optimizer options.

Compat aliases for migration: tf.compat.v1.config.optimizer.set_experimental_options
    tf.config.optimizer.set_experimental_options(
        options
    )
Used in the notebooks

Used in the guide:
- TensorFlow graph optimization with Grappler (https://www.tensorflow.org/guide/graph_optimization)
Note that optimizations are only applied in graph mode, i.e. within tf.function.
In addition, as these are experimental options, the list is subject to change.
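For example, a minimal sketch of setting options before tracing a function. The specific keys and values here are illustrative choices, not required ones; most keys take boolean values.

    import tensorflow as tf

    # Illustrative choice of options; unspecified keys keep their defaults.
    tf.config.optimizer.set_experimental_options({
        "constant_folding": True,
        "debug_stripper": True,
    })

    # The options only affect graphs, i.e. code traced by tf.function;
    # plain eager execution is unaffected.
    @tf.function
    def double_plus_one(x):
        return x * 2.0 + 1.0

    print(double_plus_one(tf.constant(3.0)))  # runs the optimized graph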
Args

options: Dictionary of experimental optimizer options to configure.
Valid keys:
- layout_optimizer: Optimize tensor layouts, e.g. by trying to use the
  NCHW layout on GPU, which is faster.
- constant_folding: Fold constants. Statically infer the value of tensors
  when possible, and materialize the result using constants.
- shape_optimization: Simplify computations made on shapes.
- remapping: Remap subgraphs onto more efficient implementations.
- arithmetic_optimization: Simplify arithmetic ops with common
sub-expression elimination and arithmetic simplification.
- dependency_optimization: Control dependency optimizations. Remove
  redundant control dependencies, which may enable other optimizations.
  This optimizer is also essential for pruning Identity and NoOp nodes.
- loop_optimization: Loop optimizations.
- function_optimization: Function optimizations and inlining.
- debug_stripper: Strips debug-related nodes from the graph.
- disable_model_pruning: Disable removal of unnecessary ops from the graph.
- scoped_allocator_optimization: Try to allocate some independent Op
outputs contiguously in order to merge or eliminate downstream Ops.
- pin_to_host_optimization: Force small ops onto the CPU.
- implementation_selector: Enable the swap of kernel implementations based
on the device placement.
- auto_mixed_precision: Change certain float32 ops to float16 on Volta
GPUs and above. Without the use of loss scaling, this can cause
numerical underflow (see
keras.mixed_precision.experimental.LossScaleOptimizer).
- disable_meta_optimizer: Disable the entire meta optimizer.
- min_graph_nodes: The minimum number of nodes in a graph for the
  optimizer to run on. For smaller graphs, optimization is skipped.
- auto_parallel: Automatically parallelizes graphs by splitting along
  the batch dimension.
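Because set_experimental_options replaces process-wide state, the Grappler guide linked above scopes option changes with a small context manager that restores the previous settings on exit. A sketch of that pattern; the helper name options is our own:

    import contextlib
    import tensorflow as tf

    @contextlib.contextmanager
    def options(new_options):
        # Save the currently set options, apply the new ones, and
        # restore the saved options when the block exits.
        saved = tf.config.optimizer.get_experimental_options()
        tf.config.optimizer.set_experimental_options(new_options)
        try:
            yield
        finally:
            tf.config.optimizer.set_experimental_options(saved)

    # Trace and run a function with the whole meta optimizer disabled.
    with options({"disable_meta_optimizer": True}):
        @tf.function
        def reduce_all(x):
            return tf.reduce_sum(x)

        reduce_all(tf.ones([8, 8]))

tf.config.optimizer.get_experimental_options() returns the options that have been explicitly set, which is what makes the save-and-restore pattern above possible.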