tf.config.experimental.get_memory_info
Get memory info for the chosen device, as a dict.
tf.config.experimental.get_memory_info(
    device
)
This function returns a dict containing information about the device's memory
usage. For example:
if tf.config.list_physical_devices('GPU'):
  # Returns a dict in the form {'current': <current mem usage>,
  # 'peak': <peak mem usage>}
  tf.config.experimental.get_memory_info('GPU:0')
Currently returns the following keys:

- 'current': The current memory used by the device, in bytes.
- 'peak': The peak memory used by the device across the run of the program, in
  bytes. Can be reset with tf.config.experimental.reset_memory_stats.

More keys may be added in the future, including device-specific keys.
Currently only supports GPU and TPU. If called on a CPU device, an exception
will be raised.
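The returned values are raw byte counts, so callers typically convert them for logging. A minimal sketch of such a helper; the sample dict below stands in for the result of a real call like tf.config.experimental.get_memory_info('GPU:0'), and the helper name format_memory_info is illustrative, not part of the API:

```python
# Hedged sketch: format the {'current': ..., 'peak': ...} dict (byte counts)
# returned by tf.config.experimental.get_memory_info as human-readable MiB.

def format_memory_info(info):
    """Render each byte count in the dict as a MiB string."""
    mib = 1024 ** 2
    return {key: f"{value / mib:.1f} MiB" for key, value in info.items()}

# Made-up values; on a real GPU host you would instead call:
#   info = tf.config.experimental.get_memory_info('GPU:0')
info = {'current': 268435456, 'peak': 536870912}
print(format_memory_info(info))  # {'current': '256.0 MiB', 'peak': '512.0 MiB'}
```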
For GPUs, TensorFlow will allocate all the memory by default, unless changed
with tf.config.experimental.set_memory_growth. The dict specifies only the
current and peak memory that TensorFlow is actually using, not the memory that
TensorFlow has allocated on the GPU.
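To make the reported usage track actual allocations more closely, memory growth can be enabled per physical GPU before any GPU is initialized. A sketch of that setup, guarded so it also runs on hosts without TensorFlow installed:

```python
# Hedged sketch: enable memory growth so TensorFlow allocates GPU memory on
# demand rather than reserving it all up front. set_memory_growth raises
# RuntimeError if called after a GPU has already been initialized.
try:
    import tensorflow as tf
    gpus = tf.config.list_physical_devices('GPU')
except ImportError:
    # Allows the sketch to run where TensorFlow is not installed.
    tf, gpus = None, []

for gpu in gpus:
    try:
        tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as e:
        print(e)  # GPUs were already initialized when this ran

print(f"memory growth enabled on {len(gpus)} GPU(s)")
```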
Args:
  device: Device string to get the memory information for, e.g. "GPU:0",
    "TPU:0". See https://www.tensorflow.org/api_docs/python/tf/device for
    specifying device strings.
Returns:
  A dict with keys 'current' and 'peak', specifying the current and peak
  memory usage respectively.
Raises:
  ValueError: No device found with the given name, e.g. "nonexistent".
  ValueError: Invalid device name, e.g. "GPU", "CPU:GPU", "CPU:".
  ValueError: Multiple devices matched the device name.
  ValueError: Memory statistics are not tracked for the device, e.g. "CPU:0".
Last updated 2024-04-26 UTC.