Get memory info for the chosen device, as a dict.
```python
tf.config.experimental.get_memory_info(
    device
)
```
This function returns a dict containing information about the device's memory usage. For example:
```python
if tf.config.list_physical_devices('GPU'):
  # Returns a dict in the form {'current': <current mem usage>,
  #                             'peak': <peak mem usage>}
  tf.config.experimental.get_memory_info('GPU:0')
```
Currently returns the following keys:
- `'current'`: The current memory used by the device, in bytes.
- `'peak'`: The peak memory used by the device across the run of the program, in bytes.
More keys may be added in the future, including device-specific keys.
Currently, this function raises an exception for CPU devices.
For GPUs, TensorFlow will allocate all the memory by default, unless changed
with tf.config.experimental.set_memory_growth. The dict specifies only the
current and peak memory that TensorFlow is actually using, not the memory that
TensorFlow has allocated on the GPU.
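A minimal sketch of querying memory usage, guarding for the case where no GPU is available (since the function currently raises for CPU devices):

```python
import tensorflow as tf

# Query memory usage for the first GPU, if one is present.
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    info = tf.config.experimental.get_memory_info('GPU:0')
    # 'current' is the memory in use right now; 'peak' is the maximum
    # observed over the run of the program, both in bytes.
    print(f"current: {info['current']} bytes, peak: {info['peak']} bytes")
else:
    print("No GPU available; get_memory_info is not supported for CPU devices.")
```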
| Args | |
|---|---|
| `device` | Device string to get the memory information for, e.g. `"GPU:0"`. See https://www.tensorflow.org/api_docs/python/tf/device for how to specify device strings. |
| Returns |
|---|
| A dict with keys `'current'` and `'peak'`, specifying the current and peak memory usage respectively. |
| Raises | |
|---|---|
| `ValueError` | Non-existent or CPU device specified. |
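Since passing a CPU or non-existent device raises `ValueError`, a caller that cannot guarantee a GPU is present may want to catch it; a minimal sketch (the `None` fallback is a hypothetical choice, not part of the API):

```python
import tensorflow as tf

def safe_memory_info(device):
    """Return the memory-info dict for `device`, or None if unsupported."""
    try:
        return tf.config.experimental.get_memory_info(device)
    except ValueError as e:
        # Raised for CPU or non-existent devices.
        print(f"Memory info unavailable for {device}: {e}")
        return None

info = safe_memory_info('GPU:0')
```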