GPUOptions.Builder

public static final class GPUOptions.Builder

Protobuf type tensorflow.GPUOptions
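
A minimal usage sketch of the builder pattern. The package name
org.tensorflow.framework is assumed here (the usual home of the generated
TensorFlow protobuf classes); adjust the import to match your bindings.

  import org.tensorflow.framework.GPUOptions;

  public class GpuOptionsDemo {
    public static void main(String[] args) {
      // Configure a builder, then freeze it into an immutable message.
      GPUOptions options = GPUOptions.newBuilder()
          .setAllowGrowth(true)        // grow the GPU allocation on demand
          .setVisibleDeviceList("0")   // expose only visible GPU 0
          .build();
      System.out.println(options);    // text-format dump of the message
    }
  }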

Public Methods

GPUOptions.Builder
addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
GPUOptions
build()
GPUOptions
buildPartial()
GPUOptions.Builder
clear()
GPUOptions.Builder
clearAllocatorType()
 The type of GPU allocation strategy to use.
GPUOptions.Builder
clearAllowGrowth()
 If true, the allocator does not pre-allocate the entire specified
 GPU memory region, instead starting small and growing as needed.
GPUOptions.Builder
clearDeferredDeletionBytes()
 Delay deletion of up to this many bytes to reduce the number of
 interactions with GPU driver code.
GPUOptions.Builder
clearExperimental()
 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
GPUOptions.Builder
clearField(com.google.protobuf.Descriptors.FieldDescriptor field)
GPUOptions.Builder
clearForceGpuCompatible()
 Force all tensors to be gpu_compatible.
GPUOptions.Builder
clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof)
GPUOptions.Builder
clearPerProcessGpuMemoryFraction()
 Fraction of the available GPU memory to allocate for each process.
GPUOptions.Builder
clearPollingActiveDelayUsecs()
 In the event polling loop, sleep this many microseconds between
 PollEvents calls when the queue is not empty.
GPUOptions.Builder
clearPollingInactiveDelayMsecs()
 This field is deprecated and ignored.
GPUOptions.Builder
clearVisibleDeviceList()
 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.
GPUOptions.Builder
clone()
String
getAllocatorType()
 The type of GPU allocation strategy to use.
com.google.protobuf.ByteString
getAllocatorTypeBytes()
 The type of GPU allocation strategy to use.
boolean
getAllowGrowth()
 If true, the allocator does not pre-allocate the entire specified
 GPU memory region, instead starting small and growing as needed.
GPUOptions
getDefaultInstanceForType()
long
getDeferredDeletionBytes()
 Delay deletion of up to this many bytes to reduce the number of
 interactions with GPU driver code.
final static com.google.protobuf.Descriptors.Descriptor
getDescriptor()
com.google.protobuf.Descriptors.Descriptor
getDescriptorForType()
GPUOptions.Experimental
getExperimental()
 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
GPUOptions.Experimental.Builder
getExperimentalBuilder()
 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
GPUOptions.ExperimentalOrBuilder
getExperimentalOrBuilder()
 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
boolean
getForceGpuCompatible()
 Force all tensors to be gpu_compatible.
double
getPerProcessGpuMemoryFraction()
 Fraction of the available GPU memory to allocate for each process.
int
getPollingActiveDelayUsecs()
 In the event polling loop, sleep this many microseconds between
 PollEvents calls when the queue is not empty.
int
getPollingInactiveDelayMsecs()
 This field is deprecated and ignored.
String
getVisibleDeviceList()
 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.
com.google.protobuf.ByteString
getVisibleDeviceListBytes()
 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.
boolean
hasExperimental()
 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
final boolean
isInitialized()
GPUOptions.Builder
mergeExperimental(GPUOptions.Experimental value)
 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
GPUOptions.Builder
mergeFrom(com.google.protobuf.Message other)
GPUOptions.Builder
mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
final GPUOptions.Builder
mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
GPUOptions.Builder
setAllocatorType(String value)
 The type of GPU allocation strategy to use.
GPUOptions.Builder
setAllocatorTypeBytes(com.google.protobuf.ByteString value)
 The type of GPU allocation strategy to use.
GPUOptions.Builder
setAllowGrowth(boolean value)
 If true, the allocator does not pre-allocate the entire specified
 GPU memory region, instead starting small and growing as needed.
GPUOptions.Builder
setDeferredDeletionBytes(long value)
 Delay deletion of up to this many bytes to reduce the number of
 interactions with GPU driver code.
GPUOptions.Builder
setExperimental(GPUOptions.Experimental.Builder builderForValue)
 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
GPUOptions.Builder
setExperimental(GPUOptions.Experimental value)
 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
GPUOptions.Builder
setField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
GPUOptions.Builder
setForceGpuCompatible(boolean value)
 Force all tensors to be gpu_compatible.
GPUOptions.Builder
setPerProcessGpuMemoryFraction(double value)
 Fraction of the available GPU memory to allocate for each process.
GPUOptions.Builder
setPollingActiveDelayUsecs(int value)
 In the event polling loop, sleep this many microseconds between
 PollEvents calls when the queue is not empty.
GPUOptions.Builder
setPollingInactiveDelayMsecs(int value)
 This field is deprecated and ignored.
GPUOptions.Builder
setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)
final GPUOptions.Builder
setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
GPUOptions.Builder
setVisibleDeviceList(String value)
 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.
GPUOptions.Builder
setVisibleDeviceListBytes(com.google.protobuf.ByteString value)
 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.

Public Methods

public GPUOptions.Builder addRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value)

public GPUOptions build ()

public GPUOptions buildPartial ()

public GPUOptions.Builder clear ()

public GPUOptions.Builder clearAllocatorType ()

 The type of GPU allocation strategy to use.
 Allowed values:
 "": The empty string (default) uses a system-chosen default
     which may change over time.
 "BFC": A "Best-fit with coalescing" algorithm, simplified from a
        version of dlmalloc.
 
string allocator_type = 2;

public GPUOptions.Builder clearAllowGrowth ()

 If true, the allocator does not pre-allocate the entire specified
 GPU memory region, instead starting small and growing as needed.
 
bool allow_growth = 4;

public GPUOptions.Builder clearDeferredDeletionBytes ()

 Delay deletion of up to this many bytes to reduce the number of
 interactions with GPU driver code.  If 0, the system chooses
 a reasonable default (several MBs).
 
int64 deferred_deletion_bytes = 3;

public GPUOptions.Builder clearExperimental ()

 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
 
.tensorflow.GPUOptions.Experimental experimental = 9;

public GPUOptions.Builder clearField (com.google.protobuf.Descriptors.FieldDescriptor field)

public GPUOptions.Builder clearForceGpuCompatible ()

 Force all tensors to be gpu_compatible. On a GPU-enabled TensorFlow,
 enabling this option forces all CPU tensors to be allocated with CUDA
 pinned memory. Normally, TensorFlow infers which tensors should be
 allocated in pinned memory. But in cases where that inference is
 incomplete, this option can significantly speed up cross-device memory
 copies, as long as the tensors fit in memory.
 Note that this option should not be enabled by default for unknown or
 very large models, since all CUDA pinned memory is unpageable; having
 too much pinned memory might negatively impact overall host system
 performance.
 
bool force_gpu_compatible = 8;

public GPUOptions.Builder clearOneof (com.google.protobuf.Descriptors.OneofDescriptor oneof)

public GPUOptions.Builder clearPerProcessGpuMemoryFraction ()

 Fraction of the available GPU memory to allocate for each process.
 1 means to allocate all of the GPU memory, 0.5 means the process
 allocates up to ~50% of the available GPU memory.
 GPU memory is pre-allocated unless the allow_growth option is enabled.
 If greater than 1.0, uses CUDA unified memory to potentially oversubscribe
 the amount of memory available on the GPU device by using host memory as a
 swap space. Accessing memory not available on the device will be
 significantly slower as that would require memory transfer between the host
 and the device. Options to reduce the memory requirement should be
 considered before enabling this option as this may come with a negative
 performance impact. Oversubscription using unified memory requires
 Pascal-class or newer GPUs, and it is currently only supported on the Linux
 operating system. See
 https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements
 for the detailed requirements.
 
double per_process_gpu_memory_fraction = 1;

public GPUOptions.Builder clearPollingActiveDelayUsecs ()

 In the event polling loop, sleep this many microseconds between
 PollEvents calls when the queue is not empty.  If the value is not
 set, or is set to 0, a non-zero default is used.
 
int32 polling_active_delay_usecs = 6;

public GPUOptions.Builder clearPollingInactiveDelayMsecs ()

 This field is deprecated and ignored.
 
int32 polling_inactive_delay_msecs = 7;

public GPUOptions.Builder clearVisibleDeviceList ()

 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.  For example, if TensorFlow
 can see 8 GPU devices in the process, and one wanted to map
 visible GPU devices 5 and 3 as "/device:GPU:0" and "/device:GPU:1",
 then one would specify this field as "5,3".  This field is similar in
 spirit to the CUDA_VISIBLE_DEVICES environment variable, except
 it applies to the visible GPU devices in the process.
 NOTE:
 1. The GPU driver provides the process with the visible GPUs
    in an order which is not guaranteed to have any correlation to
    the *physical* GPU id in the machine.  This field is used for
    remapping "visible" to "virtual", which means this operates only
    after the process starts.  Users are required to use vendor-specific
    mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the
    physical to visible device mapping prior to invoking TensorFlow.
 2. In the code, the ids in this list are also called "platform GPU id"s,
    and the 'virtual' ids of GPU devices (i.e. the ids in the device
    name "/device:GPU:<id>") are also called "TF GPU id"s. Please
    refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h
    for more information.
 
string visible_device_list = 5;

public GPUOptions.Builder clone ()

public String getAllocatorType ()

 The type of GPU allocation strategy to use.
 Allowed values:
 "": The empty string (default) uses a system-chosen default
     which may change over time.
 "BFC": A "Best-fit with coalescing" algorithm, simplified from a
        version of dlmalloc.
 
string allocator_type = 2;

public com.google.protobuf.ByteString getAllocatorTypeBytes ()

 The type of GPU allocation strategy to use.
 Allowed values:
 "": The empty string (default) uses a system-chosen default
     which may change over time.
 "BFC": A "Best-fit with coalescing" algorithm, simplified from a
        version of dlmalloc.
 
string allocator_type = 2;

public boolean getAllowGrowth ()

 If true, the allocator does not pre-allocate the entire specified
 GPU memory region, instead starting small and growing as needed.
 
bool allow_growth = 4;

public GPUOptions getDefaultInstanceForType ()

public long getDeferredDeletionBytes ()

 Delay deletion of up to this many bytes to reduce the number of
 interactions with GPU driver code.  If 0, the system chooses
 a reasonable default (several MBs).
 
int64 deferred_deletion_bytes = 3;

public static final com.google.protobuf.Descriptors.Descriptor getDescriptor ()

public com.google.protobuf.Descriptors.Descriptor getDescriptorForType ()

public GPUOptions.Experimental getExperimental ()

 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
 
.tensorflow.GPUOptions.Experimental experimental = 9;

public GPUOptions.Experimental.Builder getExperimentalBuilder ()

 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
 
.tensorflow.GPUOptions.Experimental experimental = 9;

public GPUOptions.ExperimentalOrBuilder getExperimentalOrBuilder ()

 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
 
.tensorflow.GPUOptions.Experimental experimental = 9;

public boolean getForceGpuCompatible ()

 Force all tensors to be gpu_compatible. On a GPU-enabled TensorFlow,
 enabling this option forces all CPU tensors to be allocated with CUDA
 pinned memory. Normally, TensorFlow infers which tensors should be
 allocated in pinned memory. But in cases where that inference is
 incomplete, this option can significantly speed up cross-device memory
 copies, as long as the tensors fit in memory.
 Note that this option should not be enabled by default for unknown or
 very large models, since all CUDA pinned memory is unpageable; having
 too much pinned memory might negatively impact overall host system
 performance.
 
bool force_gpu_compatible = 8;

public double getPerProcessGpuMemoryFraction ()

 Fraction of the available GPU memory to allocate for each process.
 1 means to allocate all of the GPU memory, 0.5 means the process
 allocates up to ~50% of the available GPU memory.
 GPU memory is pre-allocated unless the allow_growth option is enabled.
 If greater than 1.0, uses CUDA unified memory to potentially oversubscribe
 the amount of memory available on the GPU device by using host memory as a
 swap space. Accessing memory not available on the device will be
 significantly slower as that would require memory transfer between the host
 and the device. Options to reduce the memory requirement should be
 considered before enabling this option as this may come with a negative
 performance impact. Oversubscription using unified memory requires
 Pascal-class or newer GPUs, and it is currently only supported on the Linux
 operating system. See
 https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements
 for the detailed requirements.
 
double per_process_gpu_memory_fraction = 1;

public int getPollingActiveDelayUsecs ()

 In the event polling loop, sleep this many microseconds between
 PollEvents calls when the queue is not empty.  If the value is not
 set, or is set to 0, a non-zero default is used.
 
int32 polling_active_delay_usecs = 6;

public int getPollingInactiveDelayMsecs ()

 This field is deprecated and ignored.
 
int32 polling_inactive_delay_msecs = 7;

public String getVisibleDeviceList ()

 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.  For example, if TensorFlow
 can see 8 GPU devices in the process, and one wanted to map
 visible GPU devices 5 and 3 as "/device:GPU:0" and "/device:GPU:1",
 then one would specify this field as "5,3".  This field is similar in
 spirit to the CUDA_VISIBLE_DEVICES environment variable, except
 it applies to the visible GPU devices in the process.
 NOTE:
 1. The GPU driver provides the process with the visible GPUs
    in an order which is not guaranteed to have any correlation to
    the *physical* GPU id in the machine.  This field is used for
    remapping "visible" to "virtual", which means this operates only
    after the process starts.  Users are required to use vendor-specific
    mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the
    physical to visible device mapping prior to invoking TensorFlow.
 2. In the code, the ids in this list are also called "platform GPU id"s,
    and the 'virtual' ids of GPU devices (i.e. the ids in the device
    name "/device:GPU:<id>") are also called "TF GPU id"s. Please
    refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h
    for more information.
 
string visible_device_list = 5;

public com.google.protobuf.ByteString getVisibleDeviceListBytes ()

 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.  For example, if TensorFlow
 can see 8 GPU devices in the process, and one wanted to map
 visible GPU devices 5 and 3 as "/device:GPU:0" and "/device:GPU:1",
 then one would specify this field as "5,3".  This field is similar in
 spirit to the CUDA_VISIBLE_DEVICES environment variable, except
 it applies to the visible GPU devices in the process.
 NOTE:
 1. The GPU driver provides the process with the visible GPUs
    in an order which is not guaranteed to have any correlation to
    the *physical* GPU id in the machine.  This field is used for
    remapping "visible" to "virtual", which means this operates only
    after the process starts.  Users are required to use vendor-specific
    mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the
    physical to visible device mapping prior to invoking TensorFlow.
 2. In the code, the ids in this list are also called "platform GPU id"s,
    and the 'virtual' ids of GPU devices (i.e. the ids in the device
    name "/device:GPU:<id>") are also called "TF GPU id"s. Please
    refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h
    for more information.
 
string visible_device_list = 5;

public boolean hasExperimental ()

 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
 
.tensorflow.GPUOptions.Experimental experimental = 9;

public final boolean isInitialized ()

public GPUOptions.Builder mergeExperimental (GPUOptions.Experimental value)

 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
 
.tensorflow.GPUOptions.Experimental experimental = 9;

public GPUOptions.Builder mergeFrom (com.google.protobuf.Message other)

public GPUOptions.Builder mergeFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)

Throws
IOException
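
A sketch of round-tripping a message through this overload. The calls used
(toByteArray, CodedInputStream.newInstance, ExtensionRegistryLite.getEmptyRegistry)
are standard protobuf-java APIs; the org.tensorflow.framework package is
assumed, as in the sketch at the top.

  import com.google.protobuf.CodedInputStream;
  import com.google.protobuf.ExtensionRegistryLite;
  import java.io.IOException;
  import org.tensorflow.framework.GPUOptions;

  public class MergeFromDemo {
    public static void main(String[] args) throws IOException {
      // Serialize a message, then merge the wire bytes into a fresh builder.
      byte[] wire = GPUOptions.newBuilder().setAllowGrowth(true).build().toByteArray();
      GPUOptions.Builder builder = GPUOptions.newBuilder();
      builder.mergeFrom(CodedInputStream.newInstance(wire),
                        ExtensionRegistryLite.getEmptyRegistry());
      System.out.println(builder.getAllowGrowth());  // prints: true
    }
  }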

public final GPUOptions.Builder mergeUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields)

public GPUOptions.Builder setAllocatorType (String value)

 The type of GPU allocation strategy to use.
 Allowed values:
 "": The empty string (default) uses a system-chosen default
     which may change over time.
 "BFC": A "Best-fit with coalescing" algorithm, simplified from a
        version of dlmalloc.
 
string allocator_type = 2;
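
For example, selecting the BFC strategy explicitly (a sketch; leaving the
field at its empty-string default lets the system choose):

  // Assumes org.tensorflow.framework.GPUOptions, as in the sketch above.
  GPUOptions.Builder b = GPUOptions.newBuilder()
      .setAllocatorType("BFC");  // "Best-fit with coalescing"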

public GPUOptions.Builder setAllocatorTypeBytes (com.google.protobuf.ByteString value)

 The type of GPU allocation strategy to use.
 Allowed values:
 "": The empty string (default) uses a system-chosen default
     which may change over time.
 "BFC": A "Best-fit with coalescing" algorithm, simplified from a
        version of dlmalloc.
 
string allocator_type = 2;

public GPUOptions.Builder setAllowGrowth (boolean value)

 If true, the allocator does not pre-allocate the entire specified
 GPU memory region, instead starting small and growing as needed.
 
bool allow_growth = 4;
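
A sketch of enabling on-demand growth, which avoids the up-front
pre-allocation described above:

  // Assumes org.tensorflow.framework.GPUOptions, as in the sketch above.
  GPUOptions.Builder b = GPUOptions.newBuilder()
      .setAllowGrowth(true);  // start small and grow as needed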

public GPUOptions.Builder setDeferredDeletionBytes (long value)

 Delay deletion of up to this many bytes to reduce the number of
 interactions with GPU driver code.  If 0, the system chooses
 a reasonable default (several MBs).
 
int64 deferred_deletion_bytes = 3;

public GPUOptions.Builder setExperimental (GPUOptions.Experimental.Builder builderForValue)

 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
 
.tensorflow.GPUOptions.Experimental experimental = 9;

public GPUOptions.Builder setExperimental (GPUOptions.Experimental value)

 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
 
.tensorflow.GPUOptions.Experimental experimental = 9;
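
A sketch of populating the nested message via the builder overload.
The use_unified_memory field is an assumption here (one of the Experimental
fields defined in config.proto at the time of writing); substitute whichever
experimental field your version actually defines.

  // setExperimental(...) also accepts a nested builder directly.
  GPUOptions.Builder b = GPUOptions.newBuilder()
      .setExperimental(GPUOptions.Experimental.newBuilder()
          .setUseUnifiedMemory(true));  // assumed field; check config.proto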

public GPUOptions.Builder setField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value)

public GPUOptions.Builder setForceGpuCompatible (boolean value)

 Force all tensors to be gpu_compatible. On a GPU-enabled TensorFlow,
 enabling this option forces all CPU tensors to be allocated with CUDA
 pinned memory. Normally, TensorFlow infers which tensors should be
 allocated in pinned memory. But in cases where that inference is
 incomplete, this option can significantly speed up cross-device memory
 copies, as long as the tensors fit in memory.
 Note that this option should not be enabled by default for unknown or
 very large models, since all CUDA pinned memory is unpageable; having
 too much pinned memory might negatively impact overall host system
 performance.
 
bool force_gpu_compatible = 8;
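
A sketch of opting in for a model where the pinned-memory inference is known
to be incomplete (keep the host-memory caveat above in mind):

  // Assumes org.tensorflow.framework.GPUOptions, as in the sketch above.
  GPUOptions.Builder b = GPUOptions.newBuilder()
      .setForceGpuCompatible(true);  // all CPU tensors in CUDA pinned memory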

public GPUOptions.Builder setPerProcessGpuMemoryFraction (double value)

 Fraction of the available GPU memory to allocate for each process.
 1 means to allocate all of the GPU memory, 0.5 means the process
 allocates up to ~50% of the available GPU memory.
 GPU memory is pre-allocated unless the allow_growth option is enabled.
 If greater than 1.0, uses CUDA unified memory to potentially oversubscribe
 the amount of memory available on the GPU device by using host memory as a
 swap space. Accessing memory not available on the device will be
 significantly slower as that would require memory transfer between the host
 and the device. Options to reduce the memory requirement should be
 considered before enabling this option as this may come with a negative
 performance impact. Oversubscription using unified memory requires
 Pascal-class or newer GPUs, and it is currently only supported on the Linux
 operating system. See
 https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements
 for the detailed requirements.
 
double per_process_gpu_memory_fraction = 1;
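
For instance, capping each process at roughly half of the device memory
(a sketch):

  // Assumes org.tensorflow.framework.GPUOptions, as in the sketch above.
  GPUOptions.Builder b = GPUOptions.newBuilder()
      .setPerProcessGpuMemoryFraction(0.5);  // pre-allocated unless allow_growth is set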

public GPUOptions.Builder setPollingActiveDelayUsecs (int value)

 In the event polling loop, sleep this many microseconds between
 PollEvents calls when the queue is not empty.  If the value is not
 set, or is set to 0, a non-zero default is used.
 
int32 polling_active_delay_usecs = 6;

public GPUOptions.Builder setPollingInactiveDelayMsecs (int value)

 This field is deprecated and ignored.
 
int32 polling_inactive_delay_msecs = 7;

public GPUOptions.Builder setRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)

public final GPUOptions.Builder setUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields)

public GPUOptions.Builder setVisibleDeviceList (String value)

 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.  For example, if TensorFlow
 can see 8 GPU devices in the process, and one wanted to map
 visible GPU devices 5 and 3 as "/device:GPU:0" and "/device:GPU:1",
 then one would specify this field as "5,3".  This field is similar in
 spirit to the CUDA_VISIBLE_DEVICES environment variable, except
 it applies to the visible GPU devices in the process.
 NOTE:
 1. The GPU driver provides the process with the visible GPUs
    in an order which is not guaranteed to have any correlation to
    the *physical* GPU id in the machine.  This field is used for
    remapping "visible" to "virtual", which means this operates only
    after the process starts.  Users are required to use vendor-specific
    mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the
    physical to visible device mapping prior to invoking TensorFlow.
 2. In the code, the ids in this list are also called "platform GPU id"s,
    and the 'virtual' ids of GPU devices (i.e. the ids in the device
    name "/device:GPU:<id>") are also called "TF GPU id"s. Please
    refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h
    for more information.
 
string visible_device_list = 5;
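
A sketch of the "5,3" mapping described above, which makes visible GPU 5
appear as "/device:GPU:0" and visible GPU 3 as "/device:GPU:1":

  // Assumes org.tensorflow.framework.GPUOptions, as in the sketch above.
  GPUOptions.Builder b = GPUOptions.newBuilder()
      .setVisibleDeviceList("5,3");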

public GPUOptions.Builder setVisibleDeviceListBytes (com.google.protobuf.ByteString value)

 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.  For example, if TensorFlow
 can see 8 GPU devices in the process, and one wanted to map
 visible GPU devices 5 and 3 as "/device:GPU:0" and "/device:GPU:1",
 then one would specify this field as "5,3".  This field is similar in
 spirit to the CUDA_VISIBLE_DEVICES environment variable, except
 it applies to the visible GPU devices in the process.
 NOTE:
 1. The GPU driver provides the process with the visible GPUs
    in an order which is not guaranteed to have any correlation to
    the *physical* GPU id in the machine.  This field is used for
    remapping "visible" to "virtual", which means this operates only
    after the process starts.  Users are required to use vendor-specific
    mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the
    physical to visible device mapping prior to invoking TensorFlow.
 2. In the code, the ids in this list are also called "platform GPU id"s,
    and the 'virtual' ids of GPU devices (i.e. the ids in the device
    name "/device:GPU:<id>") are also called "TF GPU id"s. Please
    refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h
    for more information.
 
string visible_device_list = 5;