GPUOptions.Builder

public static final class GPUOptions.Builder

Protobuf type tensorflow.GPUOptions
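Like all protobuf message builders, this class follows the standard setX/clearX/build() pattern. The following is a schematic, dependency-free sketch of that pattern (the real class is generated by protoc; `GpuOptionsSketch` is a hypothetical stand-in covering three of the fields below, with the proto3 defaults noted in comments):

```java
// Schematic sketch of the builder pattern used by GPUOptions.Builder.
// GpuOptionsSketch is a hypothetical illustration, not the generated class.
public class GpuOptionsSketch {
    public final boolean allowGrowth;
    public final double perProcessGpuMemoryFraction;
    public final String visibleDeviceList;

    private GpuOptionsSketch(Builder b) {
        this.allowGrowth = b.allowGrowth;
        this.perProcessGpuMemoryFraction = b.perProcessGpuMemoryFraction;
        this.visibleDeviceList = b.visibleDeviceList;
    }

    public static class Builder {
        private boolean allowGrowth = false;               // proto3 default for bool
        private double perProcessGpuMemoryFraction = 0.0;  // proto3 default for double
        private String visibleDeviceList = "";             // proto3 default for string

        // Setters return the builder so calls can be chained; clearX resets
        // the field to its proto3 default, mirroring the generated API.
        public Builder setAllowGrowth(boolean value) { allowGrowth = value; return this; }
        public Builder clearAllowGrowth() { allowGrowth = false; return this; }
        public Builder setPerProcessGpuMemoryFraction(double value) {
            perProcessGpuMemoryFraction = value; return this;
        }
        public Builder setVisibleDeviceList(String value) { visibleDeviceList = value; return this; }
        public GpuOptionsSketch build() { return new GpuOptionsSketch(this); }
    }

    public static void main(String[] args) {
        GpuOptionsSketch opts = new GpuOptionsSketch.Builder()
            .setAllowGrowth(true)
            .setVisibleDeviceList("5,3")
            .build();
        System.out.println(opts.allowGrowth + " " + opts.visibleDeviceList);
    }
}
```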

Public Methods

GPUOptions.Builder
addRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
GPUOptions
build ()
GPUOptions
buildPartial ()
GPUOptions.Builder
clear ()
GPUOptions.Builder
clearAllocatorType ()
 The type of GPU allocation strategy to use.
GPUOptions.Builder
clearAllowGrowth ()
 If true, the allocator does not pre-allocate the entire specified
 GPU memory region, instead starting small and growing as needed.
GPUOptions.Builder
clearDeferredDeletionBytes ()
 Delay deletion of up to this many bytes to reduce the number of
 interactions with gpu driver code.
GPUOptions.Builder
clearExperimental ()
 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
GPUOptions.Builder
clearField (com.google.protobuf.Descriptors.FieldDescriptor field)
GPUOptions.Builder
clearForceGpuCompatible ()
 Force all tensors to be gpu_compatible.
GPUOptions.Builder
clearOneof (com.google.protobuf.Descriptors.OneofDescriptor oneof)
GPUOptions.Builder
clearPerProcessGpuMemoryFraction ()
 Fraction of the available GPU memory to allocate for each process.
GPUOptions.Builder
clearPollingActiveDelayUsecs ()
 In the event polling loop sleep this many microseconds between
 PollEvents calls, when the queue is not empty.
GPUOptions.Builder
clearPollingInactiveDelayMsecs ()
 This field is deprecated and ignored.
GPUOptions.Builder
clearVisibleDeviceList ()
 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.
GPUOptions.Builder
String
getAllocatorType ()
 The type of GPU allocation strategy to use.
com.google.protobuf.ByteString
getAllocatorTypeBytes ()
 The type of GPU allocation strategy to use.
boolean
getAllowGrowth ()
 If true, the allocator does not pre-allocate the entire specified
 GPU memory region, instead starting small and growing as needed.
GPUOptions
getDefaultInstanceForType ()
long
getDeferredDeletionBytes ()
 Delay deletion of up to this many bytes to reduce the number of
 interactions with gpu driver code.
static final com.google.protobuf.Descriptors.Descriptor
getDescriptor ()
com.google.protobuf.Descriptors.Descriptor
getDescriptorForType ()
GPUOptions.Experimental
getExperimental ()
 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
GPUOptions.Experimental.Builder
getExperimentalBuilder ()
 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
GPUOptions.ExperimentalOrBuilder
getExperimentalOrBuilder ()
 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
boolean
getForceGpuCompatible ()
 Force all tensors to be gpu_compatible.
double
getPerProcessGpuMemoryFraction ()
 Fraction of the available GPU memory to allocate for each process.
int
getPollingActiveDelayUsecs ()
 In the event polling loop sleep this many microseconds between
 PollEvents calls, when the queue is not empty.
int
getPollingInactiveDelayMsecs ()
 This field is deprecated and ignored.
String
getVisibleDeviceList ()
 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.
com.google.protobuf.ByteString
getVisibleDeviceListBytes ()
 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.
boolean
hasExperimental ()
 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
final boolean
isInitialized ()
GPUOptions.Builder
mergeExperimental (GPUOptions.Experimental value)
 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
GPUOptions.Builder
mergeFrom (com.google.protobuf.Message other)
GPUOptions.Builder
mergeFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
final GPUOptions.Builder
mergeUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields)
GPUOptions.Builder
setAllocatorType (String value)
 The type of GPU allocation strategy to use.
GPUOptions.Builder
setAllocatorTypeBytes (com.google.protobuf.ByteString value)
 The type of GPU allocation strategy to use.
GPUOptions.Builder
setAllowGrowth (boolean value)
 If true, the allocator does not pre-allocate the entire specified
 GPU memory region, instead starting small and growing as needed.
GPUOptions.Builder
setDeferredDeletionBytes (long value)
 Delay deletion of up to this many bytes to reduce the number of
 interactions with gpu driver code.
GPUOptions.Builder
setExperimental (GPUOptions.Experimental.Builder builderForValue)
 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
GPUOptions.Builder
setExperimental (GPUOptions.Experimental value)
 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
GPUOptions.Builder
setField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
GPUOptions.Builder
setForceGpuCompatible (boolean value)
 Force all tensors to be gpu_compatible.
GPUOptions.Builder
setPerProcessGpuMemoryFraction (double value)
 Fraction of the available GPU memory to allocate for each process.
GPUOptions.Builder
setPollingActiveDelayUsecs (int value)
 In the event polling loop sleep this many microseconds between
 PollEvents calls, when the queue is not empty.
GPUOptions.Builder
setPollingInactiveDelayMsecs (int value)
 This field is deprecated and ignored.
GPUOptions.Builder
setRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)
final GPUOptions.Builder
setUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields)
GPUOptions.Builder
setVisibleDeviceList (String value)
 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.
GPUOptions.Builder
setVisibleDeviceListBytes (com.google.protobuf.ByteString value)
 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.

Inherited Methods

Public Methods

public GPUOptions.Builder addRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value)

public GPUOptions build ()

public GPUOptions buildPartial ()

public GPUOptions.Builder clear ()

public GPUOptions.Builder clearAllocatorType ()

 The type of GPU allocation strategy to use.
 Allowed values:
 "": The empty string (default) uses a system-chosen default
     which may change over time.
 "BFC": A "Best-fit with coalescing" algorithm, simplified from a
        version of dlmalloc.
 
string allocator_type = 2;

public GPUOptions.Builder clearAllowGrowth ()

 If true, the allocator does not pre-allocate the entire specified
 GPU memory region, instead starting small and growing as needed.
 
bool allow_growth = 4;

public GPUOptions.Builder clearDeferredDeletionBytes ()

 Delay deletion of up to this many bytes to reduce the number of
 interactions with gpu driver code.  If 0, the system chooses
 a reasonable default (several MBs).
 
int64 deferred_deletion_bytes = 3;

public GPUOptions.Builder clearExperimental ()

 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
 
.tensorflow.GPUOptions.Experimental experimental = 9;

public GPUOptions.Builder clearField (com.google.protobuf.Descriptors.FieldDescriptor field)

public GPUOptions.Builder clearForceGpuCompatible ()

 Force all tensors to be gpu_compatible. On a GPU-enabled TensorFlow,
 enabling this option forces all CPU tensors to be allocated with Cuda
 pinned memory. Normally, TensorFlow will infer which tensors should be
 allocated as the pinned memory. But in case where the inference is
 incomplete, this option can significantly speed up the cross-device memory
 copy performance as long as it fits the memory.
 Note that this option is not something that should be
 enabled by default for unknown or very large models, since all Cuda pinned
 memory is unpageable, having too much pinned memory might negatively impact
 the overall host system performance.
 
bool force_gpu_compatible = 8;

public GPUOptions.Builder clearOneof (com.google.protobuf.Descriptors.OneofDescriptor oneof)

public GPUOptions.Builder clearPerProcessGpuMemoryFraction ()

 Fraction of the available GPU memory to allocate for each process.
 1 means to allocate all of the GPU memory, 0.5 means the process
 allocates up to ~50% of the available GPU memory.
 GPU memory is pre-allocated unless the allow_growth option is enabled.
 If greater than 1.0, uses CUDA unified memory to potentially oversubscribe
 the amount of memory available on the GPU device by using host memory as a
 swap space. Accessing memory not available on the device will be
 significantly slower as that would require memory transfer between the host
 and the device. Options to reduce the memory requirement should be
 considered before enabling this option as this may come with a negative
 performance impact. Oversubscription using the unified memory requires
 Pascal class or newer GPUs and it is currently only supported on the Linux
 operating system. See
 https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements
 for the detailed requirements.
 
double per_process_gpu_memory_fraction = 1;
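The arithmetic behind this field is simple: the pre-allocation budget is the fraction times the device memory, and values above 1.0 oversubscribe via unified memory. A small illustration (`budgetBytes` is a hypothetical helper, not part of the generated API):

```java
// Illustrates how per_process_gpu_memory_fraction scales the allocation budget.
// budgetBytes is a hypothetical helper for illustration only.
public class MemoryFractionDemo {
    // Pre-allocation budget in bytes for a device with totalBytes of memory.
    // Fractions above 1.0 oversubscribe the device using CUDA unified memory.
    static long budgetBytes(long totalBytes, double fraction) {
        return (long) (totalBytes * fraction);
    }

    public static void main(String[] args) {
        long total = 8L * 1024 * 1024 * 1024;         // an 8 GiB device
        System.out.println(budgetBytes(total, 0.5));  // allocate up to ~50%
        System.out.println(budgetBytes(total, 1.5));  // oversubscribed beyond device memory
    }
}
```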

public GPUOptions.Builder clearPollingActiveDelayUsecs ()

 In the event polling loop sleep this many microseconds between
 PollEvents calls, when the queue is not empty.  If value is not
 set or set to 0, gets set to a non-zero default.
 
int32 polling_active_delay_usecs = 6;

public GPUOptions.Builder clearPollingInactiveDelayMsecs ()

 This field is deprecated and ignored.
 
int32 polling_inactive_delay_msecs = 7;

public GPUOptions.Builder clearVisibleDeviceList ()

 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.  For example, if TensorFlow
 can see 8 GPU devices in the process, and one wanted to map
 visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1",
 then one would specify this field as "5,3".  This field is similar in
 spirit to the CUDA_VISIBLE_DEVICES environment variable, except
 it applies to the visible GPU devices in the process.
 NOTE:
 1. The GPU driver provides the process with the visible GPUs
    in an order which is not guaranteed to have any correlation to
    the *physical* GPU id in the machine.  This field is used for
    remapping "visible" to "virtual", which means this operates only
    after the process starts.  Users are required to use vendor
    specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the
    physical to visible device mapping prior to invoking TensorFlow.
 2. In the code, the ids in this list are also called "platform GPU id"s,
    and the 'virtual' ids of GPU devices (i.e. the ids in the device
    name "/device:GPU:<id>") are also called "TF GPU id"s. Please
    refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h
    for more information.
 
string visible_device_list = 5;
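The "5,3" example above can be made concrete: position in the comma-separated list is the virtual (TF) GPU id, and the value at that position is the visible (platform) GPU id. A small sketch (`visibleIdsInVirtualOrder` is a hypothetical helper, not part of the generated API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper showing how visible_device_list = "5,3" maps virtual TF
// ids to visible platform GPU ids: "/device:GPU:0" -> visible GPU 5,
// "/device:GPU:1" -> visible GPU 3.
public class VisibleDeviceDemo {
    static List<Integer> visibleIdsInVirtualOrder(String visibleDeviceList) {
        List<Integer> ids = new ArrayList<>();
        if (visibleDeviceList.isEmpty()) return ids;   // empty string: no remapping
        for (String part : visibleDeviceList.split(",")) {
            ids.add(Integer.parseInt(part.trim()));
        }
        return ids;
    }

    public static void main(String[] args) {
        List<Integer> ids = visibleIdsInVirtualOrder("5,3");
        for (int virtualId = 0; virtualId < ids.size(); virtualId++) {
            System.out.println("/device:GPU:" + virtualId + " -> visible GPU " + ids.get(virtualId));
        }
    }
}
```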

public GPUOptions.Builder clone ()

public String getAllocatorType ()

 The type of GPU allocation strategy to use.
 Allowed values:
 "": The empty string (default) uses a system-chosen default
     which may change over time.
 "BFC": A "Best-fit with coalescing" algorithm, simplified from a
        version of dlmalloc.
 
string allocator_type = 2;

public com.google.protobuf.ByteString getAllocatorTypeBytes ()

 The type of GPU allocation strategy to use.
 Allowed values:
 "": The empty string (default) uses a system-chosen default
     which may change over time.
 "BFC": A "Best-fit with coalescing" algorithm, simplified from a
        version of dlmalloc.
 
string allocator_type = 2;

public boolean getAllowGrowth ()

 If true, the allocator does not pre-allocate the entire specified
 GPU memory region, instead starting small and growing as needed.
 
bool allow_growth = 4;

public GPUOptions getDefaultInstanceForType ()

public long getDeferredDeletionBytes ()

 Delay deletion of up to this many bytes to reduce the number of
 interactions with gpu driver code.  If 0, the system chooses
 a reasonable default (several MBs).
 
int64 deferred_deletion_bytes = 3;

public static final com.google.protobuf.Descriptors.Descriptor getDescriptor ()

public com.google.protobuf.Descriptors.Descriptor getDescriptorForType ()

public GPUOptions.Experimental getExperimental ()

 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
 
.tensorflow.GPUOptions.Experimental experimental = 9;

public GPUOptions.Experimental.Builder getExperimentalBuilder ()

 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
 
.tensorflow.GPUOptions.Experimental experimental = 9;

public GPUOptions.ExperimentalOrBuilder getExperimentalOrBuilder ()

 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
 
.tensorflow.GPUOptions.Experimental experimental = 9;

public boolean getForceGpuCompatible ()

 Force all tensors to be gpu_compatible. On a GPU-enabled TensorFlow,
 enabling this option forces all CPU tensors to be allocated with Cuda
 pinned memory. Normally, TensorFlow will infer which tensors should be
 allocated as the pinned memory. But in case where the inference is
 incomplete, this option can significantly speed up the cross-device memory
 copy performance as long as it fits the memory.
 Note that this option is not something that should be
 enabled by default for unknown or very large models, since all Cuda pinned
 memory is unpageable, having too much pinned memory might negatively impact
 the overall host system performance.
 
bool force_gpu_compatible = 8;

public double getPerProcessGpuMemoryFraction ()

 Fraction of the available GPU memory to allocate for each process.
 1 means to allocate all of the GPU memory, 0.5 means the process
 allocates up to ~50% of the available GPU memory.
 GPU memory is pre-allocated unless the allow_growth option is enabled.
 If greater than 1.0, uses CUDA unified memory to potentially oversubscribe
 the amount of memory available on the GPU device by using host memory as a
 swap space. Accessing memory not available on the device will be
 significantly slower as that would require memory transfer between the host
 and the device. Options to reduce the memory requirement should be
 considered before enabling this option as this may come with a negative
 performance impact. Oversubscription using the unified memory requires
 Pascal class or newer GPUs and it is currently only supported on the Linux
 operating system. See
 https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements
 for the detailed requirements.
 
double per_process_gpu_memory_fraction = 1;

public int getPollingActiveDelayUsecs ()

 In the event polling loop sleep this many microseconds between
 PollEvents calls, when the queue is not empty.  If value is not
 set or set to 0, gets set to a non-zero default.
 
int32 polling_active_delay_usecs = 6;

public int getPollingInactiveDelayMsecs ()

 This field is deprecated and ignored.
 
int32 polling_inactive_delay_msecs = 7;

public String getVisibleDeviceList ()

 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.  For example, if TensorFlow
 can see 8 GPU devices in the process, and one wanted to map
 visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1",
 then one would specify this field as "5,3".  This field is similar in
 spirit to the CUDA_VISIBLE_DEVICES environment variable, except
 it applies to the visible GPU devices in the process.
 NOTE:
 1. The GPU driver provides the process with the visible GPUs
    in an order which is not guaranteed to have any correlation to
    the *physical* GPU id in the machine.  This field is used for
    remapping "visible" to "virtual", which means this operates only
    after the process starts.  Users are required to use vendor
    specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the
    physical to visible device mapping prior to invoking TensorFlow.
 2. In the code, the ids in this list are also called "platform GPU id"s,
    and the 'virtual' ids of GPU devices (i.e. the ids in the device
    name "/device:GPU:<id>") are also called "TF GPU id"s. Please
    refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h
    for more information.
 
string visible_device_list = 5;

public com.google.protobuf.ByteString getVisibleDeviceListBytes ()

 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.  For example, if TensorFlow
 can see 8 GPU devices in the process, and one wanted to map
 visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1",
 then one would specify this field as "5,3".  This field is similar in
 spirit to the CUDA_VISIBLE_DEVICES environment variable, except
 it applies to the visible GPU devices in the process.
 NOTE:
 1. The GPU driver provides the process with the visible GPUs
    in an order which is not guaranteed to have any correlation to
    the *physical* GPU id in the machine.  This field is used for
    remapping "visible" to "virtual", which means this operates only
    after the process starts.  Users are required to use vendor
    specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the
    physical to visible device mapping prior to invoking TensorFlow.
 2. In the code, the ids in this list are also called "platform GPU id"s,
    and the 'virtual' ids of GPU devices (i.e. the ids in the device
    name "/device:GPU:<id>") are also called "TF GPU id"s. Please
    refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h
    for more information.
 
string visible_device_list = 5;

public boolean hasExperimental ()

 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
 
.tensorflow.GPUOptions.Experimental experimental = 9;

public final boolean isInitialized ()

public GPUOptions.Builder mergeExperimental (GPUOptions.Experimental value)

 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
 
.tensorflow.GPUOptions.Experimental experimental = 9;

public GPUOptions.Builder mergeFrom (com.google.protobuf.Message other)

public GPUOptions.Builder mergeFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)

Throws
IOException

public final GPUOptions.Builder mergeUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields)

public GPUOptions.Builder setAllocatorType (String value)

 The type of GPU allocation strategy to use.
 Allowed values:
 "": The empty string (default) uses a system-chosen default
     which may change over time.
 "BFC": A "Best-fit with coalescing" algorithm, simplified from a
        version of dlmalloc.
 
string allocator_type = 2;

public GPUOptions.Builder setAllocatorTypeBytes (com.google.protobuf.ByteString value)

 The type of GPU allocation strategy to use.
 Allowed values:
 "": The empty string (default) uses a system-chosen default
     which may change over time.
 "BFC": A "Best-fit with coalescing" algorithm, simplified from a
        version of dlmalloc.
 
string allocator_type = 2;

public GPUOptions.Builder setAllowGrowth (boolean value)

 If true, the allocator does not pre-allocate the entire specified
 GPU memory region, instead starting small and growing as needed.
 
bool allow_growth = 4;

public GPUOptions.Builder setDeferredDeletionBytes (long value)

 Delay deletion of up to this many bytes to reduce the number of
 interactions with gpu driver code.  If 0, the system chooses
 a reasonable default (several MBs).
 
int64 deferred_deletion_bytes = 3;

public GPUOptions.Builder setExperimental (GPUOptions.Experimental.Builder builderForValue)

 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
 
.tensorflow.GPUOptions.Experimental experimental = 9;

public GPUOptions.Builder setExperimental (GPUOptions.Experimental value)

 Everything inside experimental is subject to change and is not subject
 to API stability guarantees in
 https://www.tensorflow.org/guide/version_compat.
 
.tensorflow.GPUOptions.Experimental experimental = 9;

public GPUOptions.Builder setField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value)

public GPUOptions.Builder setForceGpuCompatible (boolean value)

 Force all tensors to be gpu_compatible. On a GPU-enabled TensorFlow,
 enabling this option forces all CPU tensors to be allocated with Cuda
 pinned memory. Normally, TensorFlow will infer which tensors should be
 allocated as the pinned memory. But in case where the inference is
 incomplete, this option can significantly speed up the cross-device memory
 copy performance as long as it fits the memory.
 Note that this option is not something that should be
 enabled by default for unknown or very large models, since all Cuda pinned
 memory is unpageable, having too much pinned memory might negatively impact
 the overall host system performance.
 
bool force_gpu_compatible = 8;

public GPUOptions.Builder setPerProcessGpuMemoryFraction (double value)

 Fraction of the available GPU memory to allocate for each process.
 1 means to allocate all of the GPU memory, 0.5 means the process
 allocates up to ~50% of the available GPU memory.
 GPU memory is pre-allocated unless the allow_growth option is enabled.
 If greater than 1.0, uses CUDA unified memory to potentially oversubscribe
 the amount of memory available on the GPU device by using host memory as a
 swap space. Accessing memory not available on the device will be
 significantly slower as that would require memory transfer between the host
 and the device. Options to reduce the memory requirement should be
 considered before enabling this option as this may come with a negative
 performance impact. Oversubscription using the unified memory requires
 Pascal class or newer GPUs and it is currently only supported on the Linux
 operating system. See
 https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements
 for the detailed requirements.
 
double per_process_gpu_memory_fraction = 1;

public GPUOptions.Builder setPollingActiveDelayUsecs (int value)

 In the event polling loop sleep this many microseconds between
 PollEvents calls, when the queue is not empty.  If value is not
 set or set to 0, gets set to a non-zero default.
 
int32 polling_active_delay_usecs = 6;

public GPUOptions.Builder setPollingInactiveDelayMsecs (int value)

 This field is deprecated and ignored.
 
int32 polling_inactive_delay_msecs = 7;

public GPUOptions.Builder setRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)

public final GPUOptions.Builder setUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields)

public GPUOptions.Builder setVisibleDeviceList (String value)

 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.  For example, if TensorFlow
 can see 8 GPU devices in the process, and one wanted to map
 visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1",
 then one would specify this field as "5,3".  This field is similar in
 spirit to the CUDA_VISIBLE_DEVICES environment variable, except
 it applies to the visible GPU devices in the process.
 NOTE:
 1. The GPU driver provides the process with the visible GPUs
    in an order which is not guaranteed to have any correlation to
    the *physical* GPU id in the machine.  This field is used for
    remapping "visible" to "virtual", which means this operates only
    after the process starts.  Users are required to use vendor
    specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the
    physical to visible device mapping prior to invoking TensorFlow.
 2. In the code, the ids in this list are also called "platform GPU id"s,
    and the 'virtual' ids of GPU devices (i.e. the ids in the device
    name "/device:GPU:<id>") are also called "TF GPU id"s. Please
    refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h
    for more information.
 
string visible_device_list = 5;

public GPUOptions.Builder setVisibleDeviceListBytes (com.google.protobuf.ByteString value)

 A comma-separated list of GPU ids that determines the 'visible'
 to 'virtual' mapping of GPU devices.  For example, if TensorFlow
 can see 8 GPU devices in the process, and one wanted to map
 visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1",
 then one would specify this field as "5,3".  This field is similar in
 spirit to the CUDA_VISIBLE_DEVICES environment variable, except
 it applies to the visible GPU devices in the process.
 NOTE:
 1. The GPU driver provides the process with the visible GPUs
    in an order which is not guaranteed to have any correlation to
    the *physical* GPU id in the machine.  This field is used for
    remapping "visible" to "virtual", which means this operates only
    after the process starts.  Users are required to use vendor
    specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the
    physical to visible device mapping prior to invoking TensorFlow.
 2. In the code, the ids in this list are also called "platform GPU id"s,
    and the 'virtual' ids of GPU devices (i.e. the ids in the device
    name "/device:GPU:<id>") are also called "TF GPU id"s. Please
    refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h
    for more information.
 
string visible_device_list = 5;