public static final class GPUOptions.Builder
Protobuf type tensorflow.GPUOptions
Public Methods
GPUOptions.Builder | addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value) |
GPUOptions | build() |
GPUOptions | buildPartial() |
GPUOptions.Builder | clear() |
GPUOptions.Builder | clearAllocatorType() The type of GPU allocation strategy to use. |
GPUOptions.Builder | clearAllowGrowth() If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed. |
GPUOptions.Builder | clearDeferredDeletionBytes() Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. |
GPUOptions.Builder | clearExperimental() Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.Builder | clearField(com.google.protobuf.Descriptors.FieldDescriptor field) |
GPUOptions.Builder | clearForceGpuCompatible() Force all tensors to be gpu_compatible. |
GPUOptions.Builder | clearOneof(com.google.protobuf.Descriptors.OneofDescriptor oneof) |
GPUOptions.Builder | clearPerProcessGpuMemoryFraction() Fraction of the available GPU memory to allocate for each process. |
GPUOptions.Builder | clearPollingActiveDelayUsecs() In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. |
GPUOptions.Builder | clearPollingInactiveDelayMsecs() This field is deprecated and ignored. |
GPUOptions.Builder | clearVisibleDeviceList() A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. |
GPUOptions.Builder | clone() |
String | getAllocatorType() The type of GPU allocation strategy to use. |
com.google.protobuf.ByteString | getAllocatorTypeBytes() The type of GPU allocation strategy to use. |
boolean | getAllowGrowth() If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed. |
GPUOptions | getDefaultInstanceForType() |
long | getDeferredDeletionBytes() Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. |
static final com.google.protobuf.Descriptors.Descriptor | getDescriptor() |
com.google.protobuf.Descriptors.Descriptor | getDescriptorForType() |
GPUOptions.Experimental | getExperimental() Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.Experimental.Builder | getExperimentalBuilder() Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.ExperimentalOrBuilder | getExperimentalOrBuilder() Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
boolean | getForceGpuCompatible() Force all tensors to be gpu_compatible. |
double | getPerProcessGpuMemoryFraction() Fraction of the available GPU memory to allocate for each process. |
int | getPollingActiveDelayUsecs() In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. |
int | getPollingInactiveDelayMsecs() This field is deprecated and ignored. |
String | getVisibleDeviceList() A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. |
com.google.protobuf.ByteString | getVisibleDeviceListBytes() A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. |
boolean | hasExperimental() Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
final boolean | isInitialized() |
GPUOptions.Builder | mergeExperimental(GPUOptions.Experimental value) Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.Builder | mergeFrom(com.google.protobuf.Message other) |
GPUOptions.Builder | mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
final GPUOptions.Builder | mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields) |
GPUOptions.Builder | setAllocatorType(String value) The type of GPU allocation strategy to use. |
GPUOptions.Builder | setAllocatorTypeBytes(com.google.protobuf.ByteString value) The type of GPU allocation strategy to use. |
GPUOptions.Builder | setAllowGrowth(boolean value) If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed. |
GPUOptions.Builder | setDeferredDeletionBytes(long value) Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. |
GPUOptions.Builder | setExperimental(GPUOptions.Experimental.Builder builderForValue) Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.Builder | setExperimental(GPUOptions.Experimental value) Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.Builder | setField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value) |
GPUOptions.Builder | setForceGpuCompatible(boolean value) Force all tensors to be gpu_compatible. |
GPUOptions.Builder | setPerProcessGpuMemoryFraction(double value) Fraction of the available GPU memory to allocate for each process. |
GPUOptions.Builder | setPollingActiveDelayUsecs(int value) In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. |
GPUOptions.Builder | setPollingInactiveDelayMsecs(int value) This field is deprecated and ignored. |
GPUOptions.Builder | setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value) |
final GPUOptions.Builder | setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields) |
GPUOptions.Builder | setVisibleDeviceList(String value) A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. |
GPUOptions.Builder | setVisibleDeviceListBytes(com.google.protobuf.ByteString value) A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. |
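A minimal usage sketch of the builder pattern above. It assumes the protobuf classes generated from tensorflow/core/protobuf/config.proto (the org.tensorflow.framework package shipped with the TensorFlow Java API) are on the classpath; the class name GpuOptionsBuilderExample is illustrative only.

```java
// Sketch only: assumes the org.tensorflow.framework protobuf classes
// (generated from tensorflow/core/protobuf/config.proto) are available.
import org.tensorflow.framework.ConfigProto;
import org.tensorflow.framework.GPUOptions;

public class GpuOptionsBuilderExample {
    public static void main(String[] args) {
        GPUOptions gpuOptions = GPUOptions.newBuilder()
                .setAllowGrowth(true)                // grow allocations on demand
                .setPerProcessGpuMemoryFraction(0.5) // cap at ~50% of GPU memory
                .setVisibleDeviceList("0")           // expose only platform GPU 0
                .build();

        // GPUOptions is normally embedded in a ConfigProto used at session creation.
        ConfigProto config = ConfigProto.newBuilder()
                .setGpuOptions(gpuOptions)
                .build();

        System.out.println(config.getGpuOptions().getAllowGrowth()); // true
    }
}
```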
Inherited Methods
Public Methods
public GPUOptions.Builder addRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
public GPUOptions.Builder clearAllocatorType()
The type of GPU allocation strategy to use. Allowed values: "": The empty string (default) uses a system-chosen default which may change over time. "BFC": A "Best-fit with coalescing" algorithm, simplified from a version of dlmalloc.
string allocator_type = 2;
public GPUOptions.Builder clearAllowGrowth()
If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed.
bool allow_growth = 4;
public GPUOptions.Builder clearDeferredDeletionBytes()
Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. If 0, the system chooses a reasonable default (several MBs).
int64 deferred_deletion_bytes = 3;
public GPUOptions.Builder clearExperimental()
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.Builder clearForceGpuCompatible()
Force all tensors to be gpu_compatible. On a GPU-enabled TensorFlow, enabling this option forces all CPU tensors to be allocated with Cuda pinned memory. Normally, TensorFlow will infer which tensors should be allocated as the pinned memory. But in case where the inference is incomplete, this option can significantly speed up the cross-device memory copy performance as long as it fits the memory. Note that this option is not something that should be enabled by default for unknown or very large models, since all Cuda pinned memory is unpageable, having too much pinned memory might negatively impact the overall host system performance.
bool force_gpu_compatible = 8;
public GPUOptions.Builder clearPerProcessGpuMemoryFraction()
Fraction of the available GPU memory to allocate for each process. 1 means to allocate all of the GPU memory, 0.5 means the process allocates up to ~50% of the available GPU memory. GPU memory is pre-allocated unless the allow_growth option is enabled. If greater than 1.0, uses CUDA unified memory to potentially oversubscribe the amount of memory available on the GPU device by using host memory as a swap space. Accessing memory not available on the device will be significantly slower as that would require memory transfer between the host and the device. Options to reduce the memory requirement should be considered before enabling this option as this may come with a negative performance impact. Oversubscription using the unified memory requires Pascal class or newer GPUs and it is currently only supported on the Linux operating system. See https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements for the detailed requirements.
double per_process_gpu_memory_fraction = 1;
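The arithmetic behind this field can be illustrated without TensorFlow. The helper below is hypothetical and mirrors only the documented semantics: fraction × total device memory is the allocation cap, and values above 1.0 imply unified-memory oversubscription.

```java
public class MemoryFractionDemo {
    // Hypothetical helper mirroring the documented semantics of
    // per_process_gpu_memory_fraction: the process may allocate up to
    // fraction * total device memory.
    static long allocationCapBytes(double fraction, long totalDeviceBytes) {
        return (long) (fraction * totalDeviceBytes);
    }

    public static void main(String[] args) {
        long total = 8L * 1024 * 1024 * 1024; // an assumed 8 GiB device
        // 0.5 -> the process allocates up to ~50% of the GPU memory.
        System.out.println(allocationCapBytes(0.5, total));          // 4294967296
        // > 1.0 -> cap exceeds device memory, i.e. oversubscription
        // via CUDA unified memory (Pascal or newer, Linux only).
        System.out.println(allocationCapBytes(1.5, total) > total);  // true
    }
}
```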
public GPUOptions.Builder clearPollingActiveDelayUsecs()
In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. If value is not set or set to 0, gets set to a non-zero default.
int32 polling_active_delay_usecs = 6;
public GPUOptions.Builder clearPollingInactiveDelayMsecs()
This field is deprecated and ignored.
int32 polling_inactive_delay_msecs = 7;
public GPUOptions.Builder clearVisibleDeviceList()
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process. NOTE: 1. The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. Users are required to use vendor specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the physical to visible device mapping prior to invoking TensorFlow. 2. In the code, the ids in this list are also called "platform GPU id"s, and the 'virtual' ids of GPU devices (i.e. the ids in the device name "/device:GPU:<id>") are also called "TF GPU id"s. Please refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h for more information.
string visible_device_list = 5;
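The 'visible' to 'virtual' remapping described above can be sketched in plain Java, with no TensorFlow dependency; the device-name format follows the field documentation.

```java
public class VisibleDeviceListDemo {
    public static void main(String[] args) {
        // With visible_device_list = "5,3", platform GPU 5 maps to
        // /device:GPU:0 and platform GPU 3 maps to /device:GPU:1.
        String visibleDeviceList = "5,3";
        String[] platformGpuIds = visibleDeviceList.split(",");
        for (int tfGpuId = 0; tfGpuId < platformGpuIds.length; tfGpuId++) {
            System.out.println("/device:GPU:" + tfGpuId
                    + " -> platform GPU " + platformGpuIds[tfGpuId]);
        }
    }
}
```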
public String getAllocatorType()
The type of GPU allocation strategy to use. Allowed values: "": The empty string (default) uses a system-chosen default which may change over time. "BFC": A "Best-fit with coalescing" algorithm, simplified from a version of dlmalloc.
string allocator_type = 2;
public com.google.protobuf.ByteString getAllocatorTypeBytes()
The type of GPU allocation strategy to use. Allowed values: "": The empty string (default) uses a system-chosen default which may change over time. "BFC": A "Best-fit with coalescing" algorithm, simplified from a version of dlmalloc.
string allocator_type = 2;
public boolean getAllowGrowth()
If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed.
bool allow_growth = 4;
public long getDeferredDeletionBytes()
Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. If 0, the system chooses a reasonable default (several MBs).
int64 deferred_deletion_bytes = 3;
public static final com.google.protobuf.Descriptors.Descriptor getDescriptor()
public com.google.protobuf.Descriptors.Descriptor getDescriptorForType()
public GPUOptions.Experimental getExperimental()
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.Experimental.Builder getExperimentalBuilder()
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.ExperimentalOrBuilder getExperimentalOrBuilder()
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public boolean getForceGpuCompatible()
Force all tensors to be gpu_compatible. On a GPU-enabled TensorFlow, enabling this option forces all CPU tensors to be allocated with Cuda pinned memory. Normally, TensorFlow will infer which tensors should be allocated as the pinned memory. But in case where the inference is incomplete, this option can significantly speed up the cross-device memory copy performance as long as it fits the memory. Note that this option is not something that should be enabled by default for unknown or very large models, since all Cuda pinned memory is unpageable, having too much pinned memory might negatively impact the overall host system performance.
bool force_gpu_compatible = 8;
public double getPerProcessGpuMemoryFraction()
Fraction of the available GPU memory to allocate for each process. 1 means to allocate all of the GPU memory, 0.5 means the process allocates up to ~50% of the available GPU memory. GPU memory is pre-allocated unless the allow_growth option is enabled. If greater than 1.0, uses CUDA unified memory to potentially oversubscribe the amount of memory available on the GPU device by using host memory as a swap space. Accessing memory not available on the device will be significantly slower as that would require memory transfer between the host and the device. Options to reduce the memory requirement should be considered before enabling this option as this may come with a negative performance impact. Oversubscription using the unified memory requires Pascal class or newer GPUs and it is currently only supported on the Linux operating system. See https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements for the detailed requirements.
double per_process_gpu_memory_fraction = 1;
public int getPollingActiveDelayUsecs()
In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. If value is not set or set to 0, gets set to a non-zero default.
int32 polling_active_delay_usecs = 6;
public int getPollingInactiveDelayMsecs()
This field is deprecated and ignored.
int32 polling_inactive_delay_msecs = 7;
public String getVisibleDeviceList()
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process. NOTE: 1. The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. Users are required to use vendor specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the physical to visible device mapping prior to invoking TensorFlow. 2. In the code, the ids in this list are also called "platform GPU id"s, and the 'virtual' ids of GPU devices (i.e. the ids in the device name "/device:GPU:<id>") are also called "TF GPU id"s. Please refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h for more information.
string visible_device_list = 5;
public com.google.protobuf.ByteString getVisibleDeviceListBytes()
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process. NOTE: 1. The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. Users are required to use vendor specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the physical to visible device mapping prior to invoking TensorFlow. 2. In the code, the ids in this list are also called "platform GPU id"s, and the 'virtual' ids of GPU devices (i.e. the ids in the device name "/device:GPU:<id>") are also called "TF GPU id"s. Please refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h for more information.
string visible_device_list = 5;
public boolean hasExperimental()
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public final boolean isInitialized()
public GPUOptions.Builder mergeExperimental(GPUOptions.Experimental value)
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.Builder mergeFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
IOException
public final GPUOptions.Builder mergeUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
public GPUOptions.Builder setAllocatorType(String value)
The type of GPU allocation strategy to use. Allowed values: "": The empty string (default) uses a system-chosen default which may change over time. "BFC": A "Best-fit with coalescing" algorithm, simplified from a version of dlmalloc.
string allocator_type = 2;
public GPUOptions.Builder setAllocatorTypeBytes(com.google.protobuf.ByteString value)
The type of GPU allocation strategy to use. Allowed values: "": The empty string (default) uses a system-chosen default which may change over time. "BFC": A "Best-fit with coalescing" algorithm, simplified from a version of dlmalloc.
string allocator_type = 2;
public GPUOptions.Builder setAllowGrowth(boolean value)
If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed.
bool allow_growth = 4;
public GPUOptions.Builder setDeferredDeletionBytes(long value)
Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. If 0, the system chooses a reasonable default (several MBs).
int64 deferred_deletion_bytes = 3;
public GPUOptions.Builder setExperimental(GPUOptions.Experimental.Builder builderForValue)
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.Builder setExperimental(GPUOptions.Experimental value)
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.Builder setField(com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
public GPUOptions.Builder setForceGpuCompatible(boolean value)
Force all tensors to be gpu_compatible. On a GPU-enabled TensorFlow, enabling this option forces all CPU tensors to be allocated with Cuda pinned memory. Normally, TensorFlow will infer which tensors should be allocated as the pinned memory. But in case where the inference is incomplete, this option can significantly speed up the cross-device memory copy performance as long as it fits the memory. Note that this option is not something that should be enabled by default for unknown or very large models, since all Cuda pinned memory is unpageable, having too much pinned memory might negatively impact the overall host system performance.
bool force_gpu_compatible = 8;
public GPUOptions.Builder setPerProcessGpuMemoryFraction(double value)
Fraction of the available GPU memory to allocate for each process. 1 means to allocate all of the GPU memory, 0.5 means the process allocates up to ~50% of the available GPU memory. GPU memory is pre-allocated unless the allow_growth option is enabled. If greater than 1.0, uses CUDA unified memory to potentially oversubscribe the amount of memory available on the GPU device by using host memory as a swap space. Accessing memory not available on the device will be significantly slower as that would require memory transfer between the host and the device. Options to reduce the memory requirement should be considered before enabling this option as this may come with a negative performance impact. Oversubscription using the unified memory requires Pascal class or newer GPUs and it is currently only supported on the Linux operating system. See https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements for the detailed requirements.
double per_process_gpu_memory_fraction = 1;
public GPUOptions.Builder setPollingActiveDelayUsecs(int value)
In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. If value is not set or set to 0, gets set to a non-zero default.
int32 polling_active_delay_usecs = 6;
public GPUOptions.Builder setPollingInactiveDelayMsecs(int value)
This field is deprecated and ignored.
int32 polling_inactive_delay_msecs = 7;
public GPUOptions.Builder setRepeatedField(com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)
public final GPUOptions.Builder setUnknownFields(com.google.protobuf.UnknownFieldSet unknownFields)
public GPUOptions.Builder setVisibleDeviceList(String value)
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process. NOTE: 1. The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. Users are required to use vendor specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the physical to visible device mapping prior to invoking TensorFlow. 2. In the code, the ids in this list are also called "platform GPU id"s, and the 'virtual' ids of GPU devices (i.e. the ids in the device name "/device:GPU:<id>") are also called "TF GPU id"s. Please refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h for more information.
string visible_device_list = 5;
public GPUOptions.Builder setVisibleDeviceListBytes(com.google.protobuf.ByteString value)
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process. NOTE: 1. The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. Users are required to use vendor specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the physical to visible device mapping prior to invoking TensorFlow. 2. In the code, the ids in this list are also called "platform GPU id"s, and the 'virtual' ids of GPU devices (i.e. the ids in the device name "/device:GPU:<id>") are also called "TF GPU id"s. Please refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h for more information.
string visible_device_list = 5;