public static final class GPUOptions.Builder
Protobuf type tensorflow.GPUOptions
Public Methods
GPUOptions.Builder | addRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value) |
GPUOptions | build () |
GPUOptions | buildPartial () |
GPUOptions.Builder | clear () |
GPUOptions.Builder | clearAllocatorType () The type of GPU allocation strategy to use. |
GPUOptions.Builder | clearAllowGrowth () If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed. |
GPUOptions.Builder | clearDeferredDeletionBytes () Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. |
GPUOptions.Builder | clearExperimental () Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.Builder | clearField (com.google.protobuf.Descriptors.FieldDescriptor field) |
GPUOptions.Builder | clearForceGpuCompatible () Force all tensors to be gpu_compatible. |
GPUOptions.Builder | clearOneof (com.google.protobuf.Descriptors.OneofDescriptor oneof) |
GPUOptions.Builder | clearPerProcessGpuMemoryFraction () Fraction of the available GPU memory to allocate for each process. |
GPUOptions.Builder | clearPollingActiveDelayUsecs () In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. |
GPUOptions.Builder | clearPollingInactiveDelayMsecs () This field is deprecated and ignored. |
GPUOptions.Builder | clearVisibleDeviceList () A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. |
GPUOptions.Builder | clone () |
String | getAllocatorType () The type of GPU allocation strategy to use. |
com.google.protobuf.ByteString | getAllocatorTypeBytes () The type of GPU allocation strategy to use. |
boolean | getAllowGrowth () If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed. |
GPUOptions | getDefaultInstanceForType () |
long | getDeferredDeletionBytes () Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. |
static final com.google.protobuf.Descriptors.Descriptor | getDescriptor () |
com.google.protobuf.Descriptors.Descriptor | getDescriptorForType () |
GPUOptions.Experimental | getExperimental () Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.Experimental.Builder | getExperimentalBuilder () Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.ExperimentalOrBuilder | getExperimentalOrBuilder () Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
boolean | getForceGpuCompatible () Force all tensors to be gpu_compatible. |
double | getPerProcessGpuMemoryFraction () Fraction of the available GPU memory to allocate for each process. |
int | getPollingActiveDelayUsecs () In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. |
int | getPollingInactiveDelayMsecs () This field is deprecated and ignored. |
String | getVisibleDeviceList () A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. |
com.google.protobuf.ByteString | getVisibleDeviceListBytes () A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. |
boolean | hasExperimental () Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
final boolean | isInitialized () |
GPUOptions.Builder | mergeExperimental (GPUOptions.Experimental value) Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.Builder | mergeFrom (com.google.protobuf.Message other) |
GPUOptions.Builder | mergeFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
final GPUOptions.Builder | mergeUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields) |
GPUOptions.Builder | setAllocatorType (String value) The type of GPU allocation strategy to use. |
GPUOptions.Builder | setAllocatorTypeBytes (com.google.protobuf.ByteString value) The type of GPU allocation strategy to use. |
GPUOptions.Builder | setAllowGrowth (boolean value) If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed. |
GPUOptions.Builder | setDeferredDeletionBytes (long value) Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. |
GPUOptions.Builder | setExperimental (GPUOptions.Experimental.Builder builderForValue) Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.Builder | setExperimental (GPUOptions.Experimental value) Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat. |
GPUOptions.Builder | setField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value) |
GPUOptions.Builder | setForceGpuCompatible (boolean value) Force all tensors to be gpu_compatible. |
GPUOptions.Builder | setPerProcessGpuMemoryFraction (double value) Fraction of the available GPU memory to allocate for each process. |
GPUOptions.Builder | setPollingActiveDelayUsecs (int value) In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. |
GPUOptions.Builder | setPollingInactiveDelayMsecs (int value) This field is deprecated and ignored. |
GPUOptions.Builder | setRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value) |
final GPUOptions.Builder | setUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields) |
GPUOptions.Builder | setVisibleDeviceList (String value) A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. |
GPUOptions.Builder | setVisibleDeviceListBytes (com.google.protobuf.ByteString value) A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. |
Inherited Methods
Public Methods
public GPUOptions.Builder addRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
public GPUOptions.Builder clearAllocatorType ()
The type of GPU allocation strategy to use. Allowed values: "": The empty string (default) uses a system-chosen default which may change over time. "BFC": A "Best-fit with coalescing" algorithm, simplified from a version of dlmalloc.
string allocator_type = 2;
public GPUOptions.Builder clearAllowGrowth ()
If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed.
bool allow_growth = 4;
public GPUOptions.Builder clearDeferredDeletionBytes ()
Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. If 0, the system chooses a reasonable default (several MBs).
int64 deferred_deletion_bytes = 3;
public GPUOptions.Builder clearExperimental ()
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.Builder clearForceGpuCompatible ()
Force all tensors to be gpu_compatible. On a GPU-enabled TensorFlow, enabling this option forces all CPU tensors to be allocated with Cuda pinned memory. Normally, TensorFlow will infer which tensors should be allocated as the pinned memory. But in case where the inference is incomplete, this option can significantly speed up the cross-device memory copy performance as long as it fits the memory. Note that this option is not something that should be enabled by default for unknown or very large models, since all Cuda pinned memory is unpageable, having too much pinned memory might negatively impact the overall host system performance.
bool force_gpu_compatible = 8;
public GPUOptions.Builder clearPerProcessGpuMemoryFraction ()
Fraction of the available GPU memory to allocate for each process. 1 means to allocate all of the GPU memory, 0.5 means the process allocates up to ~50% of the available GPU memory. GPU memory is pre-allocated unless the allow_growth option is enabled. If greater than 1.0, uses CUDA unified memory to potentially oversubscribe the amount of memory available on the GPU device by using host memory as a swap space. Accessing memory not available on the device will be significantly slower as that would require memory transfer between the host and the device. Options to reduce the memory requirement should be considered before enabling this option as this may come with a negative performance impact. Oversubscription using the unified memory requires Pascal class or newer GPUs and it is currently only supported on the Linux operating system. See https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements for the detailed requirements.
double per_process_gpu_memory_fraction = 1;
public GPUOptions.Builder clearPollingActiveDelayUsecs ()
In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. If value is not set or set to 0, gets set to a non-zero default.
int32 polling_active_delay_usecs = 6;
public GPUOptions.Builder clearPollingInactiveDelayMsecs ()
This field is deprecated and ignored.
int32 polling_inactive_delay_msecs = 7;
public GPUOptions.Builder clearVisibleDeviceList ()
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process. NOTE: 1. The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. Users are required to use vendor specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the physical to visible device mapping prior to invoking TensorFlow. 2. In the code, the ids in this list are also called "platform GPU id"s, and the 'virtual' ids of GPU devices (i.e. the ids in the device name "/device:GPU:<id>") are also called "TF GPU id"s. Please refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h for more information.
string visible_device_list = 5;
public String getAllocatorType ()
The type of GPU allocation strategy to use. Allowed values: "": The empty string (default) uses a system-chosen default which may change over time. "BFC": A "Best-fit with coalescing" algorithm, simplified from a version of dlmalloc.
string allocator_type = 2;
public com.google.protobuf.ByteString getAllocatorTypeBytes ()
The type of GPU allocation strategy to use. Allowed values: "": The empty string (default) uses a system-chosen default which may change over time. "BFC": A "Best-fit with coalescing" algorithm, simplified from a version of dlmalloc.
string allocator_type = 2;
public boolean getAllowGrowth ()
If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed.
bool allow_growth = 4;
public long getDeferredDeletionBytes ()
Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. If 0, the system chooses a reasonable default (several MBs).
int64 deferred_deletion_bytes = 3;
public static final com.google.protobuf.Descriptors.Descriptor getDescriptor ()
public com.google.protobuf.Descriptors.Descriptor getDescriptorForType ()
public GPUOptions.Experimental getExperimental ()
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.Experimental.Builder getExperimentalBuilder ()
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.ExperimentalOrBuilder getExperimentalOrBuilder ()
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public boolean getForceGpuCompatible ()
Force all tensors to be gpu_compatible. On a GPU-enabled TensorFlow, enabling this option forces all CPU tensors to be allocated with Cuda pinned memory. Normally, TensorFlow will infer which tensors should be allocated as the pinned memory. But in case where the inference is incomplete, this option can significantly speed up the cross-device memory copy performance as long as it fits the memory. Note that this option is not something that should be enabled by default for unknown or very large models, since all Cuda pinned memory is unpageable, having too much pinned memory might negatively impact the overall host system performance.
bool force_gpu_compatible = 8;
public double getPerProcessGpuMemoryFraction ()
Fraction of the available GPU memory to allocate for each process. 1 means to allocate all of the GPU memory, 0.5 means the process allocates up to ~50% of the available GPU memory. GPU memory is pre-allocated unless the allow_growth option is enabled. If greater than 1.0, uses CUDA unified memory to potentially oversubscribe the amount of memory available on the GPU device by using host memory as a swap space. Accessing memory not available on the device will be significantly slower as that would require memory transfer between the host and the device. Options to reduce the memory requirement should be considered before enabling this option as this may come with a negative performance impact. Oversubscription using the unified memory requires Pascal class or newer GPUs and it is currently only supported on the Linux operating system. See https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements for the detailed requirements.
double per_process_gpu_memory_fraction = 1;
public int getPollingActiveDelayUsecs ()
In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. If value is not set or set to 0, gets set to a non-zero default.
int32 polling_active_delay_usecs = 6;
public int getPollingInactiveDelayMsecs ()
This field is deprecated and ignored.
int32 polling_inactive_delay_msecs = 7;
public String getVisibleDeviceList ()
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process. NOTE: 1. The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. Users are required to use vendor specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the physical to visible device mapping prior to invoking TensorFlow. 2. In the code, the ids in this list are also called "platform GPU id"s, and the 'virtual' ids of GPU devices (i.e. the ids in the device name "/device:GPU:<id>") are also called "TF GPU id"s. Please refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h for more information.
string visible_device_list = 5;
public com.google.protobuf.ByteString getVisibleDeviceListBytes ()
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process. NOTE: 1. The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. Users are required to use vendor specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the physical to visible device mapping prior to invoking TensorFlow. 2. In the code, the ids in this list are also called "platform GPU id"s, and the 'virtual' ids of GPU devices (i.e. the ids in the device name "/device:GPU:<id>") are also called "TF GPU id"s. Please refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h for more information.
string visible_device_list = 5;
public boolean hasExperimental ()
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public final boolean isInitialized ()
public GPUOptions.Builder mergeExperimental (GPUOptions.Experimental value)
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.Builder mergeFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
IOException
public final GPUOptions.Builder mergeUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields)
public GPUOptions.Builder setAllocatorType (String value)
The type of GPU allocation strategy to use. Allowed values: "": The empty string (default) uses a system-chosen default which may change over time. "BFC": A "Best-fit with coalescing" algorithm, simplified from a version of dlmalloc.
string allocator_type = 2;
public GPUOptions.Builder setAllocatorTypeBytes (com.google.protobuf.ByteString value)
The type of GPU allocation strategy to use. Allowed values: "": The empty string (default) uses a system-chosen default which may change over time. "BFC": A "Best-fit with coalescing" algorithm, simplified from a version of dlmalloc.
string allocator_type = 2;
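As a minimal sketch of using these setters (assuming the generated protobuf classes are on the classpath under the `org.tensorflow.framework` package, as in the TensorFlow 1.x Java artifacts), explicitly selecting the BFC allocator might look like:

```java
import org.tensorflow.framework.GPUOptions;

public class AllocatorTypeExample {
    public static void main(String[] args) {
        // Select the "Best-fit with coalescing" allocator explicitly;
        // leaving the field as "" keeps the system-chosen default.
        GPUOptions opts = GPUOptions.newBuilder()
                .setAllocatorType("BFC")
                .build();
        System.out.println(opts.getAllocatorType()); // prints "BFC"
    }
}
```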
public GPUOptions.Builder setAllowGrowth (boolean value)
If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed.
bool allow_growth = 4;
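A short sketch of enabling on-demand growth (again assuming the generated classes in `org.tensorflow.framework`):

```java
import org.tensorflow.framework.GPUOptions;

public class AllowGrowthExample {
    public static void main(String[] args) {
        // Grow the GPU memory region on demand instead of
        // pre-allocating the entire region at startup.
        GPUOptions opts = GPUOptions.newBuilder()
                .setAllowGrowth(true)
                .build();
        System.out.println(opts.getAllowGrowth()); // prints "true"
    }
}
```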
public GPUOptions.Builder setDeferredDeletionBytes (long value)
Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. If 0, the system chooses a reasonable default (several MBs).
int64 deferred_deletion_bytes = 3;
public GPUOptions.Builder setExperimental (GPUOptions.Experimental.Builder builderForValue)
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.Builder setExperimental (GPUOptions.Experimental value)
Everything inside experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
.tensorflow.GPUOptions.Experimental experimental = 9;
public GPUOptions.Builder setField (com.google.protobuf.Descriptors.FieldDescriptor field, Object value)
public GPUOptions.Builder setForceGpuCompatible (boolean value)
Force all tensors to be gpu_compatible. On a GPU-enabled TensorFlow, enabling this option forces all CPU tensors to be allocated with Cuda pinned memory. Normally, TensorFlow will infer which tensors should be allocated as the pinned memory. But in case where the inference is incomplete, this option can significantly speed up the cross-device memory copy performance as long as it fits the memory. Note that this option is not something that should be enabled by default for unknown or very large models, since all Cuda pinned memory is unpageable, having too much pinned memory might negatively impact the overall host system performance.
bool force_gpu_compatible = 8;
public GPUOptions.Builder setPerProcessGpuMemoryFraction (double value)
Fraction of the available GPU memory to allocate for each process. 1 means to allocate all of the GPU memory, 0.5 means the process allocates up to ~50% of the available GPU memory. GPU memory is pre-allocated unless the allow_growth option is enabled. If greater than 1.0, uses CUDA unified memory to potentially oversubscribe the amount of memory available on the GPU device by using host memory as a swap space. Accessing memory not available on the device will be significantly slower as that would require memory transfer between the host and the device. Options to reduce the memory requirement should be considered before enabling this option as this may come with a negative performance impact. Oversubscription using the unified memory requires Pascal class or newer GPUs and it is currently only supported on the Linux operating system. See https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements for the detailed requirements.
double per_process_gpu_memory_fraction = 1;
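For illustration (assuming the `org.tensorflow.framework` generated classes), capping a process at roughly 40% of available GPU memory might look like:

```java
import org.tensorflow.framework.GPUOptions;

public class MemoryFractionExample {
    public static void main(String[] args) {
        // Limit this process to ~40% of the available GPU memory.
        // Values above 1.0 would instead enable CUDA unified-memory
        // oversubscription, with the performance caveats noted above.
        GPUOptions opts = GPUOptions.newBuilder()
                .setPerProcessGpuMemoryFraction(0.4)
                .build();
        System.out.println(opts.getPerProcessGpuMemoryFraction()); // prints "0.4"
    }
}
```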
public GPUOptions.Builder setPollingActiveDelayUsecs (int value)
In the event polling loop sleep this many microseconds between PollEvents calls, when the queue is not empty. If value is not set or set to 0, gets set to a non-zero default.
int32 polling_active_delay_usecs = 6;
public GPUOptions.Builder setPollingInactiveDelayMsecs (int value)
This field is deprecated and ignored.
int32 polling_inactive_delay_msecs = 7;
public GPUOptions.Builder setRepeatedField (com.google.protobuf.Descriptors.FieldDescriptor field, int index, Object value)
public final GPUOptions.Builder setUnknownFields (com.google.protobuf.UnknownFieldSet unknownFields)
public GPUOptions.Builder setVisibleDeviceList (String value)
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process. NOTE: 1. The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. Users are required to use vendor specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the physical to visible device mapping prior to invoking TensorFlow. 2. In the code, the ids in this list are also called "platform GPU id"s, and the 'virtual' ids of GPU devices (i.e. the ids in the device name "/device:GPU:<id>") are also called "TF GPU id"s. Please refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h for more information.
string visible_device_list = 5;
public GPUOptions.Builder setVisibleDeviceListBytes (com.google.protobuf.ByteString value)
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process. NOTE: 1. The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. Users are required to use vendor specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the physical to visible device mapping prior to invoking TensorFlow. 2. In the code, the ids in this list are also called "platform GPU id"s, and the 'virtual' ids of GPU devices (i.e. the ids in the device name "/device:GPU:<id>") are also called "TF GPU id"s. Please refer to third_party/tensorflow/core/common_runtime/gpu/gpu_id.h for more information.
string visible_device_list = 5;
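Putting the pieces together, a hedged end-to-end sketch (assuming the generated `GPUOptions` and `ConfigProto` classes in `org.tensorflow.framework`; `ConfigProto` carries a `gpu_options` field in TensorFlow's config.proto) of the "5,3" remapping example above:

```java
import org.tensorflow.framework.ConfigProto;
import org.tensorflow.framework.GPUOptions;

public class VisibleDeviceListExample {
    public static void main(String[] args) {
        // Map visible GPUs 5 and 3 to /device:GPU:0 and /device:GPU:1,
        // and grow memory on demand.
        GPUOptions gpu = GPUOptions.newBuilder()
                .setVisibleDeviceList("5,3")
                .setAllowGrowth(true)
                .build();
        // Embed the GPU options in a session-level ConfigProto.
        ConfigProto config = ConfigProto.newBuilder()
                .setGpuOptions(gpu)
                .build();
        // config.toByteArray() can then be passed wherever a serialized
        // ConfigProto is expected (e.g., a Session constructor).
        System.out.println(config.getGpuOptions().getVisibleDeviceList()); // prints "5,3"
    }
}
```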