public final class ConfigProto
Session configuration parameters. The system picks appropriate values for fields that are not set.
Protobuf type tensorflow.ConfigProto
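For orientation, a minimal sketch of building a ConfigProto with its nested Builder. The package of the generated classes varies across TensorFlow releases; org.tensorflow.framework is assumed here.

    import org.tensorflow.framework.ConfigProto;

    public class ConfigSketch {
      public static void main(String[] args) {
        // Fields left unset keep their defaults; the system picks appropriate values.
        ConfigProto config = ConfigProto.newBuilder()
            .setAllowSoftPlacement(true)   // fall back to another device when needed
            .setLogDevicePlacement(true)   // log where each op is placed
            .build();
        System.out.println(config);        // prints the explicitly set fields in text format
      }
    }

The serialized form (config.toByteArray()) is what session-creation APIs that accept configuration bytes typically expect.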
Nested Classes
class | ConfigProto.Builder | Session configuration parameters.
class | ConfigProto.Experimental | Everything inside Experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
interface | ConfigProto.ExperimentalOrBuilder
Constants
Public Methods
boolean | containsDeviceCount(String key) Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use.
boolean | equals(Object obj)
boolean | getAllowSoftPlacement() Whether soft placement is allowed.
ClusterDef | getClusterDef() Optional list of all workers to use in this session.
ClusterDefOrBuilder | getClusterDefOrBuilder() Optional list of all workers to use in this session.
static ConfigProto | getDefaultInstance()
ConfigProto | getDefaultInstanceForType()
static final com.google.protobuf.Descriptors.Descriptor | getDescriptor()
Map<String, Integer> | getDeviceCount() Use getDeviceCountMap() instead.
int | getDeviceCountCount() Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use.
Map<String, Integer> | getDeviceCountMap() Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use.
int | getDeviceCountOrDefault(String key, int defaultValue) Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use.
int | getDeviceCountOrThrow(String key) Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use.
String | getDeviceFilters(int index) When any filters are present, sessions will ignore all devices which do not match the filters.
com.google.protobuf.ByteString | getDeviceFiltersBytes(int index) When any filters are present, sessions will ignore all devices which do not match the filters.
int | getDeviceFiltersCount() When any filters are present, sessions will ignore all devices which do not match the filters.
com.google.protobuf.ProtocolStringList | getDeviceFiltersList() When any filters are present, sessions will ignore all devices which do not match the filters.
ConfigProto.Experimental | getExperimental () .tensorflow.ConfigProto.Experimental experimental = 16; |
ConfigProto.ExperimentalOrBuilder | getExperimentalOrBuilder () .tensorflow.ConfigProto.Experimental experimental = 16; |
GPUOptions | getGpuOptions() Options that apply to all GPUs.
GPUOptionsOrBuilder | getGpuOptionsOrBuilder() Options that apply to all GPUs.
GraphOptions | getGraphOptions() Options that apply to all graphs.
GraphOptionsOrBuilder | getGraphOptionsOrBuilder() Options that apply to all graphs.
int | getInterOpParallelismThreads() Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process.
int | getIntraOpParallelismThreads() The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads.
boolean | getIsolateSessionState() If true, any resources such as Variables used in the session will not be shared with other sessions.
boolean | getLogDevicePlacement() Whether device placements should be logged.
long | getOperationTimeoutInMs() Global timeout for all blocking operations in this session.
int | getPlacementPeriod() Assignment of Nodes to Devices is recomputed every placement_period steps until the system warms up (at which point the recomputation typically slows down automatically).
RPCOptions | getRpcOptions() Options that apply when this session uses the distributed runtime.
RPCOptionsOrBuilder | getRpcOptionsOrBuilder() Options that apply when this session uses the distributed runtime.
int | getSerializedSize()
ThreadPoolOptionProto | getSessionInterOpThreadPool(int index) This option is experimental - it may be replaced with a different mechanism in the future.
int | getSessionInterOpThreadPoolCount() This option is experimental - it may be replaced with a different mechanism in the future.
List<ThreadPoolOptionProto> | getSessionInterOpThreadPoolList() This option is experimental - it may be replaced with a different mechanism in the future.
ThreadPoolOptionProtoOrBuilder | getSessionInterOpThreadPoolOrBuilder(int index) This option is experimental - it may be replaced with a different mechanism in the future.
List<? extends ThreadPoolOptionProtoOrBuilder> | getSessionInterOpThreadPoolOrBuilderList() This option is experimental - it may be replaced with a different mechanism in the future.
boolean | getShareClusterDevicesInSession() When true, WorkerSessions are created with device attributes from the full cluster.
final com.google.protobuf.UnknownFieldSet | getUnknownFields()
boolean | getUsePerSessionThreads() If true, use a new set of threads for this session rather than the global pool of threads.
boolean | hasClusterDef() Optional list of all workers to use in this session.
boolean | hasExperimental() .tensorflow.ConfigProto.Experimental experimental = 16;
boolean | hasGpuOptions() Options that apply to all GPUs.
boolean | hasGraphOptions() Options that apply to all graphs.
boolean | hasRpcOptions() Options that apply when this session uses the distributed runtime.
int | hashCode()
final boolean | isInitialized()
static ConfigProto.Builder | newBuilder()
static ConfigProto.Builder | newBuilder(ConfigProto prototype)
ConfigProto.Builder | newBuilderForType()
static ConfigProto | parseDelimitedFrom(InputStream input)
static ConfigProto | parseDelimitedFrom(InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
static ConfigProto | parseFrom(ByteBuffer data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
static ConfigProto | parseFrom(com.google.protobuf.CodedInputStream input)
static ConfigProto | parseFrom(byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
static ConfigProto | parseFrom(ByteBuffer data)
static ConfigProto | parseFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
static ConfigProto | parseFrom(com.google.protobuf.ByteString data)
static ConfigProto | parseFrom(InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
static ConfigProto | parseFrom(com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
static com.google.protobuf.Parser<ConfigProto> | parser()
ConfigProto.Builder | toBuilder()
void | writeTo(com.google.protobuf.CodedOutputStream output)
Inherited Methods
Constants
public static final int ALLOW_SOFT_PLACEMENT_FIELD_NUMBER
Constant Value: 7
public static final int CLUSTER_DEF_FIELD_NUMBER
Constant Value: 14
public static final int DEVICE_COUNT_FIELD_NUMBER
Constant Value: 1
public static final int DEVICE_FILTERS_FIELD_NUMBER
Constant Value: 4
public static final int EXPERIMENTAL_FIELD_NUMBER
Constant Value: 16
public static final int GPU_OPTIONS_FIELD_NUMBER
Constant Value: 6
public static final int GRAPH_OPTIONS_FIELD_NUMBER
Constant Value: 10
public static final int INTER_OP_PARALLELISM_THREADS_FIELD_NUMBER
Constant Value: 5
public static final int INTRA_OP_PARALLELISM_THREADS_FIELD_NUMBER
Constant Value: 2
public static final int ISOLATE_SESSION_STATE_FIELD_NUMBER
Constant Value: 15
public static final int LOG_DEVICE_PLACEMENT_FIELD_NUMBER
Constant Value: 8
public static final int OPERATION_TIMEOUT_IN_MS_FIELD_NUMBER
Constant Value: 11
public static final int PLACEMENT_PERIOD_FIELD_NUMBER
Constant Value: 3
public static final int RPC_OPTIONS_FIELD_NUMBER
Constant Value: 13
public static final int SESSION_INTER_OP_THREAD_POOL_FIELD_NUMBER
Constant Value: 12
public static final int SHARE_CLUSTER_DEVICES_IN_SESSION_FIELD_NUMBER
Constant Value: 17
public static final int USE_PER_SESSION_THREADS_FIELD_NUMBER
Constant Value: 9
Public Methods
public boolean containsDeviceCount(String key)
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
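A short sketch of the generated map-field accessors for device_count, continuing the package assumption above; putDeviceCount is the Builder-side setter that protobuf generates for map fields, not a method of this page:

    ConfigProto config = ConfigProto.newBuilder()
        .putDeviceCount("GPU", 0)   // use no GPU devices in this session
        .putDeviceCount("CPU", 4)
        .build();

    boolean hasGpu = config.containsDeviceCount("GPU");    // true
    int cpus = config.getDeviceCountOrDefault("CPU", 1);   // 4
    int tpus = config.getDeviceCountOrDefault("TPU", 0);   // 0: key absent, default returned
    // config.getDeviceCountOrThrow("TPU") would throw IllegalArgumentException instead.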
public boolean equals(Object obj)
public boolean getAllowSoftPlacement()
Whether soft placement is allowed. If allow_soft_placement is true, an op will be placed on CPU if 1. there is no GPU implementation for the op, or 2. no GPU devices are known or registered, or 3. it needs to be co-located with reftype input(s) which are from CPU.
bool allow_soft_placement = 7;
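For example, a minimal sketch of enabling soft placement so that a GPU-pinned op runs on CPU instead of failing when no GPU kernel or device is available:

    ConfigProto config = ConfigProto.newBuilder()
        .setAllowSoftPlacement(true)
        .build();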
public ClusterDef getClusterDef()
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
public ClusterDefOrBuilder getClusterDefOrBuilder()
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
public static final com.google.protobuf.Descriptors.Descriptor getDescriptor()
public int getDeviceCountCount()
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public Map<String, Integer> getDeviceCountMap()
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public int getDeviceCountOrDefault(String key, int defaultValue)
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public int getDeviceCountOrThrow(String key)
Map from device type name (e.g., "CPU" or "GPU" ) to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
map<string, int32> device_count = 1;
public String getDeviceFilters(int index)
When any filters are present, sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps", "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public com.google.protobuf.ByteString getDeviceFiltersBytes(int index)
When any filters are present, sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps", "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public int getDeviceFiltersCount()
When any filters are present, sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps", "/job:worker/replica:3", etc.
repeated string device_filters = 4;
public com.google.protobuf.ProtocolStringList getDeviceFiltersList()
When any filters are present, sessions will ignore all devices which do not match the filters. Each filter can be partially specified, e.g. "/job:ps", "/job:worker/replica:3", etc.
repeated string device_filters = 4;
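A sketch of the repeated-field accessors; addDeviceFilters is the Builder-side adder protobuf generates for repeated string fields, not a method of this page:

    ConfigProto config = ConfigProto.newBuilder()
        .addDeviceFilters("/job:ps")                 // filters may be partially specified
        .addDeviceFilters("/job:worker/replica:3")
        .build();

    int count = config.getDeviceFiltersCount();      // 2
    String first = config.getDeviceFilters(0);       // "/job:ps"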
public ConfigProto.Experimental getExperimental()
.tensorflow.ConfigProto.Experimental experimental = 16;
public ConfigProto.ExperimentalOrBuilder getExperimentalOrBuilder()
.tensorflow.ConfigProto.Experimental experimental = 16;
public GPUOptions getGpuOptions()
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
public GPUOptionsOrBuilder getGpuOptionsOrBuilder()
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
public GraphOptions getGraphOptions()
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public GraphOptionsOrBuilder getGraphOptionsOrBuilder()
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public int getInterOpParallelismThreads()
Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process. 0 means the system picks an appropriate number. Negative means all operations are performed in caller's thread. Note that the first Session created in the process sets the number of threads for all future sessions unless use_per_session_threads is true or session_inter_op_thread_pool is configured.
int32 inter_op_parallelism_threads = 5;
public int getIntraOpParallelismThreads()
The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads. 0 means the system picks an appropriate number. If you create an ordinary session, e.g., from Python or C++, then there is exactly one intra op thread pool per process. The first session created determines the number of threads in this pool. All subsequent sessions reuse/share this one global pool. There are notable exceptions to the default behavior described above: 1. There is an environment variable for overriding this thread pool, named TF_OVERRIDE_GLOBAL_THREADPOOL. 2. When connecting to a server, such as a remote `tf.train.Server` instance, this option will be ignored altogether.
int32 intra_op_parallelism_threads = 2;
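A sketch combining the two thread-pool knobs; as the notes above say, the values chosen by the first session created in the process generally apply to later sessions as well:

    ConfigProto config = ConfigProto.newBuilder()
        .setInterOpParallelismThreads(2)   // pool shared by concurrently runnable ops
        .setIntraOpParallelismThreads(4)   // parallelism inside a single op
        .build();
    // 0 (the default) lets the system pick appropriate sizes for both pools.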
public boolean getIsolateSessionState()
If true, any resources such as Variables used in the session will not be shared with other sessions. However, when clusterspec propagation is enabled, this field is ignored and sessions are always isolated.
bool isolate_session_state = 15;
public boolean getLogDevicePlacement()
Whether device placements should be logged.
bool log_device_placement = 8;
public long getOperationTimeoutInMs()
Global timeout for all blocking operations in this session. If non-zero, and not overridden on a per-operation basis, this value will be used as the deadline for all blocking operations.
int64 operation_timeout_in_ms = 11;
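For example, to make every blocking operation in the session hit a deadline after one minute unless overridden per operation:

    ConfigProto config = ConfigProto.newBuilder()
        .setOperationTimeoutInMs(60_000L)
        .build();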
public com.google.protobuf.Parser<ConfigProto> getParserForType()
public int getPlacementPeriod()
Assignment of Nodes to Devices is recomputed every placement_period steps until the system warms up (at which point the recomputation typically slows down automatically).
int32 placement_period = 3;
public RPCOptions getRpcOptions()
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;
public RPCOptionsOrBuilder getRpcOptionsOrBuilder()
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;
public int getSerializedSize()
public ThreadPoolOptionProto getSessionInterOpThreadPool(int index)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads:
- For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation.
- Using this setting is normally not needed in training, but may help some serving use cases.
- It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public int getSessionInterOpThreadPoolCount()
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads:
- For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation.
- Using this setting is normally not needed in training, but may help some serving use cases.
- It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public List<ThreadPoolOptionProto> getSessionInterOpThreadPoolList()
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads:
- For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation.
- Using this setting is normally not needed in training, but may help some serving use cases.
- It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public ThreadPoolOptionProtoOrBuilder getSessionInterOpThreadPoolOrBuilder(int index)
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads:
- For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation.
- Using this setting is normally not needed in training, but may help some serving use cases.
- It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
public List<? extends ThreadPoolOptionProtoOrBuilder> getSessionInterOpThreadPoolOrBuilderList()
This option is experimental - it may be replaced with a different mechanism in the future. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads:
- For example, a session may be configured to have one large pool (for regular compute) and one small pool (for periodic, low priority work); using the small pool is currently the mechanism for limiting the inter-op parallelism of the low priority work. Note that it does not limit the parallelism of work spawned by a single op kernel implementation.
- Using this setting is normally not needed in training, but may help some serving use cases.
- It is also generally recommended to set the global_name field of this proto, to avoid creating multiple large pools. It is typically better to run the non-low-priority work, even across sessions, in a single large pool.
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
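A sketch of the two-pool setup the comment describes, using addSessionInterOpThreadPool, the Builder-side adder for this repeated field; num_threads and global_name are the ThreadPoolOptionProto fields, and the pool name is illustrative:

    import org.tensorflow.framework.ThreadPoolOptionProto;

    ConfigProto config = ConfigProto.newBuilder()
        // Large pool for regular compute; global_name lets sessions share it.
        .addSessionInterOpThreadPool(ThreadPoolOptionProto.newBuilder()
            .setNumThreads(16)
            .setGlobalName("shared_compute_pool"))   // hypothetical name
        // Small pool that caps inter-op parallelism of low-priority work.
        .addSessionInterOpThreadPool(ThreadPoolOptionProto.newBuilder()
            .setNumThreads(1))
        .build();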
public boolean getShareClusterDevicesInSession()
When true, WorkerSessions are created with device attributes from the full cluster. This is helpful when a worker wants to partition a graph (for example during a PartitionedCallOp).
bool share_cluster_devices_in_session = 17;
public final com.google.protobuf.UnknownFieldSet getUnknownFields()
public boolean getUsePerSessionThreads()
If true, use a new set of threads for this session rather than the global pool of threads. Only supported by direct sessions. If false, use the global threads created by the first session, or the per-session thread pools configured by session_inter_op_thread_pool. This option is deprecated. The same effect can be achieved by setting session_inter_op_thread_pool to have one element, whose num_threads equals inter_op_parallelism_threads.
bool use_per_session_threads = 9;
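Following the deprecation note above, a sketch of the suggested replacement: a single session_inter_op_thread_pool entry whose num_threads equals what inter_op_parallelism_threads would have been:

    int interOpThreads = 8;  // illustrative value
    ConfigProto config = ConfigProto.newBuilder()
        .addSessionInterOpThreadPool(ThreadPoolOptionProto.newBuilder()
            .setNumThreads(interOpThreads))
        .build();
    // Preferred over the deprecated builder.setUsePerSessionThreads(true).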
public boolean hasClusterDef()
Optional list of all workers to use in this session.
.tensorflow.ClusterDef cluster_def = 14;
public boolean hasExperimental()
.tensorflow.ConfigProto.Experimental experimental = 16;
public boolean hasGpuOptions()
Options that apply to all GPUs.
.tensorflow.GPUOptions gpu_options = 6;
public boolean hasGraphOptions()
Options that apply to all graphs.
.tensorflow.GraphOptions graph_options = 10;
public boolean hasRpcOptions()
Options that apply when this session uses the distributed runtime.
.tensorflow.RPCOptions rpc_options = 13;
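Since protobuf getters for unset message fields return a default instance rather than null, the has* methods are how to distinguish "never set" from "set to defaults"; a minimal sketch:

    ConfigProto config = ConfigProto.newBuilder().build();   // no gpu_options set
    if (config.hasGpuOptions()) {                            // false here
      GPUOptions gpuOptions = config.getGpuOptions();
      // ... inspect explicitly configured GPU options
    }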
public int hashCode()
public final boolean isInitialized()
public static ConfigProto parseDelimitedFrom(InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
IOException
public static ConfigProto parseFrom(ByteBuffer data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
InvalidProtocolBufferException
public static ConfigProto parseFrom(byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
InvalidProtocolBufferException
public static ConfigProto parseFrom(com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
IOException
public static ConfigProto parseFrom(InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
IOException
public static ConfigProto parseFrom(com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
InvalidProtocolBufferException
public static com.google.protobuf.Parser<ConfigProto> parser()
public void writeTo(com.google.protobuf.CodedOutputStream output)
Throws
IOException
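A round-trip sketch tying the serialization methods together; writeDelimitedTo is the stream-writing counterpart of parseDelimitedFrom, inherited from the protobuf runtime rather than listed on this page:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;

    ConfigProto config = ConfigProto.newBuilder()
        .setAllowSoftPlacement(true)
        .build();

    ByteArrayOutputStream out = new ByteArrayOutputStream();
    config.writeDelimitedTo(out);                       // length-prefixed wire format
    ConfigProto restored = ConfigProto.parseDelimitedFrom(
        new ByteArrayInputStream(out.toByteArray()));   // throws IOException on malformed input
    assert restored.equals(config);                     // value-based equality, as defined above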