ConfigProtoOrBuilder

public interface ConfigProtoOrBuilder
Known Indirect Subclasses

Public Methods

abstract boolean
containsDeviceCount (String key)
 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.
abstract boolean
getAllowSoftPlacement ()
 Whether soft placement is allowed.
abstract ClusterDef
getClusterDef ()
 Optional list of all workers to use in this session.
abstract ClusterDefOrBuilder
getClusterDefOrBuilder ()
 Optional list of all workers to use in this session.
abstract Map<String, Integer>
getDeviceCount ()
Use getDeviceCountMap() instead.
abstract int
getDeviceCountCount ()
 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.
abstract Map<String, Integer>
getDeviceCountMap ()
 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.
abstract int
getDeviceCountOrDefault (String key, int defaultValue)
 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.
abstract int
getDeviceCountOrThrow (String key)
 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.
abstract String
getDeviceFilters (int index)
 When any filters are present sessions will ignore all devices which do not
 match the filters.
abstract com.google.protobuf.ByteString
getDeviceFiltersBytes (int index)
 When any filters are present sessions will ignore all devices which do not
 match the filters.
abstract int
getDeviceFiltersCount ()
 When any filters are present sessions will ignore all devices which do not
 match the filters.
abstract List<String>
getDeviceFiltersList ()
 When any filters are present sessions will ignore all devices which do not
 match the filters.
abstract ConfigProto.Experimental
getExperimental ()
.tensorflow.ConfigProto.Experimental experimental = 16;
abstract ConfigProto.ExperimentalOrBuilder
getExperimentalOrBuilder ()
.tensorflow.ConfigProto.Experimental experimental = 16;
abstract GPUOptions
getGpuOptions ()
 Options that apply to all GPUs.
abstract GPUOptionsOrBuilder
getGpuOptionsOrBuilder ()
 Options that apply to all GPUs.
abstract GraphOptions
getGraphOptions ()
 Options that apply to all graphs.
abstract GraphOptionsOrBuilder
getGraphOptionsOrBuilder ()
 Options that apply to all graphs.
abstract int
getInterOpParallelismThreads ()
 Nodes that perform blocking operations are enqueued on a pool of
 inter_op_parallelism_threads available in each process.
abstract int
getIntraOpParallelismThreads ()
 The execution of an individual op (for some op types) can be
 parallelized on a pool of intra_op_parallelism_threads.
abstract boolean
getIsolateSessionState ()
 If true, any resources such as Variables used in the session will not be
 shared with other sessions.
abstract boolean
getLogDevicePlacement ()
 Whether device placements should be logged.
abstract long
getOperationTimeoutInMs ()
 Global timeout for all blocking operations in this session.
abstract int
getPlacementPeriod ()
 Assignment of Nodes to Devices is recomputed every placement_period
 steps until the system warms up (at which point the recomputation
 typically slows down automatically).
abstract RPCOptions
getRpcOptions ()
 Options that apply when this session uses the distributed runtime.
abstract RPCOptionsOrBuilder
getRpcOptionsOrBuilder ()
 Options that apply when this session uses the distributed runtime.
abstract ThreadPoolOptionProto
getSessionInterOpThreadPool (int index)
 This option is experimental - it may be replaced with a different mechanism
 in the future.
abstract int
getSessionInterOpThreadPoolCount ()
 This option is experimental - it may be replaced with a different mechanism
 in the future.
abstract List<ThreadPoolOptionProto>
getSessionInterOpThreadPoolList ()
 This option is experimental - it may be replaced with a different mechanism
 in the future.
abstract ThreadPoolOptionProtoOrBuilder
getSessionInterOpThreadPoolOrBuilder (int index)
 This option is experimental - it may be replaced with a different mechanism
 in the future.
abstract List<? extends ThreadPoolOptionProtoOrBuilder>
getSessionInterOpThreadPoolOrBuilderList ()
 This option is experimental - it may be replaced with a different mechanism
 in the future.
abstract boolean
getShareClusterDevicesInSession ()
 When true, WorkerSessions are created with device attributes from the
 full cluster.
abstract boolean
getUsePerSessionThreads ()
 If true, use a new set of threads for this session rather than the global
 pool of threads.
abstract boolean
hasClusterDef ()
 Optional list of all workers to use in this session.
abstract boolean
hasExperimental ()
.tensorflow.ConfigProto.Experimental experimental = 16;
abstract boolean
hasGpuOptions ()
 Options that apply to all GPUs.
abstract boolean
hasGraphOptions ()
 Options that apply to all graphs.
abstract boolean
hasRpcOptions ()
 Options that apply when this session uses the distributed runtime.

Public Methods

public abstract boolean containsDeviceCount (String key)

 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.  If a particular device
 type is not found in the map, the system picks an appropriate
 number.
 
map<string, int32> device_count = 1;
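
A minimal sketch of how this map is typically written and read through the generated builder API (assuming the standard protobuf-Java accessors listed on this page; the generated package name, e.g. org.tensorflow.framework vs. org.tensorflow.proto.framework, depends on the TensorFlow Java artifact):

// Limit the session to 4 CPU devices and 1 GPU device.
// Package may differ between TensorFlow Java releases.
import org.tensorflow.framework.ConfigProto;

public class DeviceCountExample {
  public static void main(String[] args) {
    ConfigProto config = ConfigProto.newBuilder()
        .putDeviceCount("CPU", 4)
        .putDeviceCount("GPU", 1)
        .build();

    // Read the map back through the OrBuilder accessors.
    boolean hasGpu = config.containsDeviceCount("GPU");        // true
    int cpuCount  = config.getDeviceCountOrDefault("CPU", 0);  // 4
    int tpuCount  = config.getDeviceCountOrDefault("TPU", 0);  // 0 (not set)
    System.out.println(hasGpu + " " + cpuCount + " " + tpuCount);
  }
}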

public abstract boolean getAllowSoftPlacement ()

 Whether soft placement is allowed. If allow_soft_placement is true,
 an op will be placed on CPU if
   1. there's no GPU implementation for the OP
 or
   2. no GPU devices are known or registered
 or
   3. need to co-locate with reftype input(s) which are from CPU.
 
bool allow_soft_placement = 7;
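
A short sketch (same assumed builder API and import as the device_count example above): let ops without a GPU kernel fall back to CPU instead of failing placement.

ConfigProto config = ConfigProto.newBuilder()
    .setAllowSoftPlacement(true)
    .build();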

public abstract ClusterDef getClusterDef ()

 Optional list of all workers to use in this session.
 
.tensorflow.ClusterDef cluster_def = 14;

public abstract ClusterDefOrBuilder getClusterDefOrBuilder ()

 Optional list of all workers to use in this session.
 
.tensorflow.ClusterDef cluster_def = 14;
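
A hedged sketch of attaching a cluster definition, assuming the ClusterDef and JobDef classes generated from cluster.proto in the same package as ConfigProto; the job name and addresses below are purely illustrative.

ClusterDef cluster = ClusterDef.newBuilder()
    .addJob(JobDef.newBuilder()
        .setName("worker")
        .putTasks(0, "worker0.example.com:2222")   // hypothetical addresses
        .putTasks(1, "worker1.example.com:2222"))
    .build();

ConfigProto config = ConfigProto.newBuilder()
    .setClusterDef(cluster)
    .build();
boolean defined = config.hasClusterDef();   // true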

public abstract Map<String, Integer> getDeviceCount ()

Use getDeviceCountMap() instead.

public abstract int getDeviceCountCount ()

 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.  If a particular device
 type is not found in the map, the system picks an appropriate
 number.
 
map<string, int32> device_count = 1;

public abstract Map<String, Integer> getDeviceCountMap ()

 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.  If a particular device
 type is not found in the map, the system picks an appropriate
 number.
 
map<string, int32> device_count = 1;

public abstract int getDeviceCountOrDefault (String key, int defaultValue)

 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.  If a particular device
 type is not found in the map, the system picks an appropriate
 number.
 
map<string, int32> device_count = 1;

public abstract int getDeviceCountOrThrow (String key)

 Map from device type name (e.g., "CPU" or "GPU" ) to maximum
 number of devices of that type to use.  If a particular device
 type is not found in the map, the system picks an appropriate
 number.
 
map<string, int32> device_count = 1;

public abstract String getDeviceFilters (int index)

 When any filters are present sessions will ignore all devices which do not
 match the filters. Each filter can be partially specified, e.g. "/job:ps"
 "/job:worker/replica:3", etc.
 
repeated string device_filters = 4;

public abstract com.google.protobuf.ByteString getDeviceFiltersBytes (int index)

 When any filters are present sessions will ignore all devices which do not
 match the filters. Each filter can be partially specified, e.g. "/job:ps"
 "/job:worker/replica:3", etc.
 
repeated string device_filters = 4;

public abstract int getDeviceFiltersCount ()

 When any filters are present sessions will ignore all devices which do not
 match the filters. Each filter can be partially specified, e.g. "/job:ps"
 "/job:worker/replica:3", etc.
 
repeated string device_filters = 4;

public abstract List<String> getDeviceFiltersList ()

 When any filters are present sessions will ignore all devices which do not
 match the filters. Each filter can be partially specified, e.g. "/job:ps"
 "/job:worker/replica:3", etc.
 
repeated string device_filters = 4;
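
A minimal sketch (assumed builder API) that restricts the session to the parameter-server job and one worker replica; the filter strings mirror the examples in the comment above.

ConfigProto config = ConfigProto.newBuilder()
    .addDeviceFilters("/job:ps")
    .addDeviceFilters("/job:worker/replica:3")
    .build();
int count    = config.getDeviceFiltersCount();  // 2
String first = config.getDeviceFilters(0);      // "/job:ps"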

public abstract ConfigProto.Experimental getExperimental ()

.tensorflow.ConfigProto.Experimental experimental = 16;

public abstract ConfigProto.ExperimentalOrBuilder getExperimentalOrBuilder ()

.tensorflow.ConfigProto.Experimental experimental = 16;

public abstract GPUOptions getGpuOptions ()

 Options that apply to all GPUs.
 
.tensorflow.GPUOptions gpu_options = 6;

public abstract GPUOptionsOrBuilder getGpuOptionsOrBuilder ()

 Options that apply to all GPUs.
 
.tensorflow.GPUOptions gpu_options = 6;
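
A hedged sketch of populating this field, assuming the allow_growth and per_process_gpu_memory_fraction fields of GPUOptions and their generated setters.

ConfigProto config = ConfigProto.newBuilder()
    .setGpuOptions(GPUOptions.newBuilder()
        .setAllowGrowth(true)                    // grow GPU memory on demand
        .setPerProcessGpuMemoryFraction(0.5))    // cap at ~50% per GPU
    .build();
GPUOptionsOrBuilder gpu = config.getGpuOptionsOrBuilder();  // read back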

public abstract GraphOptions getGraphOptions ()

 Options that apply to all graphs.
 
.tensorflow.GraphOptions graph_options = 10;

public abstract GraphOptionsOrBuilder getGraphOptionsOrBuilder ()

 Options that apply to all graphs.
 
.tensorflow.GraphOptions graph_options = 10;

public abstract int getInterOpParallelismThreads ()

 Nodes that perform blocking operations are enqueued on a pool of
 inter_op_parallelism_threads available in each process.
 0 means the system picks an appropriate number.
 Negative means all operations are performed in caller's thread.
 Note that the first Session created in the process sets the
 number of threads for all future sessions unless use_per_session_threads is
 true or session_inter_op_thread_pool is configured.
 
int32 inter_op_parallelism_threads = 5;
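
A minimal sketch (assumed builder API): size the inter-op pool explicitly; 0 lets the system pick and a negative value runs every op on the caller's thread.

ConfigProto config = ConfigProto.newBuilder()
    .setInterOpParallelismThreads(2)
    .build();
int interOp = config.getInterOpParallelismThreads();  // 2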

public abstract int getIntraOpParallelismThreads ()

 The execution of an individual op (for some op types) can be
 parallelized on a pool of intra_op_parallelism_threads.
 0 means the system picks an appropriate number.
 If you create an ordinary session, e.g., from Python or C++,
 then there is exactly one intra op thread pool per process.
 The first session created determines the number of threads in this pool.
 All subsequent sessions reuse/share this one global pool.
 There are notable exceptions to the default behavior described above:
 1. There is an environment variable for overriding this thread pool,
    named TF_OVERRIDE_GLOBAL_THREADPOOL.
 2. When connecting to a server, such as a remote `tf.train.Server`
    instance, then this option will be ignored altogether.
 
int32 intra_op_parallelism_threads = 2;
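
A minimal sketch (assumed builder API): a single intra-op thread, e.g. to avoid oversubscription when several processes share one machine. As noted above, the first session created in a process normally fixes the global pool size for later sessions.

ConfigProto config = ConfigProto.newBuilder()
    .setIntraOpParallelismThreads(1)
    .build();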

public abstract boolean getIsolateSessionState ()

 If true, any resources such as Variables used in the session will not be
 shared with other sessions. However, when clusterspec propagation is
 enabled, this field is ignored and sessions are always isolated.
 
bool isolate_session_state = 15;

public abstract boolean getLogDevicePlacement ()

 Whether device placements should be logged.
 
bool log_device_placement = 8;

public abstract long getOperationTimeoutInMs ()

 Global timeout for all blocking operations in this session.  If non-zero,
 and not overridden on a per-operation basis, this value will be used as the
 deadline for all blocking operations.
 
int64 operation_timeout_in_ms = 11;
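
A minimal sketch (assumed builder API): fail any blocking operation in this session after 60 seconds unless a per-operation deadline overrides it.

ConfigProto config = ConfigProto.newBuilder()
    .setOperationTimeoutInMs(60_000L)
    .build();
long timeoutMs = config.getOperationTimeoutInMs();  // 60000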

public abstract int getPlacementPeriod ()

 Assignment of Nodes to Devices is recomputed every placement_period
 steps until the system warms up (at which point the recomputation
 typically slows down automatically).
 
int32 placement_period = 3;

public abstract RPCOptions getRpcOptions ()

 Options that apply when this session uses the distributed runtime.
 
.tensorflow.RPCOptions rpc_options = 13;

public abstract RPCOptionsOrBuilder getRpcOptionsOrBuilder ()

 Options that apply when this session uses the distributed runtime.
 
.tensorflow.RPCOptions rpc_options = 13;

public abstract ThreadPoolOptionProto getSessionInterOpThreadPool (int index)

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public abstract int getSessionInterOpThreadPoolCount ()

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public abstract List< ThreadPoolOptionProto > getSessionInterOpThreadPoolList ()

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public abstract ThreadPoolOptionProtoOrBuilder getSessionInterOpThreadPoolOrBuilder (int index)

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;

public abstract List<? extends ThreadPoolOptionProtoOrBuilder> getSessionInterOpThreadPoolOrBuilderList ()

 This option is experimental - it may be replaced with a different mechanism
 in the future.
 Configures session thread pools. If this is configured, then RunOptions for
 a Run call can select the thread pool to use.
 The intended use is for when some session invocations need to run in a
 background pool limited to a small number of threads:
 - For example, a session may be configured to have one large pool (for
 regular compute) and one small pool (for periodic, low priority work);
 using the small pool is currently the mechanism for limiting the inter-op
 parallelism of the low priority work.  Note that it does not limit the
 parallelism of work spawned by a single op kernel implementation.
 - Using this setting is normally not needed in training, but may help some
 serving use cases.
 - It is also generally recommended to set the global_name field of this
 proto, to avoid creating multiple large pools. It is typically better to
 run the non-low-priority work, even across sessions, in a single large
 pool.
 
repeated .tensorflow.ThreadPoolOptionProto session_inter_op_thread_pool = 12;
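
A hedged sketch of the "one large pool plus one small low-priority pool" setup described above, assuming the generated ThreadPoolOptionProto (num_threads, global_name) and RunOptions (inter_op_thread_pool) accessors from the same package; names and sizes are illustrative.

ConfigProto config = ConfigProto.newBuilder()
    .addSessionInterOpThreadPool(ThreadPoolOptionProto.newBuilder()
        .setNumThreads(8)
        .setGlobalName("shared_compute_pool"))  // global_name set so the large pool is shared
    .addSessionInterOpThreadPool(ThreadPoolOptionProto.newBuilder()
        .setNumThreads(1))                      // small pool for low-priority work
    .build();

// A Run call can then select the small pool by index:
RunOptions lowPriority = RunOptions.newBuilder()
    .setInterOpThreadPool(1)
    .build();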

public abstract boolean getShareClusterDevicesInSession ()

 When true, WorkerSessions are created with device attributes from the
 full cluster.
 This is helpful when a worker wants to partition a graph
 (for example during a PartitionedCallOp).
 
bool share_cluster_devices_in_session = 17;

public abstract boolean getUsePerSessionThreads ()

 If true, use a new set of threads for this session rather than the global
 pool of threads. Only supported by direct sessions.
 If false, use the global threads created by the first session, or the
 per-session thread pools configured by session_inter_op_thread_pool.
 This option is deprecated. The same effect can be achieved by setting
 session_inter_op_thread_pool to have one element, whose num_threads equals
 inter_op_parallelism_threads.
 
bool use_per_session_threads = 9;
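
A hedged sketch of the replacement the comment recommends for this deprecated flag: a single session-local pool whose num_threads equals inter_op_parallelism_threads.

ConfigProto config = ConfigProto.newBuilder()
    .setInterOpParallelismThreads(4)
    .addSessionInterOpThreadPool(ThreadPoolOptionProto.newBuilder()
        .setNumThreads(4))
    .build();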

public abstract boolean hasClusterDef ()

 Optional list of all workers to use in this session.
 
.tensorflow.ClusterDef cluster_def = 14;

public abstract boolean hasExperimental ()

.tensorflow.ConfigProto.Experimental experimental = 16;

public abstract boolean hasGpuOptions ()

 Options that apply to all GPUs.
 
.tensorflow.GPUOptions gpu_options = 6;

public abstract boolean hasGraphOptions ()

 Options that apply to all graphs.
 
.tensorflow.GraphOptions graph_options = 10;

public abstract boolean hasRpcOptions ()

 Options that apply when this session uses the distributed runtime.
 
.tensorflow.RPCOptions rpc_options = 13;