public final class CallableOptions
Defines a subgraph in another `GraphDef` as a set of feed points and nodes to be fetched or executed. Compare with the arguments to `Session::Run()`.

Protobuf type `tensorflow.CallableOptions`
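The sketch below shows one way this message is typically built with the generated Java builder. It is a hedged illustration only: the package name `org.tensorflow.framework`, the tensor names, and the node name `init_op` are assumptions, not taken from this page.

```java
// Hypothetical example: feed "a:0" and "b:0", fetch "x:0", and run "init_op"
// without returning its output. Adjust the import to match your build.
import org.tensorflow.framework.CallableOptions;
import org.tensorflow.framework.RunOptions;

public class CallableOptionsSketch {
  public static void main(String[] args) {
    CallableOptions opts = CallableOptions.newBuilder()
        .addFeed("a:0")                        // repeated string feed = 1
        .addFeed("b:0")
        .addFetch("x:0")                       // repeated string fetch = 2
        .addTarget("init_op")                  // repeated string target = 3
        .setRunOptions(RunOptions.newBuilder() // .tensorflow.RunOptions run_options = 4
            .setTraceLevel(RunOptions.TraceLevel.FULL_TRACE))
        .build();

    System.out.println(opts);
  }
}
```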
Nested Classes
class | CallableOptions.Builder | Defines a subgraph in another `GraphDef` as a set of feed points and nodes to be fetched or executed. |
Constants
Public Methods
boolean | containsFeedDevices (String key) The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
boolean | containsFetchDevices (String key) map<string, string> fetch_devices = 7; |
boolean | equals (Object obj) |
static CallableOptions | getDefaultInstance () |
CallableOptions | getDefaultInstanceForType () |
final static com.google.protobuf.Descriptors.Descriptor | getDescriptor () |
String | getFeed (int index) Tensors to be fed in the callable. |
com.google.protobuf.ByteString | getFeedBytes (int index) Tensors to be fed in the callable. |
int | getFeedCount () Tensors to be fed in the callable. |
Map<String, String> | getFeedDevices () Use getFeedDevicesMap() instead. |
int | getFeedDevicesCount () The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
Map<String, String> | getFeedDevicesMap () The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
String | getFeedDevicesOrDefault (String key, String defaultValue) The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
String | getFeedDevicesOrThrow (String key) The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
com.google.protobuf.ProtocolStringList | getFeedList () Tensors to be fed in the callable. |
String | getFetch (int index) Fetches. |
com.google.protobuf.ByteString | getFetchBytes (int index) Fetches. |
int | getFetchCount () Fetches. |
Map<String, String> | getFetchDevices () Use getFetchDevicesMap() instead. |
int | getFetchDevicesCount () map<string, string> fetch_devices = 7; |
Map<String, String> | getFetchDevicesMap () map<string, string> fetch_devices = 7; |
String | getFetchDevicesOrDefault (String key, String defaultValue) map<string, string> fetch_devices = 7; |
String | getFetchDevicesOrThrow (String key) map<string, string> fetch_devices = 7; |
com.google.protobuf.ProtocolStringList | getFetchList () Fetches. |
boolean | getFetchSkipSync () By default, RunCallable() will synchronize the GPU stream before returning fetched tensors on a GPU device, to ensure that the values in those tensors have been produced. |
RunOptions | getRunOptions () Options that will be applied to each run. |
RunOptionsOrBuilder | getRunOptionsOrBuilder () Options that will be applied to each run. |
int | getSerializedSize () |
String | getTarget (int index) Target Nodes. |
com.google.protobuf.ByteString | getTargetBytes (int index) Target Nodes. |
int | getTargetCount () Target Nodes. |
com.google.protobuf.ProtocolStringList | getTargetList () Target Nodes. |
TensorConnection | getTensorConnection (int index) Tensors to be connected in the callable. |
int | getTensorConnectionCount () Tensors to be connected in the callable. |
List< TensorConnection > | getTensorConnectionList () Tensors to be connected in the callable. |
TensorConnectionOrBuilder | getTensorConnectionOrBuilder (int index) Tensors to be connected in the callable. |
List<? extends TensorConnectionOrBuilder > | getTensorConnectionOrBuilderList () Tensors to be connected in the callable. |
final com.google.protobuf.UnknownFieldSet | getUnknownFields () |
boolean | hasRunOptions () Options that will be applied to each run. |
int | hashCode () |
final boolean | isInitialized () |
static CallableOptions.Builder | newBuilder () |
static CallableOptions.Builder | newBuilder (CallableOptions prototype) |
CallableOptions.Builder | newBuilderForType () |
static CallableOptions | parseDelimitedFrom (InputStream input) |
static CallableOptions | parseDelimitedFrom (InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static CallableOptions | parseFrom (ByteBuffer data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static CallableOptions | parseFrom (com.google.protobuf.CodedInputStream input) |
static CallableOptions | parseFrom (byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static CallableOptions | parseFrom (ByteBuffer data) |
static CallableOptions | parseFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static CallableOptions | parseFrom (com.google.protobuf.ByteString data) |
static CallableOptions | parseFrom (InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static CallableOptions | parseFrom (com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static com.google.protobuf.Parser<CallableOptions> | parser () |
CallableOptions.Builder | toBuilder () |
void | writeTo (com.google.protobuf.CodedOutputStream output) |
Inherited Methods
Constants
public static final int FEED_DEVICES_FIELD_NUMBER
Constant Value: 6
public static final int FEED_FIELD_NUMBER
Constant Value: 1
public static final int FETCH_DEVICES_FIELD_NUMBER
Constant Value: 7
public static final int FETCH_FIELD_NUMBER
Constant Value: 2
public static final int FETCH_SKIP_SYNC_FIELD_NUMBER
Constant Value: 8
public static final int RUN_OPTIONS_FIELD_NUMBER
Constant Value: 4
public static final int TARGET_FIELD_NUMBER
Constant Value: 3
public static final int TENSOR_CONNECTION_FIELD_NUMBER
Constant Value: 5
Public Methods
public boolean containsFeedDevices (String key)
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory.

The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor. For example, creating a callable with the following options:

CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}

means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.

and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.

FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
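A hedged sketch of these maps in use, mirroring the proto-text example above: the tensor and device names are illustrative, and the package name `org.tensorflow.framework` is an assumption that may differ in your build.

```java
import org.tensorflow.framework.CallableOptions;

public class FeedDevicesSketch {
  public static void main(String[] args) {
    // Feed "a:0" from GPU memory and return "y:0" backed by GPU memory;
    // "b:0" and "x:0" keep the host-memory default.
    CallableOptions opts = CallableOptions.newBuilder()
        .addFeed("a:0").addFeed("b:0")
        .addFetch("x:0").addFetch("y:0")
        .putFeedDevices("a:0", "/job:localhost/replica:0/task:0/device:GPU:0")
        .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
        .build();

    // Reading the maps back through the accessors documented on this page.
    System.out.println(opts.containsFeedDevices("a:0"));                      // true
    System.out.println(opts.getFeedDevicesOrDefault("b:0", "<host default>"));
    System.out.println(opts.getFeedDevicesMap());
  }
}
```

`putFeedDevices` and `putFetchDevices` are the writer methods protobuf generates on the builder for `map<string, string>` fields; the built message itself only exposes the read-only accessors listed on this page.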
public boolean containsFetchDevices (String key)
map<string, string> fetch_devices = 7;
public boolean equals (Object obj)
public static final com.google.protobuf.Descriptors.Descriptor getDescriptor ()
public String getFeed (int index)
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
public com.google.protobuf.ByteString getFeedBytes (int index)
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
public int getFeedCount ()
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
public int getFeedDevicesCount ()
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory.

The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor. For example, creating a callable with the following options:

CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}

means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.

and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.

FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public Map<String, String> getFeedDevicesMap ()
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory.

The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor. For example, creating a callable with the following options:

CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}

means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.

and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.

FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public String getFeedDevicesOrDefault (String key, String defaultValue)
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory.

The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor. For example, creating a callable with the following options:

CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}

means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.

and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.

FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public String getFeedDevicesOrThrow (String key)
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that - feeding tensors backed by device memory, or returning tensors that are backed by device memory.

The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above), to the fully qualified name of the device owning the memory backing the contents of the tensor. For example, creating a callable with the following options:

CallableOptions {
  feed: "a:0"
  feed: "b:0"
  fetch: "x:0"
  fetch: "y:0"
  feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
  fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
}

means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.

and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.

FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public com.google.protobuf.ProtocolStringList getFeedList ()
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
public String getFetch (int index)
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
public com.google.protobuf.ByteString getFetchBytes (int index)
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
public int getFetchCount ()
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
public int getFetchDevicesCount ()
map<string, string> fetch_devices = 7;
public Map<String, String> getFetchDevicesMap ()
map<string, string> fetch_devices = 7;
public String getFetchDevicesOrDefault (String key, String defaultValue)
map<string, string> fetch_devices = 7;
public String getFetchDevicesOrThrow (String key)
map<string, string> fetch_devices = 7;
public com.google.protobuf.ProtocolStringList getFetchList ()
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
public boolean getFetchSkipSync ()
By default, RunCallable() will synchronize the GPU stream before returning fetched tensors on a GPU device, to ensure that the values in those tensors have been produced. This simplifies interacting with the tensors, but potentially incurs a performance hit. If this options is set to true, the caller is responsible for ensuring that the values in the fetched tensors have been produced before they are used. The caller can do this by invoking `Device::Sync()` on the underlying device(s), or by feeding the tensors back to the same Session using `feed_devices` with the same corresponding device name.
bool fetch_skip_sync = 8;
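A hedged sketch of opting out of the implicit synchronization (imports and tensor/device names as in the earlier sketches are assumptions):

```java
import org.tensorflow.framework.CallableOptions;

public class FetchSkipSyncSketch {
  public static void main(String[] args) {
    // Skip the GPU stream sync before returning fetched tensors. The caller then
    // owns the synchronization duty described above (e.g. Device::Sync() on the
    // C++ side, or feeding the tensors back via feed_devices on the same device).
    CallableOptions opts = CallableOptions.newBuilder()
        .addFetch("y:0")
        .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
        .setFetchSkipSync(true)
        .build();

    System.out.println(opts.getFetchSkipSync());  // true
  }
}
```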
public com.google.protobuf.Parser<CallableOptions> getParserForType ()
public RunOptions getRunOptions ()
Options that will be applied to each run.
.tensorflow.RunOptions run_options = 4;
public RunOptionsOrBuilder getRunOptionsOrBuilder ()
Options that will be applied to each run.
.tensorflow.RunOptions run_options = 4;
public int getSerializedSize ()
public String getTarget (int index)
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
public com.google.protobuf.ByteString getTargetBytes (int index)
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
public int getTargetCount ()
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
public com.google.protobuf.ProtocolStringList getTargetList ()
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
public TensorConnection getTensorConnection (int index)
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
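A hedged sketch of adding such a connection with the generated `TensorConnection` builder (its `from_tensor`/`to_tensor` fields come from the protobuf definition; the tensor names here are assumptions):

```java
import org.tensorflow.framework.CallableOptions;
import org.tensorflow.framework.TensorConnection;

public class TensorConnectionSketch {
  public static void main(String[] args) {
    // Create an edge in the callable that forwards the value of "y:0" to "x:0".
    TensorConnection conn = TensorConnection.newBuilder()
        .setFromTensor("y:0")   // tensor whose value is forwarded
        .setToTensor("x:0")     // tensor that receives that value
        .build();

    CallableOptions opts = CallableOptions.newBuilder()
        .addTensorConnection(conn)
        .build();

    System.out.println(opts.getTensorConnectionCount());          // 1
    System.out.println(opts.getTensorConnection(0).getFromTensor());
  }
}
```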
public int getTensorConnectionCount ()
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
public List< TensorConnection > getTensorConnectionList ()
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
public TensorConnectionOrBuilder getTensorConnectionOrBuilder (int index)
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
public List<? extends TensorConnectionOrBuilder > getTensorConnectionOrBuilderList ()
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
public final com.google.protobuf.UnknownFieldSet getUnknownFields ()
public boolean hasRunOptions ()
Options that will be applied to each run.
.tensorflow.RunOptions run_options = 4;
public int hashCode ()
public final boolean isInitialized ()
public static CallableOptions parseDelimitedFrom (InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
IOException
public static CallableOptions parseFrom (ByteBuffer data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
InvalidProtocolBufferException
public static CallableOptions parseFrom (byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
InvalidProtocolBufferException
public static CallableOptions parseFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
IOException
public static CallableOptions parseFrom (InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
IOException
public static CallableOptions parseFrom (com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
InvalidProtocolBufferException
public static com.google.protobuf.Parser<CallableOptions> parser ()
public void writeTo (com.google.protobuf.CodedOutputStream output)
Throws
IOException
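For completeness, a hedged round-trip sketch pairing `writeTo(CodedOutputStream)` with `parseFrom(CodedInputStream)`; the in-memory streams and the `org.tensorflow.framework` import are assumptions chosen for brevity.

```java
import com.google.protobuf.CodedInputStream;
import com.google.protobuf.CodedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.tensorflow.framework.CallableOptions;

public class RoundTripSketch {
  public static void main(String[] args) throws IOException {
    CallableOptions original = CallableOptions.newBuilder()
        .addFeed("a:0")
        .addFetch("x:0")
        .build();

    // Serialize with writeTo(CodedOutputStream), as documented above.
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    CodedOutputStream out = CodedOutputStream.newInstance(bytes);
    original.writeTo(out);
    out.flush();  // CodedOutputStream buffers internally; flush before reading.

    // Parse back with parseFrom(CodedInputStream), as documented above.
    CallableOptions parsed = CallableOptions.parseFrom(
        CodedInputStream.newInstance(bytes.toByteArray()));

    System.out.println(original.equals(parsed));  // true
  }
}
```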