public final class CallableOptions
Defines a subgraph in another `GraphDef` as a set of feed points and nodes to be fetched or executed. Compare with the arguments to `Session::Run()`.
tensorflow.CallableOptions
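For orientation, here is a minimal sketch of building a `CallableOptions` message that mirrors a typical `Session::Run()` call. It assumes the generated class lives in the `org.tensorflow.framework` package and uses the standard protobuf-generated builder methods `addFeed`, `addFetch` and `addTarget` (documented on `CallableOptions.Builder`, not on this page); the tensor and node names are hypothetical.

```java
import org.tensorflow.framework.CallableOptions;  // assumed package for the generated class

public final class CallableOptionsExample {
  public static void main(String[] args) {
    // Roughly equivalent to Session::Run(feeds={"input:0"}, fetches={"logits:0"},
    // targets={"init_op"}), expressed as a reusable CallableOptions message.
    CallableOptions opts = CallableOptions.newBuilder()
        .addFeed("input:0")    // tensor fed each time the callable is invoked
        .addFetch("logits:0")  // tensor returned by the callable
        .addTarget("init_op")  // node executed for its side effects only
        .build();

    System.out.println(opts);  // prints the message in proto text format
  }
}
```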
Nested Classes
class | CallableOptions.Builder | Defines a subgraph in another `GraphDef` as a set of feed points and nodes to be fetched or executed. |
Constants
Public Methods
boolean | containsFeedDevices (String key) The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
boolean | containsFetchDevices (String key) map<string, string> fetch_devices = 7; |
boolean | equals (Object obj) |
static CallableOptions | getDefaultInstance () |
CallableOptions | getDefaultInstanceForType () |
final static com.google.protobuf.Descriptors.Descriptor | getDescriptor () |
String | getFeed (int index) Tensors to be fed in the callable. |
com.google.protobuf.ByteString | getFeedBytes (int index) Tensors to be fed in the callable. |
int | getFeedCount () Tensors to be fed in the callable. |
Map<String, String> | getFeedDevices () Use getFeedDevicesMap() instead. |
int | getFeedDevicesCount () The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
Map<String, String> | getFeedDevicesMap () The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
String | getFeedDevicesOrDefault (String key, String defaultValue) The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
String | getFeedDevicesOrThrow (String key) The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. |
com.google.protobuf.ProtocolStringList | getFeedList () Tensors to be fed in the callable. |
String | getFetch (int index) Fetches. |
com.google.protobuf.ByteString | getFetchBytes (int index) Fetches. |
int | getFetchCount () Fetches. |
Map<String, String> | getFetchDevices () Use getFetchDevicesMap() instead. |
int | getFetchDevicesCount () map<string, string> fetch_devices = 7; |
Map<String, String> | getFetchDevicesMap () map<string, string> fetch_devices = 7; |
String | getFetchDevicesOrDefault (String key, String defaultValue) map<string, string> fetch_devices = 7; |
String | getFetchDevicesOrThrow (String key) map<string, string> fetch_devices = 7; |
com.google.protobuf.ProtocolStringList | getFetchList () Fetches. |
boolean | getFetchSkipSync () By default, RunCallable() will synchronize the GPU stream before returning fetched tensors on a GPU device, to ensure that the values in those tensors have been produced. |
RunOptions | getRunOptions () Options that will be applied to each run. |
RunOptionsOrBuilder | getRunOptionsOrBuilder () Options that will be applied to each run. |
int | getSerializedSize () |
String | getTarget (int index) Target Nodes. |
com.google.protobuf.ByteString | getTargetBytes (int index) Target Nodes. |
int | getTargetCount () Target Nodes. |
com.google.protobuf.ProtocolStringList | getTargetList () Target Nodes. |
TensorConnection | getTensorConnection (int index) Tensors to be connected in the callable. |
int | getTensorConnectionCount () Tensors to be connected in the callable. |
List<TensorConnection> | getTensorConnectionList () Tensors to be connected in the callable. |
TensorConnectionOrBuilder | getTensorConnectionOrBuilder (int index) Tensors to be connected in the callable. |
List<? extends TensorConnectionOrBuilder> | getTensorConnectionOrBuilderList () Tensors to be connected in the callable. |
final com.google.protobuf.UnknownFieldSet | getUnknownFields () |
boolean | hasRunOptions () Options that will be applied to each run. |
int | hashCode () |
final boolean | isInitialized () |
static CallableOptions.Builder | newBuilder () |
static CallableOptions.Builder | newBuilder (CallableOptions prototype) |
CallableOptions.Builder | newBuilderForType () |
static CallableOptions | parseDelimitedFrom (InputStream input) |
static CallableOptions | parseDelimitedFrom (InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static CallableOptions | parseFrom (ByteBuffer data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static CallableOptions | parseFrom (com.google.protobuf.CodedInputStream input) |
static CallableOptions | parseFrom (byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static CallableOptions | parseFrom (ByteBuffer data) |
static CallableOptions | parseFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static CallableOptions | parseFrom (com.google.protobuf.ByteString data) |
static CallableOptions | parseFrom (InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static CallableOptions | parseFrom (com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) |
static com.google.protobuf.Parser<CallableOptions> | parser () |
CallableOptions.Builder | toBuilder () |
void | writeTo (com.google.protobuf.CodedOutputStream output) |
Inherited Methods
Constants
public static final int FEED_DEVICES_FIELD_NUMBER
Constant Value: 6
public static final int FEED_FIELD_NUMBER
Constant Value: 1
public static final int FETCH_DEVICES_FIELD_NUMBER
Constant Value: 7
public static final int FETCH_FIELD_NUMBER
Constant Value: 2
public static final int FETCH_SKIP_SYNC_FIELD_NUMBER
Constant Value: 8
public static final int RUN_OPTIONS_FIELD_NUMBER
Constant Value: 4
public static final int TARGET_FIELD_NUMBER
Constant Value: 3
public static final int TENSOR_CONNECTION_FIELD_NUMBER
Constant Value: 5
Public Methods
public boolean containsFeedDevices (String key)
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that: feeding tensors backed by device memory, or returning tensors that are backed by device memory.

The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above) to the fully qualified name of the device owning the memory backing the contents of the tensor. For example, creating a callable with the following options:

    CallableOptions {
      feed: "a:0"
      feed: "b:0"
      fetch: "x:0"
      fetch: "y:0"
      feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
      fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
    }

means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.

and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.

FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
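As a sketch of how these maps might be populated from Java, mirroring the proto-text example above (this assumes the standard protobuf-generated `putFeedDevices`/`putFetchDevices` methods on `CallableOptions.Builder`; tensor and device names are illustrative):

```java
// Mirror of the CallableOptions example above, built through the Java builder.
CallableOptions opts = CallableOptions.newBuilder()
    .addFeed("a:0")
    .addFeed("b:0")
    .addFetch("x:0")
    .addFetch("y:0")
    // "a:0" will be fed from GPU memory; "b:0" keeps the host-memory default.
    .putFeedDevices("a:0", "/job:localhost/replica:0/task:0/device:GPU:0")
    // "y:0" will be returned backed by GPU memory; "x:0" stays on the host.
    .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
    .build();

boolean onGpu = opts.containsFeedDevices("a:0");  // true
```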
public boolean containsFetchDevices (String key)
map<string, string> fetch_devices = 7;
public boolean equals (Object obj)
public static final com.google.protobuf.Descriptors.Descriptor getDescriptor ()
public String getFeed (int index)
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
public com.google.protobuf.ByteString getFeedBytes (int index)
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
public int getFeedCount ()
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
public int getFeedDevicesCount ()
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that: feeding tensors backed by device memory, or returning tensors that are backed by device memory.

The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above) to the fully qualified name of the device owning the memory backing the contents of the tensor. For example, creating a callable with the following options:

    CallableOptions {
      feed: "a:0"
      feed: "b:0"
      fetch: "x:0"
      fetch: "y:0"
      feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
      fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
    }

means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.

and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.

FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public Map<String, String> getFeedDevicesMap ()
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that: feeding tensors backed by device memory, or returning tensors that are backed by device memory.

The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above) to the fully qualified name of the device owning the memory backing the contents of the tensor. For example, creating a callable with the following options:

    CallableOptions {
      feed: "a:0"
      feed: "b:0"
      fetch: "x:0"
      fetch: "y:0"
      feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
      fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
    }

means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.

and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.

FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public String getFeedDevicesOrDefault (String key, String defaultValue)
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that: feeding tensors backed by device memory, or returning tensors that are backed by device memory.

The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above) to the fully qualified name of the device owning the memory backing the contents of the tensor. For example, creating a callable with the following options:

    CallableOptions {
      feed: "a:0"
      feed: "b:0"
      fetch: "x:0"
      fetch: "y:0"
      feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
      fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
    }

means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.

and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.

FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public String getFeedDevicesOrThrow (String key)
The Tensor objects fed in the callable and fetched from the callable are expected to be backed by host (CPU) memory by default. The options below allow changing that: feeding tensors backed by device memory, or returning tensors that are backed by device memory.

The maps below map the name of a feed/fetch tensor (which appears in 'feed' or 'fetch' fields above) to the fully qualified name of the device owning the memory backing the contents of the tensor. For example, creating a callable with the following options:

    CallableOptions {
      feed: "a:0"
      feed: "b:0"
      fetch: "x:0"
      fetch: "y:0"
      feed_devices: { "a:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
      fetch_devices: { "y:0": "/job:localhost/replica:0/task:0/device:GPU:0" }
    }

means that the Callable expects:
- The first argument ("a:0") is a Tensor backed by GPU memory.
- The second argument ("b:0") is a Tensor backed by host memory.

and of its return values:
- The first output ("x:0") will be backed by host memory.
- The second output ("y:0") will be backed by GPU memory.

FEEDS: It is the responsibility of the caller to ensure that the memory of the fed tensors will be correctly initialized and synchronized before it is accessed by operations executed during the call to Session::RunCallable(). This is typically ensured by using the TensorFlow memory allocators (Device::GetAllocator()) to create the Tensor to be fed. Alternatively, for CUDA-enabled GPU devices, this typically means that the operation that produced the contents of the tensor has completed, i.e., the CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or cuStreamSynchronize()).
map<string, string> feed_devices = 6;
public com.google.protobuf.ProtocolStringList getFeedList ()
Tensors to be fed in the callable. Each feed is the name of a tensor.
repeated string feed = 1;
public String getFetch (int index)
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
public com.google.protobuf.ByteString getFetchBytes (int index)
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
public int getFetchCount ()
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
public int getFetchDevicesCount ()
map<string, string> fetch_devices = 7;
public Map<String, String> getFetchDevicesMap ()
map<string, string> fetch_devices = 7;
public String getFetchDevicesOrDefault (String key, String defaultValue)
map<string, string> fetch_devices = 7;
public String getFetchDevicesOrThrow (String key)
map<string, string> fetch_devices = 7;
public com.google.protobuf.ProtocolStringList getFetchList ()
Fetches. A list of tensor names. The caller of the callable expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
repeated string fetch = 2;
public boolean getFetchSkipSync ()
By default, RunCallable() will synchronize the GPU stream before returning fetched tensors on a GPU device, to ensure that the values in those tensors have been produced. This simplifies interacting with the tensors, but potentially incurs a performance hit.

If this option is set to true, the caller is responsible for ensuring that the values in the fetched tensors have been produced before they are used. The caller can do this by invoking `Device::Sync()` on the underlying device(s), or by feeding the tensors back to the same Session using `feed_devices` with the same corresponding device name.
bool fetch_skip_sync = 8;
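A minimal sketch of opting out of the synchronization, assuming the protobuf-generated `setFetchSkipSync` and `putFetchDevices` builder methods (the tensor and device names are hypothetical):

```java
// Skip the GPU stream sync in RunCallable(); the caller then has to make sure the
// fetched values have been produced (e.g. via Device::Sync()) before reading them.
CallableOptions opts = CallableOptions.newBuilder()
    .addFetch("y:0")
    .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
    .setFetchSkipSync(true)
    .build();

boolean skip = opts.getFetchSkipSync();  // true
```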
public com.google.protobuf.Parser<CallableOptions> getParserForType ()
public RunOptions getRunOptions ()
Options that will be applied to each run.
.tensorflow.RunOptions run_options = 4;
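For instance, per-run options such as tracing can be attached through the embedded `RunOptions` message. The sketch below assumes the generated `setRunOptions` builder method and the `RunOptions.TraceLevel` enum from the same generated package:

```java
// Request full tracing for every run of the callable.
CallableOptions opts = CallableOptions.newBuilder()
    .setRunOptions(RunOptions.newBuilder()
        .setTraceLevel(RunOptions.TraceLevel.FULL_TRACE)
        .build())
    .build();

boolean hasOptions = opts.hasRunOptions();   // true
RunOptions applied = opts.getRunOptions();   // the message set above
```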
public RunOptionsOrBuilder getRunOptionsOrBuilder ()
Options that will be applied to each run.
.tensorflow.RunOptions run_options = 4;
public int getSerializedSize ()
public String getTarget (int index)
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
public com.google.protobuf.ByteString getTargetBytes (int index)
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
public int getTargetCount ()
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
public com.google.protobuf.ProtocolStringList getTargetList ()
Target Nodes. A list of node names. The named nodes will be run by the callable but their outputs will not be returned.
repeated string target = 3;
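As a sketch, target nodes are added with the generated `addTarget` builder method and read back through the accessors listed here (the node names are hypothetical):

```java
// Nodes run for their side effects only; no outputs are returned for targets.
CallableOptions opts = CallableOptions.newBuilder()
    .addTarget("init_all_tables")
    .addTarget("assign_global_step")
    .build();

int count = opts.getTargetCount();   // 2
String first = opts.getTarget(0);    // "init_all_tables"
```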
public TensorConnection getTensorConnection (int index)
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
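A sketch of adding one connection, assuming `TensorConnection` exposes `from_tensor`/`to_tensor` fields (so its generated builder has `setFromTensor`/`setToTensor`) and that `CallableOptions.Builder` has the standard `addTensorConnection` method; the tensor names are hypothetical:

```java
// Rewire the graph inside the callable: the value produced at "encoder/output:0"
// is used wherever "decoder/input:0" would otherwise have been fed.
CallableOptions opts = CallableOptions.newBuilder()
    .addTensorConnection(TensorConnection.newBuilder()
        .setFromTensor("encoder/output:0")
        .setToTensor("decoder/input:0")
        .build())
    .build();

TensorConnection conn = opts.getTensorConnection(0);
```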
public int getTensorConnectionCount ()
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
public List<TensorConnection> getTensorConnectionList ()
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
public TensorConnectionOrBuilder getTensorConnectionOrBuilder (int index)
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
public List<? extends TensorConnectionOrBuilder> getTensorConnectionOrBuilderList ()
Tensors to be connected in the callable. Each TensorConnection denotes a pair of tensors in the graph, between which an edge will be created in the callable.
repeated .tensorflow.TensorConnection tensor_connection = 5;
public final com.google.protobuf.UnknownFieldSet getUnknownFields ()
public boolean hasRunOptions ()
Options that will be applied to each run.
.tensorflow.RunOptions run_options = 4;
public int hashCode ()
public final boolean isInitialized ()
public static CallableOptions parseDelimitedFrom (InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
IOException
public static CallableOptions parseFrom (ByteBuffer data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
InvalidProtocolBufferException
public static CallableOptions parseFrom (byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
InvalidProtocolBufferException
public static CallableOptions parseFrom (com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
IOException
public static CallableOptions parseFrom (InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
IOException
public static CallableOptions parseFrom (com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry)
Throws
InvalidProtocolBufferException
public static com.google.protobuf.Parser<CallableOptions> parser ()
public void writeTo (com.google.protobuf.CodedOutputStream output)
Throws
IOException
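Finally, a small round-trip sketch tying the parse methods above together. It uses `parseFrom(ByteBuffer)` from this page plus `toByteArray()`, which is inherited from the protobuf message base class; the package name and tensor names are assumptions.

```java
import java.nio.ByteBuffer;
import com.google.protobuf.InvalidProtocolBufferException;
import org.tensorflow.framework.CallableOptions;  // assumed package for the generated class

final class CallableOptionsRoundTrip {
  public static void main(String[] args) throws InvalidProtocolBufferException {
    CallableOptions original = CallableOptions.newBuilder()
        .addFeed("input:0")    // hypothetical tensor names
        .addFetch("output:0")
        .build();

    // Serialize with the inherited toByteArray() and parse back with parseFrom(ByteBuffer).
    byte[] bytes = original.toByteArray();
    CallableOptions reparsed = CallableOptions.parseFrom(ByteBuffer.wrap(bytes));

    System.out.println(original.equals(reparsed));  // true
  }
}
```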