CallableOptionsOrBuilder

public interface CallableOptionsOrBuilder
Known Indirect Subclasses

Public Methods

abstract boolean
containsFeedDevices (String key)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract boolean
containsFetchDevices (String key)
map<string, string> fetch_devices = 7;
abstract String
getFeed (int index)
 Tensors to be fed in the callable.
abstract com.google.protobuf.ByteString
getFeedBytes (int index)
 Tensors to be fed in the callable.
abstract int
getFeedCount ()
 Tensors to be fed in the callable.
abstract Map<String, String>
getFeedDevices ()
Use getFeedDevicesMap() instead.
abstract int
getFeedDevicesCount ()
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract Map<String, String>
getFeedDevicesMap ()
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract String
getFeedDevicesOrDefault (String key, String defaultValue)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract String
getFeedDevicesOrThrow (String key)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract List<String>
getFeedList ()
 Tensors to be fed in the callable.
abstract String
getFetch (int index)
 Fetches.
abstract com.google.protobuf.ByteString
getFetchBytes (int index)
 Fetches.
abstract int
getFetchCount ()
 Fetches.
abstract Map<String, String>
getFetchDevices ()
Use getFetchDevicesMap() instead.
abstract int
getFetchDevicesCount ()
map<string, string> fetch_devices = 7;
abstract Map<String, String>
getFetchDevicesMap ()
map<string, string> fetch_devices = 7;
abstract String
getFetchDevicesOrDefault (String key, String defaultValue)
map<string, string> fetch_devices = 7;
abstract String
getFetchDevicesOrThrow (String key)
map<string, string> fetch_devices = 7;
abstract List<String>
getFetchList ()
 Fetches.
abstract boolean
getFetchSkipSync ()
 By default, RunCallable() will synchronize the GPU stream before returning
 fetched tensors on a GPU device, to ensure that the values in those tensors
 have been produced.
abstract RunOptions
getRunOptions ()
 Options that will be applied to each run.
abstract RunOptionsOrBuilder
getRunOptionsOrBuilder ()
 Options that will be applied to each run.
abstract String
getTarget (int index)
 Target Nodes.
abstract com.google.protobuf.ByteString
getTargetBytes (int index)
 Target Nodes.
abstract int
getTargetCount ()
 Target Nodes.
abstract List<String>
getTargetList ()
 Target Nodes.
abstract TensorConnection
getTensorConnection (int index)
 Tensors to be connected in the callable.
abstract int
getTensorConnectionCount ()
 Tensors to be connected in the callable.
abstract List<TensorConnection>
getTensorConnectionList ()
 Tensors to be connected in the callable.
abstract TensorConnectionOrBuilder
getTensorConnectionOrBuilder (int index)
 Tensors to be connected in the callable.
abstract List<? extends TensorConnectionOrBuilder>
getTensorConnectionOrBuilderList ()
 Tensors to be connected in the callable.
abstract boolean
hasRunOptions ()
 Options that will be applied to each run.

Public Methods

public abstract boolean containsFeedDevices (String key)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;
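
As a rough sketch, the text-format example above can be built with the generated protobuf builder. The package name (org.tensorflow.framework here; newer TensorFlow Java releases use org.tensorflow.proto.framework) and the wrapper class are assumptions, not part of this page.

  import org.tensorflow.framework.CallableOptions;  // assumed package; adjust to your release

  public class CallableOptionsSketch {
    public static void main(String[] args) {
      // Mirrors the example above: "a:0" is fed from GPU memory, "b:0" from
      // host memory (the default); "x:0" is fetched into host memory, while
      // "y:0" stays in GPU memory.
      CallableOptions options = CallableOptions.newBuilder()
          .addFeed("a:0")
          .addFeed("b:0")
          .addFetch("x:0")
          .addFetch("y:0")
          .putFeedDevices("a:0", "/job:localhost/replica:0/task:0/device:GPU:0")
          .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
          .build();

      // The accessors documented on this page read those fields back.
      System.out.println(options.containsFeedDevices("a:0"));  // true
      System.out.println(options.getFeedDevicesCount());       // 1
    }
  }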

public abstract boolean containsFetchDevices (String key)

map<string, string> fetch_devices = 7;

public abstract String getFeed (int index)

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract com.google.protobuf.ByteString getFeedBytes (int index)

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract int getFeedCount ()

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract Map<String, String> getFeedDevices ()

Use getFeedDevicesMap() instead.

public abstract int getFeedDevicesCount ()

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract Map<String, String> getFeedDevicesMap ()

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract String getFeedDevicesOrDefault (String key, String defaultValue)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract String getFeedDevicesOrThrow (String key)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;
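
A short sketch of the feed_devices map accessors; "opts" is a hypothetical CallableOptionsOrBuilder instance (for example, the options object built in the earlier sketch), and the device strings are placeholders.

  // Assumes a CallableOptionsOrBuilder instance named "opts" (hypothetical).
  // Safe lookup: falls back to the given default when the key is absent.
  String deviceForA = opts.getFeedDevicesOrDefault("a:0", "/device:CPU:0");

  // Strict lookup: getFeedDevicesOrThrow fails on a missing key, so guard it.
  String deviceForB = opts.containsFeedDevices("b:0")
      ? opts.getFeedDevicesOrThrow("b:0")
      : "/device:CPU:0";

  // Whole map view; preferred over the deprecated getFeedDevices().
  java.util.Map<String, String> feedDevices = opts.getFeedDevicesMap();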

public abstract List<String> getFeedList ()

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract String getFetch (int index)

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract com.google.protobuf.ByteString getFetchBytes (int index)

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract int getFetchCount ()

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract Map<String, String> getFetchDevices ()

Use getFetchDevicesMap() instead.

public abstract int getFetchDevicesCount ()

map<string, string> fetch_devices = 7;

public abstract Map<String, String> getFetchDevicesMap ()

map<string, string> fetch_devices = 7;

public abstract String getFetchDevicesOrDefault (String key, String defaultValue)

map<string, string> fetch_devices = 7;

public abstract String getFetchDevicesOrThrow (String key)

map<string, string> fetch_devices = 7;

public abstract List<String> getFetchList ()

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;
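
For reference, a minimal sketch of iterating the repeated fetch field through these accessors; "opts" is again a hypothetical CallableOptionsOrBuilder instance.

  // Index-based access over the repeated "fetch" field.
  for (int i = 0; i < opts.getFetchCount(); i++) {
    System.out.println(opts.getFetch(i));
  }

  // Or via the List<String> view returned by getFetchList().
  for (String fetchName : opts.getFetchList()) {
    System.out.println(fetchName);
  }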

public abstract boolean getFetchSkipSync ()

 By default, RunCallable() will synchronize the GPU stream before returning
 fetched tensors on a GPU device, to ensure that the values in those tensors
 have been produced. This simplifies interacting with the tensors, but
 potentially incurs a performance hit.
 If this option is set to true, the caller is responsible for ensuring
 that the values in the fetched tensors have been produced before they are
 used. The caller can do this by invoking `Device::Sync()` on the underlying
 device(s), or by feeding the tensors back to the same Session using
 `feed_devices` with the same corresponding device name.
 
bool fetch_skip_sync = 8;
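
A hedged sketch of opting out of the implicit synchronization described above, using the generated setter for this field; the tensor and device names are placeholders.

  // Caller takes over synchronization of the fetched GPU tensor "y:0".
  CallableOptions options = CallableOptions.newBuilder()
      .addFetch("y:0")
      .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
      .setFetchSkipSync(true)  // fetched values must be synchronized by the caller before use
      .build();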

public abstract RunOptions getRunOptions ()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;

public abstract RunOptionsOrBuilder getRunOptionsOrBuilder ()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;
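
As a sketch of how run_options is typically populated, assuming the RunOptions fields trace_level and timeout_in_ms from the same generated package:

  // Attach per-run options (a full trace and a 10 s timeout) to the callable.
  CallableOptions options = CallableOptions.newBuilder()
      .setRunOptions(RunOptions.newBuilder()
          .setTraceLevel(RunOptions.TraceLevel.FULL_TRACE)
          .setTimeoutInMs(10_000))
      .build();

  if (options.hasRunOptions()) {
    RunOptionsOrBuilder runOpts = options.getRunOptionsOrBuilder();
    System.out.println(runOpts.getTraceLevel());
  }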

public abstract String getTarget (int index)

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract com.google.protobuf.ByteString getTargetBytes (int index)

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract int getTargetCount ()

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract List<String> getTargetList ()

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract TensorConnection getTensorConnection (int index)

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;
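
A small sketch of adding such an edge; the tensor names are hypothetical, and TensorConnection / TensorConnectionOrBuilder come from the same generated package as CallableOptions.

  // Create an edge in the callable between the pair of tensors "in:0" and "placeholder:0".
  CallableOptions options = CallableOptions.newBuilder()
      .addTensorConnection(TensorConnection.newBuilder()
          .setFromTensor("in:0")
          .setToTensor("placeholder:0"))
      .build();

  TensorConnectionOrBuilder conn = options.getTensorConnectionOrBuilder(0);
  System.out.println(conn.getFromTensor() + " -> " + conn.getToTensor());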

public abstract int getTensorConnectionCount ()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract List<TensorConnection> getTensorConnectionList ()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract TensorConnectionOrBuilder getTensorConnectionOrBuilder (int index)

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract List<? extends TensorConnectionOrBuilder> getTensorConnectionOrBuilderList ()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract boolean hasRunOptions ()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;