CallableOptionsOrBuilder

public interface CallableOptionsOrBuilder
Known Indirect Subclasses

Public Methods

abstract boolean
containsFeedDevices (String key)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract boolean
containsFetchDevices (String key)
map<string, string> fetch_devices = 7;
abstract String
getFeed (int index)
 Tensors to be fed in the callable.
abstract com.google.protobuf.ByteString
getFeedBytes (int index)
 Tensors to be fed in the callable.
abstract int
getFeedCount ()
 Tensors to be fed in the callable.
abstract Map<String, String>
getFeedDevices ()
Use getFeedDevicesMap() instead.
abstract int
getFeedDevicesCount ()
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract Map<String, String>
getFeedDevicesMap ()
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract String
getFeedDevicesOrDefault (String key, String defaultValue)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract String
getFeedDevicesOrThrow (String key)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract List<String>
getFeedList ()
 Tensors to be fed in the callable.
abstract String
getFetch (int index)
 Fetches.
abstract com.google.protobuf.ByteString
getFetchBytes (int index)
 Fetches.
abstract int
getFetchCount ()
 Fetches.
abstract Map<String, String>
getFetchDevices ()
Use getFetchDevicesMap() instead.
abstract int
getFetchDevicesCount ()
map<string, string> fetch_devices = 7;
abstract Map<String, String>
getFetchDevicesMap ()
map<string, string> fetch_devices = 7;
abstract String
getFetchDevicesOrDefault (String key, String defaultValue)
map<string, string> fetch_devices = 7;
abstract String
getFetchDevicesOrThrow (String key)
map<string, string> fetch_devices = 7;
abstract List<String>
getFetchList ()
 Fetches.
abstract boolean
getFetchSkipSync ()
 By default, RunCallable() will synchronize the GPU stream before returning
 fetched tensors on a GPU device, to ensure that the values in those tensors
 have been produced.
abstract RunOptions
getRunOptions ()
 Options that will be applied to each run.
abstract RunOptionsOrBuilder
getRunOptionsOrBuilder ()
 Options that will be applied to each run.
abstract String
getTarget (int index)
 Target Nodes.
abstract com.google.protobuf.ByteString
getTargetBytes (int index)
 Target Nodes.
abstract int
getTargetCount ()
 Target Nodes.
abstract List<String>
getTargetList ()
 Target Nodes.
abstract TensorConnection
getTensorConnection (int index)
 Tensors to be connected in the callable.
abstract int
getTensorConnectionCount ()
 Tensors to be connected in the callable.
abstract List<TensorConnection>
getTensorConnectionList ()
 Tensors to be connected in the callable.
abstract TensorConnectionOrBuilder
getTensorConnectionOrBuilder (int index)
 Tensors to be connected in the callable.
abstract List<? extends TensorConnectionOrBuilder>
getTensorConnectionOrBuilderList ()
 Tensors to be connected in the callable.
abstract boolean
hasRunOptions ()
 Options that will be applied to each run.

Public Methods

public abstract boolean containsFeedDevices (String key)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;
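
For reference, the proto-text example above can be assembled through the generated builder. The following is a minimal sketch; it assumes the generated classes live in the org.tensorflow.framework package (newer artifacts may use org.tensorflow.proto.framework), and the tensor and device names are purely illustrative.

  import org.tensorflow.framework.CallableOptions;  // package name is an assumption; adjust to your artifact

  public final class FeedDeviceExample {
    public static void main(String[] args) {
      // Mirrors the proto-text example above: "a:0" is fed from GPU memory,
      // "y:0" is fetched into GPU memory, everything else stays on the host.
      CallableOptions options = CallableOptions.newBuilder()
          .addFeed("a:0")
          .addFeed("b:0")
          .addFetch("x:0")
          .addFetch("y:0")
          .putFeedDevices("a:0", "/job:localhost/replica:0/task:0/device:GPU:0")
          .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
          .build();

      // Both the builder and the built message implement CallableOptionsOrBuilder,
      // so the read-side accessors documented on this page apply to either.
      System.out.println(options.containsFeedDevices("a:0"));  // true
      System.out.println(options.containsFeedDevices("b:0"));  // false: "b:0" defaults to host memory
    }
  }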

public abstract boolean containsFetchDevices (String key)

map<string, string> fetch_devices = 7;

public abstract String getFeed (int index)

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;
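
As with any repeated protobuf string field, the feed names can be read by index or as a list. A short sketch, assuming options is any CallableOptionsOrBuilder, for example the message built in the sketch above under containsFeedDevices:

  // Index-based access and the list view return the same feed names.
  for (int i = 0; i < options.getFeedCount(); i++) {
    System.out.println(options.getFeed(i));
  }
  // Equivalent list view; getFetchList() and getTargetList() below behave the
  // same way for the fetch and target fields.
  for (String feed : options.getFeedList()) {
    System.out.println(feed);
  }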

public abstract com.google.protobuf.ByteString getFeedBytes (int index)

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract int getFeedCount ()

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract Map<String, String> getFeedDevices ()

Use getFeedDevicesMap() instead.

public abstract int getFeedDevicesCount ()

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract Map<String, String> getFeedDevicesMap ()

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract String getFeedDevicesOrDefault (String key, String defaultValue)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract String getFeedDevicesOrThrow (String key)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;
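
The map accessors differ only in how a missing key is handled. A brief sketch, again assuming options is the CallableOptions message built in the sketch under containsFeedDevices above:

  // Lookup with a fallback: returns the supplied default when the key is absent.
  String aDevice = options.getFeedDevicesOrDefault("a:0", "/device:CPU:0");  // the GPU device name
  String bDevice = options.getFeedDevicesOrDefault("b:0", "/device:CPU:0");  // falls back to the default

  // Throwing lookup: guard with containsFeedDevices() first.
  if (options.containsFeedDevices("a:0")) {
    String device = options.getFeedDevicesOrThrow("a:0");
  }

  // Whole-map view; prefer this over the deprecated getFeedDevices().
  java.util.Map<String, String> feedDevices = options.getFeedDevicesMap();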

public abstract List<String> getFeedList ()

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract String getFetch (int index)

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract com.google.protobuf.ByteString getFetchBytes (int index)

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract int getFetchCount ()

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract Map<String, String> getFetchDevices ()

Use getFetchDevicesMap() instead.

public abstract int getFetchDevicesCount ()

map<string, string> fetch_devices = 7;

public abstract Map<String, String> getFetchDevicesMap ()

map<string, string> fetch_devices = 7;

public abstract String getFetchDevicesOrDefault (String key, String defaultValue)

map<string, string> fetch_devices = 7;

public abstract String getFetchDevicesOrThrow (String key)

map<string, string> fetch_devices = 7;

public abstract List<String> getFetchList ()

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract boolean getFetchSkipSync ()

 By default, RunCallable() will synchronize the GPU stream before returning
 fetched tensors on a GPU device, to ensure that the values in those tensors
 have been produced. This simplifies interacting with the tensors, but
 potentially incurs a performance hit.
 If this option is set to true, the caller is responsible for ensuring
 that the values in the fetched tensors have been produced before they are
 used. The caller can do this by invoking `Device::Sync()` on the underlying
 device(s), or by feeding the tensors back to the same Session using
 `feed_devices` with the same corresponding device name.
 
bool fetch_skip_sync = 8;
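
A minimal sketch of opting out of the implicit synchronization (same package assumption as the earlier sketch); whether this is safe depends entirely on how the caller later consumes the fetched tensors:

  // The caller takes over responsibility for GPU-stream synchronization.
  CallableOptions asyncFetch = CallableOptions.newBuilder()
      .addFetch("y:0")
      .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
      .setFetchSkipSync(true)  // bool fetch_skip_sync = 8
      .build();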

public abstract RunOptions getRunOptions ()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;
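
Because run_options is a singular message field, hasRunOptions() distinguishes an explicitly set value from the default instance. A sketch using two commonly available RunOptions fields (trace_level and timeout_in_ms); exact field availability may vary by TensorFlow release, and the package name is the same assumption as in the earlier sketch:

  import org.tensorflow.framework.RunOptions;  // package name is an assumption

  CallableOptions withRunOptions = CallableOptions.newBuilder()
      .setRunOptions(RunOptions.newBuilder()
          .setTraceLevel(RunOptions.TraceLevel.FULL_TRACE)  // collect a full trace on each run
          .setTimeoutInMs(10_000))                          // abort runs that exceed 10 seconds
      .build();

  if (withRunOptions.hasRunOptions()) {
    // getRunOptionsOrBuilder() returns a read-only view of the nested message.
    long timeoutMs = withRunOptions.getRunOptionsOrBuilder().getTimeoutInMs();
  }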

public abstract RunOptionsOrBuilder getRunOptionsOrBuilder ()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;

public abstract String getTarget (int index)

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract com.google.protobuf.ByteString getTargetBytes (int index)

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract int getTargetCount ()

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract List<String> getTargetList ()

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract TensorConnection getTensorConnection (int index)

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;
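
A TensorConnection rewires the graph inside the callable so that consumers of one tensor read the value of another. A brief sketch, assuming TensorConnection exposes the usual from_tensor/to_tensor fields and the same package as the earlier sketch:

  import org.tensorflow.framework.TensorConnection;  // package name is an assumption

  // Inside the callable, ops that consume "b:0" will read the value of "a:0" instead.
  CallableOptions connected = CallableOptions.newBuilder()
      .addTensorConnection(TensorConnection.newBuilder()
          .setFromTensor("a:0")  // assumed field: from_tensor
          .setToTensor("b:0"))   // assumed field: to_tensor
      .build();

  // Read-side accessors documented here:
  int n = connected.getTensorConnectionCount();
  TensorConnection first = connected.getTensorConnection(0);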

public abstract int getTensorConnectionCount ()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract List<TensorConnection> getTensorConnectionList ()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract TensorConnectionOrBuilder getTensorConnectionOrBuilder (int index)

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract List<? extends TensorConnectionOrBuilder> getTensorConnectionOrBuilderList ()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract boolean hasRunOptions ()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;