CallableOptionsOrBuilder

public interface CallableOptionsOrBuilder
Known Indirect Subclasses

Public Methods

abstract boolean
containsFeedDevices(String key)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract boolean
containsFetchDevices(String key)
map<string, string> fetch_devices = 7;
abstract String
getFeed(int index)
 Tensors to be fed in the callable.
abstract com.google.protobuf.ByteString
getFeedBytes(int index)
 Tensors to be fed in the callable.
abstract int
getFeedCount()
 Tensors to be fed in the callable.
abstract Map<String, String>
getFeedDevices()
Use getFeedDevicesMap() instead.
abstract int
getFeedDevicesCount()
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract Map<String, String>
getFeedDevicesMap()
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract String
getFeedDevicesOrDefault(String key, String defaultValue)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract String
getFeedDevicesOrThrow(String key)
 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
abstract List<String>
getFeedList()
 Tensors to be fed in the callable.
abstract String
getFetch(int index)
 Fetches.
abstract com.google.protobuf.ByteString
getFetchBytes(int index)
 Fetches.
abstract int
getFetchCount()
 Fetches.
abstract Map<String, String>
getFetchDevices()
Use getFetchDevicesMap() instead.
abstract int
getFetchDevicesCount()
map<string, string> fetch_devices = 7;
abstract Map<String, String>
getFetchDevicesMap()
map<string, string> fetch_devices = 7;
abstract String
getFetchDevicesOrDefault(String key, String defaultValue)
map<string, string> fetch_devices = 7;
abstract String
getFetchDevicesOrThrow(String key)
map<string, string> fetch_devices = 7;
abstract List<String>
getFetchList()
 Fetches.
abstract boolean
getFetchSkipSync()
 By default, RunCallable() will synchronize the GPU stream before returning
 fetched tensors on a GPU device, to ensure that the values in those tensors
 have been produced.
abstract RunOptions
getRunOptions()
 Options that will be applied to each run.
abstract RunOptionsOrBuilder
getRunOptionsOrBuilder()
 Options that will be applied to each run.
abstract String
getTarget(int index)
 Target Nodes.
abstract com.google.protobuf.ByteString
getTargetBytes(int index)
 Target Nodes.
abstract int
getTargetCount()
 Target Nodes.
abstract List<String>
getTargetList()
 Target Nodes.
abstract TensorConnection
getTensorConnection(int index)
 Tensors to be connected in the callable.
abstract int
getTensorConnectionCount()
 Tensors to be connected in the callable.
abstract List<TensorConnection>
getTensorConnectionList()
 Tensors to be connected in the callable.
abstract TensorConnectionOrBuilder
getTensorConnectionOrBuilder(int index)
 Tensors to be connected in the callable.
abstract List<? extends TensorConnectionOrBuilder>
getTensorConnectionOrBuilderList()
 Tensors to be connected in the callable.
abstract boolean
hasRunOptions()
 Options that will be applied to each run.

Public Methods

public abstract boolean containsFeedDevices(String key)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;
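
For reference, a minimal sketch of building the options shown in the comment above with the
standard protobuf-generated builder methods (assuming the org.tensorflow.proto.framework
package used by recent TensorFlow Java artifacts; older releases generate the same classes
under org.tensorflow.framework):

  import org.tensorflow.proto.framework.CallableOptions;

  // Mirrors the proto-text example: feed "a:0" is backed by GPU memory and fetch "y:0"
  // is returned in GPU memory, while "b:0" and "x:0" stay in host (CPU) memory.
  CallableOptions options = CallableOptions.newBuilder()
      .addFeed("a:0")
      .addFeed("b:0")
      .addFetch("x:0")
      .addFetch("y:0")
      .putFeedDevices("a:0", "/job:localhost/replica:0/task:0/device:GPU:0")
      .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
      .build();

The built message implements CallableOptionsOrBuilder, so the accessors documented on this
page (for example getFeedDevicesMap()) can be used to inspect it.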

public abstract boolean containsFetchDevices(String key)

map<string, string> fetch_devices = 7;

public abstract String getFeed(int index)

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract com.google.protobuf.ByteString getFeedBytes(int index)

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract int getFeedCount()

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;

public abstract Map<String, String> getFeedDevices()

Use getFeedDevicesMap() instead.

public abstract int getFeedDevicesCount()

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract Map<String, String> getFeedDevicesMap()

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract String getFeedDevicesOrDefault(String key, String defaultValue)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract String getFeedDevicesOrThrow(String key)

 The Tensor objects fed in the callable and fetched from the callable
 are expected to be backed by host (CPU) memory by default.
 The options below allow changing that - feeding tensors backed by
 device memory, or returning tensors that are backed by device memory.
 The maps below map the name of a feed/fetch tensor (which appears in
 'feed' or 'fetch' fields above), to the fully qualified name of the device
 owning the memory backing the contents of the tensor.
 For example, creating a callable with the following options:
 CallableOptions {
   feed: "a:0"
   feed: "b:0"
   fetch: "x:0"
   fetch: "y:0"
   feed_devices: {
     "a:0": "/job:localhost/replica:0/task:0/device:GPU:0"
   }
   fetch_devices: {
     "y:0": "/job:localhost/replica:0/task:0/device:GPU:0"
  }
 }
 means that the Callable expects:
 - The first argument ("a:0") is a Tensor backed by GPU memory.
 - The second argument ("b:0") is a Tensor backed by host memory.
 and of its return values:
 - The first output ("x:0") will be backed by host memory.
 - The second output ("y:0") will be backed by GPU memory.
 FEEDS:
 It is the responsibility of the caller to ensure that the memory of the fed
 tensors will be correctly initialized and synchronized before it is
 accessed by operations executed during the call to Session::RunCallable().
 This is typically ensured by using the TensorFlow memory allocators
 (Device::GetAllocator()) to create the Tensor to be fed.
 Alternatively, for CUDA-enabled GPU devices, this typically means that the
 operation that produced the contents of the tensor has completed, i.e., the
 CUDA stream has been synchronized (e.g., via cuCtxSynchronize() or
 cuStreamSynchronize()).
 
map<string, string> feed_devices = 6;

public abstract List<String> getFeedList()

 Tensors to be fed in the callable. Each feed is the name of a tensor.
 
repeated string feed = 1;
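
As an illustration of the read-side accessors above, a short sketch that inspects the feed
list and the feed_devices map through the CallableOptionsOrBuilder view; the options value
is assumed to be a built CallableOptions such as the one sketched earlier:

  // Hypothetical helper: prints each feed and the device backing it, if any.
  static void describeFeeds(CallableOptionsOrBuilder options) {
    for (int i = 0; i < options.getFeedCount(); i++) {
      String feed = options.getFeed(i);
      // containsFeedDevices/getFeedDevicesOrDefault avoid the exception that
      // getFeedDevicesOrThrow raises for keys missing from the map.
      String device = options.containsFeedDevices(feed)
          ? options.getFeedDevicesOrDefault(feed, "")
          : "host (CPU) memory by default";
      System.out.println(feed + " -> " + device);
    }
  }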

public abstract String getFetch(int index)

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract com.google.protobuf.ByteString getFetchBytes(int index)

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract int getFetchCount()

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract Map<String, String> getFetchDevices()

Use getFetchDevicesMap() instead.

public abstract int getFetchDevicesCount()

map<string, string> fetch_devices = 7;

public abstract Map<String, String> getFetchDevicesMap()

map<string, string> fetch_devices = 7;

public abstract String getFetchDevicesOrDefault(String key, String defaultValue)

map<string, string> fetch_devices = 7;

public abstract String getFetchDevicesOrThrow(String key)

map<string, string> fetch_devices = 7;

public abstract List<String> getFetchList()

 Fetches. A list of tensor names. The caller of the callable expects a
 tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The
 order of specified fetches does not change the execution order.
 
repeated string fetch = 2;

public abstract boolean getFetchSkipSync()

 By default, RunCallable() will synchronize the GPU stream before returning
 fetched tensors on a GPU device, to ensure that the values in those tensors
 have been produced. This simplifies interacting with the tensors, but
 potentially incurs a performance hit.
 If this option is set to true, the caller is responsible for ensuring
 that the values in the fetched tensors have been produced before they are
 used. The caller can do this by invoking `Device::Sync()` on the underlying
 device(s), or by feeding the tensors back to the same Session using
 `feed_devices` with the same corresponding device name.
 
bool fetch_skip_sync = 8;
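
A hedged sketch of opting out of the implicit GPU-stream synchronization; once
fetch_skip_sync is set, the caller takes over the synchronization responsibilities described
above (the tensor and device names are illustrative):

  CallableOptions options = CallableOptions.newBuilder()
      .addFetch("y:0")
      .putFetchDevices("y:0", "/job:localhost/replica:0/task:0/device:GPU:0")
      // Skip the GPU stream sync in RunCallable(); the caller must synchronize the
      // device (e.g. Device::Sync() in the C++ runtime) before reading "y:0".
      .setFetchSkipSync(true)
      .build();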

public abstract RunOptions getRunOptions()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;
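
For example, per-run options can be attached when the message is built (a sketch; RunOptions
and its TraceLevel enum come from the same generated proto package):

  import org.tensorflow.proto.framework.RunOptions;

  CallableOptions options = CallableOptions.newBuilder()
      .setRunOptions(RunOptions.newBuilder()
          // Request full tracing and a 10 second timeout for every run of this callable.
          .setTraceLevel(RunOptions.TraceLevel.FULL_TRACE)
          .setTimeoutInMs(10000))
      .build();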

public abstract RunOptionsOrBuilder getRunOptionsOrBuilder()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;

public abstract String getTarget(int index)

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract com.google.protobuf.ByteString getTargetBytes(int index)

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract int getTargetCount()

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;

public abstract List<String> getTargetList()

 Target Nodes. A list of node names. The named nodes will be run by the
 callable but their outputs will not be returned.
 
repeated string target = 3;
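
A short sketch of adding target nodes, which are executed for their side effects but
contribute nothing to the returned tensors (the node and tensor names are hypothetical):

  CallableOptions options = CallableOptions.newBuilder()
      .addFetch("loss:0")
      // "train_step" runs on every call, but no output is returned for it.
      .addTarget("train_step")
      .build();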

public abstract TensorConnection getTensorConnection(int index)

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract int getTensorConnectionCount()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract List<TensorConnection> getTensorConnectionList()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract TensorConnectionOrBuilder getTensorConnectionOrBuilder(int index)

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;

public abstract List<? extends TensorConnectionOrBuilder> getTensorConnectionOrBuilderList()

 Tensors to be connected in the callable. Each TensorConnection denotes
 a pair of tensors in the graph, between which an edge will be created
 in the callable.
 
repeated .tensorflow.TensorConnection tensor_connection = 5;
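
A sketch of adding a TensorConnection entry, assuming the generated setFromTensor/setToTensor
accessors for the proto's from_tensor/to_tensor fields (the tensor names are illustrative):

  import org.tensorflow.proto.framework.TensorConnection;

  CallableOptions options = CallableOptions.newBuilder()
      // Inside the callable, consumers of "b:0" read the value of "a:0" instead,
      // as if an edge from "a:0" to "b:0" had been added to the graph.
      .addTensorConnection(TensorConnection.newBuilder()
          .setFromTensor("a:0")
          .setToTensor("b:0"))
      .build();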

public abstract boolean hasRunOptions()

 Options that will be applied to each run.
 
.tensorflow.RunOptions run_options = 4;
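
Because run_options is a singular message field, hasRunOptions() distinguishes an explicitly
populated value from the default instance that getRunOptions() would otherwise return; a
minimal reading sketch:

  if (options.hasRunOptions()) {
    // Only inspect run_options when the field was actually set; without this check
    // getRunOptions() returns RunOptions.getDefaultInstance().
    System.out.println(options.getRunOptions().getTraceLevel());
  }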