tensorflow::ClientSession

#include <client_session.h>

A ClientSession object lets the caller drive the evaluation of the TensorFlow graph constructed with the C++ API.

Summary

Example:

Scope root = Scope::NewRootScope();
auto a = Placeholder(root, DT_INT32);
auto c = Add(root, a, {41});

ClientSession session(root);
std::vector<Tensor> outputs;

Status s = session.Run({ {a, {1} } }, {c}, &outputs);
if (!s.ok()) { ... }
// outputs[0] == {42}

Constructors and Destructors

ClientSession(const Scope & scope, const string & target)
Create a new session to evaluate the graph contained in scope by connecting to the TensorFlow runtime specified by target.
ClientSession(const Scope & scope)
Same as above, but use the empty string ("") as the target specification.
ClientSession(const Scope & scope, const SessionOptions & session_options)
Create a new session, configuring it with session_options.
~ClientSession()

Public types

CallableHandle typedef
int64
A handle to a subgraph, created with ClientSession::MakeCallable().
FeedType typedef
std::unordered_map< Output, Input::Initializer, OutputHash >
A data type to represent feeds to a Run call.

Public functions

MakeCallable(const CallableOptions & callable_options, CallableHandle *out_handle)
Status
Creates a handle for invoking the subgraph defined by callable_options.
ReleaseCallable(CallableHandle handle)
Status
Releases resources associated with the given handle in this session.
Run(const std::vector< Output > & fetch_outputs, std::vector< Tensor > *outputs) const
Status
Evaluate the tensors in fetch_outputs.
Run(const FeedType & inputs, const std::vector< Output > & fetch_outputs, std::vector< Tensor > *outputs) const
Status
Same as above, but use the mapping in inputs as feeds.
Run(const FeedType & inputs, const std::vector< Output > & fetch_outputs, const std::vector< Operation > & run_outputs, std::vector< Tensor > *outputs) const
Status
Same as above. Additionally runs the operations in run_outputs.
Run(const RunOptions & run_options, const FeedType & inputs, const std::vector< Output > & fetch_outputs, const std::vector< Operation > & run_outputs, std::vector< Tensor > *outputs, RunMetadata *run_metadata) const
Status
Use run_options to turn on performance profiling.
Run(const RunOptions & run_options, const FeedType & inputs, const std::vector< Output > & fetch_outputs, const std::vector< Operation > & run_outputs, std::vector< Tensor > *outputs, RunMetadata *run_metadata, const thread::ThreadPoolOptions & threadpool_options) const
Status
Same as above, but additionally allows the caller to provide a custom thread pool via threadpool_options.
RunCallable(CallableHandle handle, const std::vector< Tensor > & feed_tensors, std::vector< Tensor > *fetch_tensors, RunMetadata *run_metadata)
Status
Invokes the subgraph named by handle with the given options and input tensors.
RunCallable(CallableHandle handle, const std::vector< Tensor > & feed_tensors, std::vector< Tensor > *fetch_tensors, RunMetadata *run_metadata, const thread::ThreadPoolOptions & options)
Status
Invokes the subgraph named by handle with the given options and input tensors.

Public types

CallableHandle

int64 CallableHandle

A handle to a subgraph, created with ClientSession::MakeCallable().

FeedType

std::unordered_map< Output, Input::Initializer, OutputHash > FeedType

A data type to represent feeds to a Run call.

This is a map of Output objects returned by op-constructors to the value to feed them with. See Input::Initializer for details on what can be used as feed values.
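
As a minimal sketch (the placeholder ops and values below are purely illustrative), a feed map can be built with brace initialization, just like the inline feeds in the summary example:

Scope root = Scope::NewRootScope();
auto x = Placeholder(root, DT_FLOAT);
auto y = Placeholder(root, DT_FLOAT);

// Map each Output to an Input::Initializer: a scalar for x, a 1-D tensor for y.
ClientSession::FeedType feeds = {
  {x, 3.0f},
  {y, {1.0f, 2.0f, 3.0f}},
};

Such a map can then be passed as the inputs argument to any of the Run overloads below.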

Public functions

ClientSession

 ClientSession(
  const Scope & scope,
  const string & target
)

Create a new session to evaluate the graph contained in scope by connecting to the TensorFlow runtime specified by target.
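
For illustration only (the server address below is a placeholder), a target of the form "grpc://host:port" points the session at a remote TensorFlow server, while the empty string selects the in-process runtime:

Scope root = Scope::NewRootScope();
auto c = Const(root, 42);

// Connect to a TensorFlow server; the address is hypothetical.
ClientSession session(root, "grpc://localhost:2222");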

ClientSession

 ClientSession(
  const Scope & scope
)

Same as above, but use the empty string ("") as the target specification.

ClientSession

 ClientSession(
  const Scope & scope,
  const SessionOptions & session_options
)

Create a new session, configuring it with session_options.
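
A minimal sketch of configuring the session through SessionOptions (the particular ConfigProto fields chosen here are just examples):

Scope root = Scope::NewRootScope();
auto c = Const(root, 42);

SessionOptions session_options;
// SessionOptions wraps a ConfigProto; tune it before creating the session.
session_options.config.set_intra_op_parallelism_threads(2);
session_options.config.set_inter_op_parallelism_threads(2);

ClientSession session(root, session_options);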

MakeCallable

Status MakeCallable(
  const CallableOptions & callable_options,
  CallableHandle *out_handle
)

Creates a handle for invoking the subgraph defined by callable_options.

NOTE: This API is still experimental and may change.
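
A minimal sketch of creating a callable (the op names "a" and "c" are chosen here for illustration; CallableOptions identifies feeds and fetches by tensor name):

Scope root = Scope::NewRootScope();
auto a = Placeholder(root.WithOpName("a"), DT_INT32);
auto c = Add(root.WithOpName("c"), a, {41});

ClientSession session(root);

// Feeds and fetches are named as "node:output_index".
CallableOptions callable_options;
callable_options.add_feed("a:0");
callable_options.add_fetch("c:0");

ClientSession::CallableHandle handle;
Status s = session.MakeCallable(callable_options, &handle);
if (!s.ok()) { ... }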

ReleaseCallable

Status ReleaseCallable(
  CallableHandle handle
)

Releases resources associated with the given handle in this session.

NOTE: This API is still experimental and may change.

Run

Status Run(
  const std::vector< Output > & fetch_outputs,
  std::vector< Tensor > *outputs
) const 

Evaluate the tensors in fetch_outputs.

The values are returned as Tensor objects in outputs. The number and order of outputs will match fetch_outputs.
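
A minimal sketch of this no-feed overload (the graph uses only constants, so nothing needs to be fed):

Scope root = Scope::NewRootScope();
auto sum = Add(root, Const(root, 2), Const(root, 3));

ClientSession session(root);
std::vector<Tensor> outputs;

Status s = session.Run({sum}, &outputs);
// On success, outputs[0] is a scalar int32 Tensor containing 5.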

Run

Status Run(
  const FeedType & inputs,
  const std::vector< Output > & fetch_outputs,
  std::vector< Tensor > *outputs
) const 

Same as above, but use the mapping in inputs as feeds.

Run

Status Run(
  const FeedType & inputs,
  const std::vector< Output > & fetch_outputs,
  const std::vector< Operation > & run_outputs,
  std::vector< Tensor > *outputs
) const 

Same as above. Additionally runs the operations in run_outputs.
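
As a sketch of why run_outputs is useful, state-mutating ops can be run purely for their side effects without being fetched (the variable/assign graph below is illustrative; the generated op wrappers expose their underlying Operation through their public operation member):

Scope root = Scope::NewRootScope();
auto var = Variable(root, {2}, DT_FLOAT);
auto init = Assign(root, var, {1.0f, 1.0f});
auto incr = AssignAdd(root, var, {1.0f, 1.0f});

ClientSession session(root);
std::vector<Tensor> outputs;

// Run the Assign op purely for its side effect; nothing is fetched.
Status s = session.Run({}, {}, {init.operation}, &outputs);

// Fetch the variable's value while also running AssignAdd in the same step.
s = session.Run({}, {var}, {incr.operation}, &outputs);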

Run

Status Run(
  const RunOptions & run_options,
  const FeedType & inputs,
  const std::vector< Output > & fetch_outputs,
  const std::vector< Operation > & run_outputs,
  std::vector< Tensor > *outputs,
  RunMetadata *run_metadata
) const 

Use run_options to turn on performance profiling.

run_metadata, if not null, is filled in with the profiling results.
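
A minimal sketch of collecting a full trace (reusing the session, a, and c from the summary example above):

RunOptions run_options;
run_options.set_trace_level(RunOptions::FULL_TRACE);
RunMetadata run_metadata;

std::vector<Tensor> outputs;
Status s = session.Run(run_options, { {a, {1}} }, {c}, {}, &outputs,
                       &run_metadata);
// On success, run_metadata.step_stats() holds per-node execution timings.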

Run

Status Run(
  const RunOptions & run_options,
  const FeedType & inputs,
  const std::vector< Output > & fetch_outputs,
  const std::vector< Operation > & run_outputs,
  std::vector< Tensor > *outputs,
  RunMetadata *run_metadata,
  const thread::ThreadPoolOptions & threadpool_options
) const 

Same as above.

Additionally allows the caller to provide a custom thread pool implementation via ThreadPoolOptions.
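
The sketch below illustrates the idea under the assumption that thread::ThreadPoolOptions exposes inter_op_threadpool and intra_op_threadpool pointers and that thread::ThreadPool can hand out its Eigen interface via AsEigenThreadPool(); check the platform headers before relying on these names. It reuses the session, a, and c from the summary example.

// Non-owning pointers: the pool must outlive every Run call that uses it.
thread::ThreadPool pool(Env::Default(), "client_pool", /*num_threads=*/4);

thread::ThreadPoolOptions threadpool_options;
threadpool_options.inter_op_threadpool = pool.AsEigenThreadPool();
threadpool_options.intra_op_threadpool = pool.AsEigenThreadPool();

std::vector<Tensor> outputs;
Status s = session.Run(RunOptions(), { {a, {1}} }, {c}, {}, &outputs,
                       /*run_metadata=*/nullptr, threadpool_options);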

RunCallable

Status RunCallable(
  CallableHandle handle,
  const std::vector< Tensor > & feed_tensors,
  std::vector< Tensor > *fetch_tensors,
  RunMetadata *run_metadata
)

Invokes the subgraph named by handle with the given options and input tensors.

The order of tensors in feed_tensors must match the order of names in CallableOptions::feed(), and the order of tensors in fetch_tensors will match the order of names in CallableOptions::fetch() when this subgraph was created.

NOTE: This API is still experimental and may change.
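
Putting the callable API together, a sketch that reuses the "a"/"c" graph and session from the MakeCallable example above (the tensor names are assumptions):

CallableOptions callable_options;
callable_options.add_feed("a:0");
callable_options.add_fetch("c:0");

ClientSession::CallableHandle handle;
Status s = session.MakeCallable(callable_options, &handle);
if (!s.ok()) { ... }

// feed_tensors[0] corresponds to "a:0"; fetch_tensors[0] will hold "c:0".
Tensor feed(DT_INT32, TensorShape({}));
feed.scalar<int32>()() = 1;

std::vector<Tensor> fetch_tensors;
RunMetadata run_metadata;
s = session.RunCallable(handle, {feed}, &fetch_tensors, &run_metadata);

// Release the handle once the callable is no longer needed.
s = session.ReleaseCallable(handle);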

RunCallable

Status RunCallable(
  CallableHandle handle,
  const std::vector< Tensor > & feed_tensors,
  std::vector< Tensor > *fetch_tensors,
  RunMetadata *run_metadata,
  const thread::ThreadPoolOptions & options
)

Invokes the subgraph named by handle with the given options and input tensors.

The order of tensors in feed_tensors must match the order of names in CallableOptions::feed(), and the order of tensors in fetch_tensors will match the order of names in CallableOptions::fetch() when this subgraph was created.

NOTE: This API is still experimental and may change.

~ClientSession

 ~ClientSession()