Returns an executor factory backed by the C++ runtime.
tff.framework.local_cpp_executor_factory(
*,
default_num_clients: int = 0,
max_concurrent_computation_calls: int = -1,
stream_structs: bool = False,
server_mesh: Optional[tf.experimental.dtensor.Mesh] = None,
client_mesh: Optional[tf.experimental.dtensor.Mesh] = None
) -> tff.framework.ExecutorFactory
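A minimal usage sketch follows. It assumes a TFF installation in which tff.framework.SyncExecutionContext and tff.framework.set_default_context are available; the exact context-wiring API varies across TFF releases, so treat the wrapping step as illustrative rather than definitive.

import tensorflow_federated as tff

# Build an executor factory backed by the local C++ runtime. Ten clients are
# used as the default cardinality when a computation's arguments do not imply
# one, and at most four concurrent calls to a single computation are allowed.
factory = tff.framework.local_cpp_executor_factory(
    default_num_clients=10,
    max_concurrent_computation_calls=4,
)

# Wrap the factory in an execution context and install it as the default, so
# that invoked tff.Computations run against the local C++ runtime. The context
# class name is an assumption here and may differ between TFF versions.
context = tff.framework.SyncExecutionContext(factory)
tff.framework.set_default_context(context)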
Args

default_num_clients
    The number of clients to use as the default cardinality if this number cannot be inferred from the arguments of a computation.

max_concurrent_computation_calls
    The maximum number of concurrent calls to a single computation in the C++ runtime. If nonpositive, there is no limit.

stream_structs
    Flag to enable decomposing and streaming struct values.

server_mesh
    If present, the worker binary will create a DTensor-based executor on the server side using the given DTensor Mesh (a usage sketch appears at the end of this section).

client_mesh
    If present, the worker binary will create a DTensor-based executor on the client side using the given DTensor Mesh.
Raises

RuntimeError
    If an internal C++ worker binary cannot be found.
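The mesh arguments can be built with TensorFlow's DTensor API. The sketch below is illustrative only; it assumes tf.experimental.dtensor.create_mesh is available in the installed TensorFlow and that the CPU devices backing the meshes have already been configured for DTensor.

import tensorflow as tf
import tensorflow_federated as tff

# Hypothetical single-dimension CPU meshes for the server- and client-side
# executors. Any accelerator or virtual-device setup required by DTensor is
# assumed to have been done beforehand.
server_mesh = tf.experimental.dtensor.create_mesh([("batch", 1)], device_type="CPU")
client_mesh = tf.experimental.dtensor.create_mesh([("batch", 2)], device_type="CPU")

# Passing the meshes asks the worker binary to create DTensor-based executors
# on the server and client sides, per the argument descriptions above.
factory = tff.framework.local_cpp_executor_factory(
    default_num_clients=2,
    server_mesh=server_mesh,
    client_mesh=client_mesh,
)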