InterpreterApi.Options

public static class InterpreterApi.Options

An options class for controlling runtime interpreter behavior.
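
A minimal usage sketch (not part of the original reference); it assumes a model file on disk, placeholder tensor shapes, and the InterpreterApi.create(File, InterpreterApi.Options) factory:

import java.io.File;
import org.tensorflow.lite.InterpreterApi;

class OptionsExample {
  static void runModel(File modelFile, float[][] input, float[][] output) {
    // Configure the interpreter before creation; the options are applied by create().
    InterpreterApi.Options options = new InterpreterApi.Options()
        .setNumThreads(4)      // ops that support multi-threading may use up to 4 threads
        .setUseNNAPI(false);   // leave NNAPI disabled (the default)

    InterpreterApi interpreter = InterpreterApi.create(modelFile, options);
    try {
      interpreter.run(input, output);
    } finally {
      interpreter.close();
    }
  }
}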

Nested Classes

enum
InterpreterApi.Options.TfLiteRuntime
Enum to represent where to get the TensorFlow Lite runtime implementation from.

Public Constructors

Options()
Options(InterpreterApi.Options other)

Public Methods

InterpreterApi.Options
addDelegate(Delegate delegate)
Adds a Delegate to be applied during interpreter creation.
InterpreterApi.Options
addDelegateFactory(DelegateFactory delegateFactory)
Adds a DelegateFactory which will be invoked to apply its created Delegate during interpreter creation.
ValidatedAccelerationConfig
getAccelerationConfig()
Returns the acceleration configuration.
List<DelegateFactory>
getDelegateFactories()
Returns the list of delegate factories that have been registered via addDelegateFactory(DelegateFactory).
List<Delegate>
getDelegates()
Returns the list of delegates intended to be applied during interpreter creation that have been registered via addDelegate.
int
getNumThreads()
Returns the number of threads to be used for ops that support multi-threading.
InterpreterApi.Options.TfLiteRuntime
getRuntime()
Returns where to get the TF Lite runtime implementation from.
boolean
getUseNNAPI()
Returns whether to use NN API (if available) for op execution.
boolean
getUseXNNPACK()
Returns whether to use an optimized set of CPU kernels provided by XNNPACK.
boolean
isCancellable()
Advanced: Returns whether the interpreter is able to be cancelled.
InterpreterApi.Options
setAccelerationConfig(ValidatedAccelerationConfig config)
Specify the acceleration configuration.
InterpreterApi.Options
setCancellable(boolean allow)
Advanced: Set if the interpreter is able to be cancelled.
InterpreterApi.Options
setNumThreads(int numThreads)
Sets the number of threads to be used for ops that support multi-threading.
InterpreterApi.Options
setRuntime(InterpreterApi.Options.TfLiteRuntime runtime)
Specify where to get the TF Lite runtime implementation from.
InterpreterApi.Options
setUseNNAPI(boolean useNNAPI)
Sets whether to use NN API (if available) for op execution.
InterpreterApi.Options
setUseXNNPACK(boolean useXNNPACK)
Enable or disable an optimized set of CPU kernels (provided by XNNPACK).


Public Constructors

public Options ()

public Options (InterpreterApi.Options other)

Parameters
other

Public Methods

public InterpreterApi.Options addDelegate (Delegate delegate)

Adds a Delegate to be applied during interpreter creation.

Delegates added here are applied before any delegates created from a DelegateFactory that was added with addDelegateFactory(DelegateFactory).

Note that TF Lite in Google Play Services (see setRuntime(InterpreterApi.Options.TfLiteRuntime)) does not support external (developer-provided) delegates, and adding a Delegate other than NnApiDelegate here is not allowed when using TF Lite in Google Play Services.

Parameters
delegate
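
A hedged sketch of adding a developer-provided delegate; it assumes the bundled (non-Play-Services) runtime and the separate GPU delegate artifact providing org.tensorflow.lite.gpu.GpuDelegate:

import java.io.File;
import org.tensorflow.lite.InterpreterApi;
import org.tensorflow.lite.gpu.GpuDelegate;

class AddDelegateExample {
  static InterpreterApi createWithGpu(File modelFile) {
    // The delegate is applied during interpreter creation, before any delegates
    // produced by factories registered via addDelegateFactory(DelegateFactory).
    GpuDelegate gpuDelegate = new GpuDelegate();
    InterpreterApi.Options options = new InterpreterApi.Options().addDelegate(gpuDelegate);
    // Note: the caller remains responsible for eventually closing both the
    // returned interpreter and the delegate.
    return InterpreterApi.create(modelFile, options);
  }
}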

public InterpreterApi.Options addDelegateFactory (DelegateFactory delegateFactory)

Adds a DelegateFactory which will be invoked to apply its created Delegate during interpreter creation.

Delegates from a delegate factory that was added here are applied after any delegates added with addDelegate(Delegate).

Parameters
delegateFactory
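
A hedged sketch of registering a factory; it assumes DelegateFactory is a single-method interface whose create(RuntimeFlavor) is invoked at interpreter creation time, and it reuses the GPU delegate from the previous sketch:

import org.tensorflow.lite.Delegate;
import org.tensorflow.lite.DelegateFactory;
import org.tensorflow.lite.InterpreterApi;
import org.tensorflow.lite.RuntimeFlavor;
import org.tensorflow.lite.gpu.GpuDelegate;

class AddDelegateFactoryExample {
  static InterpreterApi.Options withGpuFactory() {
    // The factory is invoked during interpreter creation; the delegate it creates
    // is applied after any delegates added directly via addDelegate(Delegate).
    DelegateFactory gpuFactory = new DelegateFactory() {
      @Override
      public Delegate create(RuntimeFlavor runtimeFlavor) {
        return new GpuDelegate();
      }
    };
    return new InterpreterApi.Options().addDelegateFactory(gpuFactory);
  }
}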

public ValidatedAccelerationConfig getAccelerationConfig ()

Returns the acceleration configuration.

public List<DelegateFactory> getDelegateFactories ()

Returns the list of delegate factories that have been registered via addDelegateFactory(DelegateFactory).

public List<Delegate> getDelegates ()

Returns the list of delegates intended to be applied during interpreter creation that have been registered via addDelegate.

public int getNumThreads ()

Returns the number of threads to be used for ops that support multi-threading.

numThreads should be >= -1. Values of 0 (or 1) disable multithreading. Default value is -1: the number of threads used will be implementation-defined and platform-dependent.

public InterpreterApi.Options.TfLiteRuntime getRuntime ()

Returns where to get the TF Lite runtime implementation from.

public boolean getUseNNAPI ()

Returns whether to use NN API (if available) for op execution. Default value is false (disabled).

public boolean getUseXNNPACK ()

Returns whether to use an optimized set of CPU kernels provided by XNNPACK.

public boolean isCancellable ()

Advanced: Returns whether the interpreter is able to be cancelled.

Interpreters may have an experimental API setCancelled(boolean). If this interpreter is cancellable and such a method is invoked, a cancellation flag will be set to true. The interpreter will check the flag between Op invocations, and if it's true, the interpreter will stop execution. The interpreter will remain in a cancelled state until explicitly "uncancelled" by setCancelled(false).

public InterpreterApi.Options setAccelerationConfig (ValidatedAccelerationConfig config)

Specify the acceleration configuration.

Parameters
config

public InterpreterApi.Options setCancellable (boolean allow)

Advanced: Set if the interpreter is able to be cancelled.

Interpreters may have an experimental API setCancelled(boolean). If this interpreter is cancellable and such a method is invoked, a cancellation flag will be set to true. The interpreter will check the flag between Op invocations, and if it's true, the interpreter will stop execution. The interpreter will remain in a cancelled state until explicitly "uncancelled" by setCancelled(false).

Parameters
allow
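
A hedged sketch of the cancellation flow, assuming the concrete Interpreter class and its experimental setCancelled(boolean) method mentioned above:

import java.io.File;
import org.tensorflow.lite.Interpreter;

class CancellationExample {
  static void runCancellable(File modelFile, float[][] input, float[][] output) {
    Interpreter.Options options = new Interpreter.Options();
    options.setCancellable(true);  // opt in to cancellation support
    Interpreter interpreter = new Interpreter(modelFile, options);
    try {
      // Another thread may flip the flag while run() is in flight; the interpreter
      // checks it between op invocations and stops execution once it is set.
      new Thread(() -> interpreter.setCancelled(true)).start();
      interpreter.run(input, output);
    } finally {
      interpreter.close();
    }
  }
}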

public InterpreterApi.Options setNumThreads (int numThreads)

Sets the number of threads to be used for ops that support multi-threading.

numThreads should be >= -1. Setting numThreads to 0 has the effect of disabling multithreading, which is equivalent to setting numThreads to 1. If unspecified, or set to the value -1, the number of threads used will be implementation-defined and platform-dependent.

Parameters
numThreads
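
A brief illustration of the thread-count semantics described above (the values are examples only):

import org.tensorflow.lite.InterpreterApi;

class NumThreadsExample {
  static InterpreterApi.Options configureThreads() {
    InterpreterApi.Options options = new InterpreterApi.Options();
    options.setNumThreads(4);     // up to four threads for ops that support multi-threading
    // options.setNumThreads(0);  // same effect as 1: multithreading disabled
    // options.setNumThreads(-1); // default: implementation-defined, platform-dependent
    return options;
  }
}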

public InterpreterApi.Options setRuntime (InterpreterApi.Options.TfLiteRuntime runtime)

Specify where to get the TF Lite runtime implementation from.

Parameters
runtime
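
A hedged sketch, assuming the TfLiteRuntime enum exposes a PREFER_SYSTEM_OVER_APPLICATION constant for preferring the TF Lite in Google Play Services runtime over the one bundled with the application:

import org.tensorflow.lite.InterpreterApi;
import org.tensorflow.lite.InterpreterApi.Options.TfLiteRuntime;

class RuntimeExample {
  static InterpreterApi.Options preferSystemRuntime() {
    // Use the system-provided (Google Play services) runtime when available,
    // otherwise fall back to the runtime bundled with the application.
    return new InterpreterApi.Options()
        .setRuntime(TfLiteRuntime.PREFER_SYSTEM_OVER_APPLICATION);
  }
}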

public InterpreterApi.Options setUseNNAPI (boolean useNNAPI)

Sets whether to use NN API (if available) for op execution. Defaults to false (disabled).

Parameters
useNNAPI

public InterpreterApi.Options setUseXNNPACK (boolean useXNNPACK)

Enable or disable an optimized set of CPU kernels (provided by XNNPACK). Enabled by default.

Parameters
useXNNPACK
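
A short sketch combining the two flags above; both calls simply make the documented defaults explicit:

import org.tensorflow.lite.InterpreterApi;

class CpuKernelsExample {
  static InterpreterApi.Options cpuDefaults() {
    return new InterpreterApi.Options()
        .setUseNNAPI(false)    // NNAPI disabled (the default)
        .setUseXNNPACK(true);  // XNNPACK-optimized CPU kernels enabled (the default)
  }
}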