tflite::Interpreter
#include <interpreter.h>
An interpreter for a graph of nodes that input and output from tensors.
Summary
Each node of the graph processes a set of input tensors and produces a set of output tensors. All input/output tensors are referenced by index.
Usage:

// Create model from file. Note that the model instance must outlive the
// interpreter instance.
auto model = tflite::FlatBufferModel::BuildFromFile(...);
if (model == nullptr) {
  // Return error.
}
// Create an Interpreter with an InterpreterBuilder.
std::unique_ptr<Interpreter> interpreter;
tflite::ops::builtin::BuiltinOpResolver resolver;
if (InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk) {
  // Return failure.
}
interpreter->AllocateTensors();

auto input = interpreter->typed_tensor<float>(0);
for (int i = 0; i < input_size; i++) {
  input[i] = ...;
}
interpreter->Invoke();
Note: for nearly all practical use cases, one should not directly construct an Interpreter object, but rather use the InterpreterBuilder.
| Constructors and Destructors | |
| --- | --- |
| Interpreter(ErrorReporter *error_reporter) | |
| Interpreter(const Interpreter &) | |
| ~Interpreter() | |
| Public types | |
| --- | --- |
| TfLiteDelegatePtr | using std::unique_ptr< TfLiteDelegate, void(*)(TfLiteDelegate *)> |
| Public static attributes | |
| --- | --- |
| kTensorsCapacityHeadroom = 16 | constexpr int. The capacity headroom of the tensors_ vector before calling ops' prepare and invoke functions. |
| kTensorsReservedCapacity = 128 | constexpr int |
| Friend classes | |
| --- | --- |
| tflite::InterpreterTest | friend class |
| tflite::TestDelegate | friend class |
| tflite::delegates::InterpreterUtils | friend class |
| Public functions | |
| --- | --- |
| AllocateTensors() | TfLiteStatus |
| EnsureTensorDataIsReadable(int tensor_index) | TfLiteStatus. Ensure the data in tensor.data is readable. |
| GetAllowFp16PrecisionForFp32() const | bool. Get the half precision flag. |
| GetBufferHandle(int tensor_index, TfLiteBufferHandle *buffer_handle, TfLiteDelegate **delegate) | TfLiteStatus. Get the delegate buffer handle, and the delegate which can process the buffer handle. |
| GetInputName(int index) const | const char *. Return the name of a given input. |
| GetOutputName(int index) const | const char *. Return the name of a given output. |
| GetProfiler() | Profiler *. Gets the profiler used for op tracing. |
| Invoke() | TfLiteStatus. Invoke the interpreter (run the whole graph in dependency order). |
| ModifyGraphWithDelegate(TfLiteDelegate *delegate) | TfLiteStatus. Allow a delegate to look at the graph and modify the graph to handle parts of the graph itself. |
| ModifyGraphWithDelegate(std::unique_ptr< Delegate, Deleter > delegate) | TfLiteStatus. Same as ModifyGraphWithDelegate except this interpreter takes ownership of the provided delegate. |
| ModifyGraphWithDelegate(std::unique_ptr< TfLiteDelegate > delegate) = delete | TfLiteStatus. This overload is never OK. |
| OpProfilingString(const TfLiteRegistration & op_reg, const TfLiteNode *node) const | const char *. Retrieve an operator's description of its work, for profiling purposes. |
| ReleaseNonPersistentMemory() | TfLiteStatus. WARNING: Experimental interface, subject to change. |
| ResetVariableTensors() | TfLiteStatus. Reset all variable tensors to the default value. |
| ResizeInputTensor(int tensor_index, const std::vector< int > & dims) | TfLiteStatus. Change the dimensionality of a given tensor. |
| ResizeInputTensorStrict(int tensor_index, const std::vector< int > & dims) | TfLiteStatus. Change the dimensionality of a given tensor, allowing only unknown dimensions to be resized; a call to AllocateTensors() is then required to change the tensor input buffer. |
| SetAllowBufferHandleOutput(bool allow_buffer_handle_output) | void. Set if buffer handle output is allowed. |
| SetAllowFp16PrecisionForFp32(bool allow) | void. Allow float16 precision for FP32 calculation when possible. |
| SetBufferHandle(int tensor_index, TfLiteBufferHandle buffer_handle, TfLiteDelegate *delegate) | TfLiteStatus. Set the delegate buffer handle to a tensor. |
| SetCancellationFunction(void *data, bool (*check_cancelled_func)(void *)) | void. Sets the cancellation function pointer in order to cancel a request in the middle of a call to Invoke(). |
| SetCustomAllocationForTensor(int tensor_index, const TfLiteCustomAllocation & allocation) | TfLiteStatus |
| SetExecutionPlan(const std::vector< int > & new_plan) | TfLiteStatus. Overrides the execution plan. WARNING: Experimental interface, subject to change. |
| SetExternalContext(TfLiteExternalContextType type, TfLiteExternalContext *ctx) | void |
| SetNumThreads(int num_threads) | TfLiteStatus. Set the number of threads available to the interpreter. |
| SetProfiler(Profiler *profiler) | void. Sets the profiler used to trace execution. |
| SetProfiler(std::unique_ptr< Profiler > profiler) | void. Same as SetProfiler except this interpreter takes ownership of the provided profiler. |
| UseNNAPI(bool enable) | void. Enable or disable NNAPI (true to enable). |
| execution_plan() const | const std::vector< int > &. WARNING: Experimental interface, subject to change. |
| input_tensor(size_t index) | TfLiteTensor *. Return a mutable pointer to the given input tensor. |
| input_tensor(size_t index) const | const TfLiteTensor *. Return an immutable pointer to the given input tensor. |
| inputs() const | const std::vector< int > &. Read-only access to the list of inputs. |
| node_and_registration(int node_index) const | const std::pair< TfLiteNode, TfLiteRegistration > *. Get a pointer to an operation and registration data structure if in bounds. |
| nodes_size() const | size_t. Return the number of ops in the model. |
| operator=(const Interpreter &) = delete | |
| output_tensor(size_t index) | TfLiteTensor *. Return a mutable pointer to the given output tensor. |
| output_tensor(size_t index) const | const TfLiteTensor *. Return an immutable pointer to the given output tensor. |
| outputs() const | const std::vector< int > &. Read-only access to the list of outputs. |
| tensor(int tensor_index) | TfLiteTensor *. Get a mutable tensor data structure. |
| tensor(int tensor_index) const | const TfLiteTensor *. Get an immutable tensor data structure. |
| tensors_size() const | size_t. Return the number of tensors in the model. |
| typed_input_tensor(int index) | T *. Return a mutable pointer into the data of a given input tensor. |
| typed_input_tensor(int index) const | const T *. Return an immutable pointer into the data of a given input tensor. |
| typed_output_tensor(int index) | T *. Return a mutable pointer into the data of a given output tensor. |
| typed_output_tensor(int index) const | const T *. Return an immutable pointer into the data of a given output tensor. |
| typed_tensor(int tensor_index) | T *. Perform a checked cast to the appropriate tensor type (mutable pointer version). |
| typed_tensor(int tensor_index) const | const T *. Perform a checked cast to the appropriate tensor type (immutable pointer version). |
| variables() const | const std::vector< int > &. Read-only access to the list of variable tensors. |
Public types
TfLiteDelegatePtr
std::unique_ptr< TfLiteDelegate, void(*)(TfLiteDelegate *)> TfLiteDelegatePtr
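For illustration, a minimal sketch of wrapping a raw delegate in a TfLiteDelegatePtr so that creation and destruction stay paired; MyDelegateCreate and MyDelegateDestroy are hypothetical stand-ins for a real delegate's factory and teardown functions:

// Hypothetical factory/teardown functions; substitute a real delegate's API.
TfLiteDelegate* raw = MyDelegateCreate();
tflite::Interpreter::TfLiteDelegatePtr delegate(
    raw, [](TfLiteDelegate* d) { MyDelegateDestroy(d); });
// Passing delegate.get() to ModifyGraphWithDelegate keeps ownership here.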
Public static attributes
kTensorsCapacityHeadroom
constexpr int kTensorsCapacityHeadroom = 16
The capacity headroom of the tensors_ vector before calling ops' prepare and invoke functions. Within these functions, it is guaranteed that allocating up to kTensorsCapacityHeadroom additional tensors will not invalidate pointers to existing tensors.
kTensorsReservedCapacity
constexpr int kTensorsReservedCapacity = 128
Friend classes
tflite::InterpreterTest
friend class tflite::InterpreterTest
tflite::TestDelegate
friend class tflite::TestDelegate
tflite::delegates::InterpreterUtils
friend class tflite::delegates::InterpreterUtils
Public functions
AllocateTensors
TfLiteStatus AllocateTensors()
EnsureTensorDataIsReadable
TfLiteStatus EnsureTensorDataIsReadable( int tensor_index )
Ensure the data in tensor.data is readable.
If a delegate is in use, this may require copying the data from the delegate's buffer to raw memory. WARNING: This is an experimental API and subject to change.
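A minimal sketch of the intended call order, assuming a built interpreter that has had a delegate applied with buffer handle output enabled:

// Make delegate-held output data readable on the CPU before touching it.
int out_index = interpreter->outputs()[0];
if (interpreter->EnsureTensorDataIsReadable(out_index) == kTfLiteOk) {
  const float* out = interpreter->typed_tensor<float>(out_index);  // assumes a float tensor
  // out[...] is now safe to read.
}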
GetAllowFp16PrecisionForFp32
bool GetAllowFp16PrecisionForFp32() const
Get the half precision flag.
WARNING: This is an experimental API and subject to change.
GetBufferHandle
TfLiteStatus GetBufferHandle( int tensor_index, TfLiteBufferHandle *buffer_handle, TfLiteDelegate **delegate )
Get the delegate buffer handle, and the delegate which can process the buffer handle.
WARNING: This is an experimental API and subject to change.
GetInputName
const char * GetInputName( int index ) const
Return the name of a given input.
The given index must be between 0 and inputs().size().
GetOutputName
const char * GetOutputName( int index ) const
Return the name of a given output.
The given index must be between 0 and outputs().size().
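A small sketch, assuming a built interpreter (and that <cstdio> is included), that enumerates both name tables:

// Print every input and output name by index.
for (int i = 0; i < static_cast<int>(interpreter->inputs().size()); ++i) {
  printf("input %d: %s\n", i, interpreter->GetInputName(i));
}
for (int i = 0; i < static_cast<int>(interpreter->outputs().size()); ++i) {
  printf("output %d: %s\n", i, interpreter->GetOutputName(i));
}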
GetProfiler
Profiler * GetProfiler()
Gets the profiler used for op tracing.
WARNING: This is an experimental API and subject to change.
Invoke
TfLiteStatus Invoke()
Invoke the interpreter (run the whole graph in dependency order).
NOTE: It is possible that the interpreter is not in a ready state to evaluate (i.e. if a ResizeTensor() has been performed without a subsequent AllocateTensors()). Returns status of success or failure.
ModifyGraphWithDelegate
TfLiteStatus ModifyGraphWithDelegate( TfLiteDelegate *delegate )
Allow a delegate to look at the graph and modify the graph to handle parts of the graph themselves.
After this is called, the graph may contain new nodes that replace one or more of the original nodes. 'delegate' must outlive the interpreter. Returns one of the following status codes:
- kTfLiteOk: Success.
- kTfLiteDelegateError: Delegation failed due to an error in the delegate. The Interpreter has been restored to its pre-delegation state. NOTE: This undoes all delegates previously applied to the Interpreter.
- kTfLiteApplicationError: Delegation failed to be applied due to incompatibility with the TfLite runtime, e.g., the model graph is already immutable when applying the delegate. However, the interpreter could still be invoked.
- kTfLiteError: Unexpected/runtime failure.
WARNING: This is an experimental API and subject to change.
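A sketch of handling each documented status code, assuming 'delegate' points at a valid TfLiteDelegate that outlives the interpreter:

switch (interpreter->ModifyGraphWithDelegate(delegate)) {
  case kTfLiteOk:
    break;  // Parts of the graph now execute on the delegate.
  case kTfLiteDelegateError:
    // Restored to the pre-delegation state; earlier delegates were undone.
    break;
  case kTfLiteApplicationError:
    // Incompatible with the current runtime/graph; the interpreter
    // can still be invoked without the delegate.
    break;
  default:  // kTfLiteError: unexpected runtime failure.
    break;
}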
ModifyGraphWithDelegate
TfLiteStatus ModifyGraphWithDelegate( std::unique_ptr< Delegate, Deleter > delegate )
Same as ModifyGraphWithDelegate except this interpreter takes ownership of the provided delegate.
WARNING: This is an experimental API and subject to change.
ModifyGraphWithDelegate
TfLiteStatus ModifyGraphWithDelegate( std::unique_ptr< TfLiteDelegate > delegate )=delete
This overload is never OK.
TfLiteDelegate is a C structure, so it has no virtual destructor. The default deleter of the unique_ptr does not know how to delete C++ objects deriving from TfLiteDelegate.
OpProfilingString
const char * OpProfilingString( const TfLiteRegistration & op_reg, const TfLiteNode *node ) const
Retrieve an operator's description of its work, for profiling purposes.
ReleaseNonPersistentMemory
TfLiteStatus ReleaseNonPersistentMemory()
WARNING: Experimental interface, subject to change.
ResetVariableTensors
TfLiteStatus ResetVariableTensors()
Reset all variable tensors to the default value.
If a variable tensor doesn't have a buffer, it is reset to zero. TODO(b/115961645): if a variable tensor has a buffer, reset it to the value of the buffer (not yet implemented). WARNING: This is an experimental API and subject to change.
ResizeInputTensor
TfLiteStatus ResizeInputTensor( int tensor_index, const std::vector< int > & dims )
Change the dimensionality of a given tensor.
Note that this is only acceptable for tensor indices that are inputs or variables. Returns status of failure or success. Note that this doesn't actually resize any existing buffers; a call to AllocateTensors() is required to change the tensor input buffer.
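A minimal sketch, assuming a built interpreter whose first input is an NHWC float tensor, of resizing followed by the required reallocation:

int input_index = interpreter->inputs()[0];
if (interpreter->ResizeInputTensor(input_index, {4, 224, 224, 3}) != kTfLiteOk ||
    interpreter->AllocateTensors() != kTfLiteOk) {
  // Handle resize/allocation failure.
}
// Input buffers are valid again; write data, then Invoke().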
ResizeInputTensorStrict
TfLiteStatus ResizeInputTensorStrict( int tensor_index, const std::vector< int > & dims )
Change the dimensionality of a given tensor, allowing only unknown dimensions (indicated as -1 in the dims_signature of the TfLiteTensor) to be resized. As with ResizeInputTensor(), this doesn't actually resize any existing buffers; a call to AllocateTensors() is required to change the tensor input buffer.
SetAllowBufferHandleOutput
void SetAllowBufferHandleOutput( bool allow_buffer_handle_output )
Set if buffer handle output is allowed.
When using hardware delegation, Interpreter will make the data of output tensors available in tensor->data by default. If the application can consume the buffer handle directly (e.g. reading output from an OpenGL texture), it can set this flag to true so that Interpreter won't copy the data from the buffer handle to CPU memory. WARNING: This is an experimental API and subject to change.
SetAllowFp16PrecisionForFp32
void SetAllowFp16PrecisionForFp32( bool allow )
Allow float16 precision for FP32 calculation when possible.
Default: not allowed.
WARNING: This API is deprecated; prefer controlling this via delegate options, e.g. tflite::StatefulNnApiDelegate::Options::allow_fp16 or TfLiteGpuDelegateOptionsV2::is_precision_loss_allowed. This method will be removed in a future release.
SetBufferHandle
TfLiteStatus SetBufferHandle( int tensor_index, TfLiteBufferHandle buffer_handle, TfLiteDelegate *delegate )
Set the delegate buffer handle to a tensor.
It can be called in the following cases:
- Set the buffer handle to a tensor that's not being written by a delegate. For example, feeding an OpenGL texture as the input of the inference graph.
- Set the buffer handle to a tensor that uses the same delegate. For example, setting an OpenGL texture as the output of inference, while the node which produces the output is an OpenGL delegate node.
WARNING: This is an experimental API and subject to change.
SetCancellationFunction
void SetCancellationFunction( void *data, bool (*check_cancelled_func)(void *) )
Sets the cancellation function pointer in order to cancel a request in the middle of a call to Invoke() .
The interpreter queries this function during inference, between op invocations; when it returns true, the interpreter will abort execution and return kTfLiteError. The data parameter contains any data used by the cancellation function, and if non-null, remains owned by the caller. WARNING: This is an experimental API and subject to change.
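A sketch of a caller-owned cancellation flag, assuming it is flipped from another thread while Invoke() runs (requires <atomic>):

std::atomic<bool> cancelled(false);  // owned by the caller
interpreter->SetCancellationFunction(
    &cancelled,
    [](void* data) { return static_cast<std::atomic<bool>*>(data)->load(); });
// Setting cancelled = true elsewhere makes Invoke() return kTfLiteError
// at the next op boundary.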
SetCustomAllocationForTensor
TfLiteStatus SetCustomAllocationForTensor( int tensor_index, const TfLiteCustomAllocation & allocation )
SetExecutionPlan
TfLiteStatus SetExecutionPlan( const std::vector< int > & new_plan )
Overrides the execution plan. This bounds-checks the indices that are passed in. WARNING: Experimental interface, subject to change.
SetExternalContext
void SetExternalContext( TfLiteExternalContextType type, TfLiteExternalContext *ctx )
SetNumThreads
TfLiteStatus SetNumThreads( int num_threads )
Set the number of threads available to the interpreter.
NOTE: num_threads should be >= -1. Passing -1 lets the TFLite interpreter determine the number of threads available to itself.
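For example, a quick sketch of both modes:

interpreter->SetNumThreads(2);   // request two threads
// interpreter->SetNumThreads(-1);  // let TFLite decide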
SetProfiler
void SetProfiler( Profiler *profiler )
Sets the profiler used to trace execution.
The caller retains ownership of the profiler and must ensure its validity. WARNING: This is an experimental API and subject to change.
SetProfiler
void SetProfiler( std::unique_ptr< Profiler > profiler )
Same as SetProfiler except this interpreter takes ownership of the provided profiler.
WARNING: This is an experimental API and subject to change.
UseNNAPI
void UseNNAPI( bool enable )
Enable or disable NNAPI (true to enable).
Disabled by default.
WARNING: NNAPI cannot be disabled after the graph has been prepared (via AllocateTensors) with NNAPI enabled.
WARNING: This API is deprecated; prefer using the NNAPI delegate directly. This method will be removed in a future release.
execution_plan
const std::vector< int > & execution_plan() const
WARNING: Experimental interface, subject to change.
input_tensor
TfLiteTensor * input_tensor( size_t index )
Return a mutable pointer to the given input tensor.
The given index must be between 0 and inputs().size().
input_tensor
const TfLiteTensor * input_tensor( size_t index ) const
Return an immutable pointer to the given input tensor.
The given index must be between 0 and inputs().size().
inputs
const std::vector< int > & inputs() const
Read-only access to the list of inputs.
node_and_registration
const std::pair< TfLiteNode, TfLiteRegistration > * node_and_registration( int node_index ) const
Get a pointer to an operation and registration data structure if in bounds.
nodes_size
size_t nodes_size() const
Return the number of ops in the model.
output_tensor
TfLiteTensor * output_tensor( size_t index )
Return a mutable pointer to the given output tensor.
The given index must be between 0 and outputs().size().
output_tensor
const TfLiteTensor * output_tensor( size_t index ) const
Return an immutable pointer to the given output tensor.
The given index must be between 0 and outputs().size().
outputs
const std::vector< int > & outputs() const
Read-only access to the list of outputs.
tensor
TfLiteTensor * tensor( int tensor_index )
Get a mutable tensor data structure.
tensor
const TfLiteTensor * tensor( int tensor_index ) const
Get an immutable tensor data structure.
tensors_size
size_t tensors_size() const
Return the number of tensors in the model.
typed_input_tensor
T * typed_input_tensor( int index )
Return a mutable pointer into the data of a given input tensor.
The given index must be between 0 and inputs().size().
typed_input_tensor
const T * typed_input_tensor( int index ) const
Return an immutable pointer into the data of a given input tensor.
The given index must be between 0 and inputs().size().
typed_output_tensor
T * typed_output_tensor( int index )
Return a mutable pointer into the data of a given output tensor.
The given index must be between 0 and outputs().size().
typed_output_tensor
const T * typed_output_tensor( int index ) const
Return an immutable pointer into the data of a given output tensor.
The given index must be between 0 and outputs().size().
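A short sketch combining the typed accessors, assuming both the first input and the first output hold float data:

float* in = interpreter->typed_input_tensor<float>(0);
in[0] = 1.0f;  // ... fill the rest of the input ...
if (interpreter->Invoke() == kTfLiteOk) {
  float* out = interpreter->typed_output_tensor<float>(0);
  // Read results from out[...].
}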
typed_tensor
T * typed_tensor( int tensor_index )
Perform a checked cast to the appropriate tensor type (mutable pointer version).
typed_tensor
const T * typed_tensor( int tensor_index ) const
Perform a checked cast to the appropriate tensor type (immutable pointer version).
variables
const std::vector< int > & variables() const
Read-only access to the list of variable tensors.
~Interpreter
~Interpreter()