TensorFlowLiteSwift Framework Reference

Options

public struct Options : Equatable, Hashable

Options for configuring the Interpreter.

  • The maximum number of CPU threads that the interpreter should run on. The default is nil, indicating that the interpreter will decide the number of threads to use (see the usage sketch after this list).

    Declaration

    Swift

    public var threadCount: Int?
  • Indicates whether an optimized set of floating-point CPU kernels, provided by XNNPACK, is enabled.

    Experiment

    Enabling this flag allows the interpreter to use a new, highly optimized set of CPU kernels provided via the XNNPACK delegate. Currently, this is restricted to a subset of floating-point operations. Eventually, we plan to enable this by default, as it can provide significant performance benefits for many classes of floating-point models. See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/xnnpack/README.md for more details.

    Important

    Things to keep in mind when enabling this flag:

    • Startup time and resize time may increase.
    • Baseline memory consumption may increase.
    • Compatibility with other delegates (e.g., GPU) has not been fully validated.
    • Quantized models will not see any benefit.

    Warning

    This is an experimental interface that is subject to change.

    Declaration

    Swift

    public var isXNNPackEnabled: Bool
  • Creates a new instance with the default values.

    Declaration

    Swift

    public init()
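
Usage

A minimal sketch of how these options might be applied when constructing an Interpreter, assuming a model file named "model.tflite" is available to the app; the model path and thread count here are illustrative assumptions, not values prescribed by this reference.

Swift

import TensorFlowLite

// Configure the interpreter options.
var options = Interpreter.Options()
options.threadCount = 2           // Limit the interpreter to 2 CPU threads.
options.isXNNPackEnabled = true   // Opt in to the experimental XNNPACK CPU kernels.

do {
  // "model.tflite" is a placeholder path to a TensorFlow Lite model.
  let interpreter = try Interpreter(modelPath: "model.tflite", options: options)
  try interpreter.allocateTensors()
} catch {
  print("Failed to create the interpreter: \(error)")
}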