TensorFlow Lite Model Analyzer


The TensorFlow Lite Model Analyzer API helps you analyze models in TensorFlow Lite format by listing the model's structure.

Model Analyzer API

The following API is available for the TensorFlow Lite Model Analyzer.

tf.lite.experimental.Analyzer.analyze(model_path=None,
                                      model_content=None,
                                      gpu_compatibility=False)

You can find the API details at https://tensorflow.google.cn/api_docs/python/tf/lite/experimental/Analyzer, or run help(tf.lite.experimental.Analyzer.analyze) from a Python terminal.
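
As a quick reference, here is a minimal sketch of the two ways to hand a model to the analyzer; the tiny throwaway model and the file name 'my_model.tflite' are hypothetical and used only for illustration.

import tensorflow as tf

# Build and convert a tiny throwaway model so the sketch is self-contained.
tiny = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
fb = tf.lite.TFLiteConverter.from_keras_model(tiny).convert()

# Pass the serialized flatbuffer directly...
tf.lite.experimental.Analyzer.analyze(model_content=fb)

# ...or save it and point the analyzer at a .tflite file on disk
# ('my_model.tflite' is a hypothetical file name).
with open('my_model.tflite', 'wb') as f:
    f.write(fb)
tf.lite.experimental.Analyzer.analyze(model_path='my_model.tflite')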

Basic usage with a simple Keras model

The following code shows the basic usage of Model Analyzer. It displays the structure of the converted Keras model as stored in the TFLite model content, which is serialized as a FlatBuffer object.

import tensorflow as tf

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(128, 128)),
  tf.keras.layers.Dense(256, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10)
])

fb_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

tf.lite.experimental.Analyzer.analyze(model_content=fb_model)
2023-11-07 21:45:41.529687: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-11-07 21:45:41.529736: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-11-07 21:45:41.531432: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmp7hlu382v/assets
2023-11-07 21:45:46.682524: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:378] Ignored output_format.
2023-11-07 21:45:46.682561: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:381] Ignored drop_control_dependency.
Summary on the non-converted ops:
---------------------------------

 * Accepted dialects: tfl, builtin, func
 * Non-Converted Ops: 3, Total Ops 10, % non-converted = 30.00 %
 * 3 ARITH ops

- arith.constant:    3 occurrences  (f32: 2, i32: 1)



  (f32: 2)

  (f32: 1)
=== TFLite ModelAnalyzer ===

Your TFLite model has '1' subgraph(s). In the subgraph description below,
T# represents the Tensor numbers. For example, in Subgraph#0, the RESHAPE op takes
tensor #0 and tensor #1 as input and produces tensor #4 as output.

Subgraph#0 main(T#0) -> [T#6]
  Op#0 RESHAPE(T#0, T#1[-1, 16384]) -> [T#4]
  Op#1 FULLY_CONNECTED(T#4, T#2, T#-1) -> [T#5]
  Op#2 FULLY_CONNECTED(T#5, T#3, T#-1) -> [T#6]

Tensors of Subgraph#0
  T#0(serving_default_flatten_input:0) shape_signature:[-1, 128, 128], type:FLOAT32
  T#1(sequential/flatten/Const) shape:[2], type:INT32 RO 8 bytes, buffer: 2, data:[-1, 16384]
  T#2(sequential/dense/MatMul1) shape:[256, 16384], type:FLOAT32 RO 16777216 bytes, buffer: 3, data:[-0.00812459, -0.00690744, -0.00139531, 0.00898227, 0.000884177, ...]
  T#3(sequential/dense_1/MatMul) shape:[10, 256], type:FLOAT32 RO 10240 bytes, buffer: 4, data:[0.0648576, 0.128459, -0.0608204, -0.138041, 0.0238634, ...]
  T#4(sequential/flatten/Reshape) shape_signature:[-1, 16384], type:FLOAT32
  T#5(sequential/dense/MatMul;sequential/dense/Relu;sequential/dense/BiasAdd) shape_signature:[-1, 256], type:FLOAT32
  T#6(StatefulPartitionedCall:0) shape_signature:[-1, 10], type:FLOAT32

---------------------------------------------------------------
Your TFLite model has '1' signature_def(s).

Signature#0 key: 'serving_default'

- Subgraph: Subgraph#0
- Inputs: 
    'flatten_input' : T#0
- Outputs: 
    'dense_1' : T#6

---------------------------------------------------------------
              Model size:   16789044 bytes
    Non-data buffer size:       1476 bytes (00.01 %)
  Total data buffer size:   16787568 bytes (99.99 %)
    (Zero value buffers):          0 bytes (00.00 %)

* Buffers of TFLite model are mostly used for constant tensors.
  And zero value buffers are buffers filled with zeros.
  Non-data buffers area are used to store operators, subgraphs and etc.
  You can find more details from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs
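
The gpu_compatibility argument shown in the signature above can be enabled on the same flatbuffer to additionally flag ops that the GPU delegate may not accept; a minimal sketch, reusing fb_model from the example above:

tf.lite.experimental.Analyzer.analyze(model_content=fb_model,
                                      gpu_compatibility=True)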

Basic usage with the MobileNetV3Large Keras model

This API works with large models such as MobileNetV3Large. Since the output is very long, you may want to browse it with your favorite text editor; one way to capture the report into a file is sketched below.
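
Assuming the analyzer writes its report to standard output (which it does in current TensorFlow releases), a minimal sketch is to redirect that output into a text file; it reuses the fb_model produced by the conversion in the next cell, and the file name is a hypothetical choice.

import contextlib

# Capture the very long report into a file you can open in an editor.
with open('mobilenet_v3_analysis.txt', 'w') as f:
    with contextlib.redirect_stdout(f):
        tf.lite.experimental.Analyzer.analyze(model_content=fb_model)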

model = tf.keras.applications.MobileNetV3Large()
fb_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

tf.lite.experimental.Analyzer.analyze(model_content=fb_model)
WARNING:tensorflow:`input_shape` is undefined or non-square, or `rows` is not 224. Weights for input shape (224, 224) will be loaded as the default.
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/mobilenet_v3/weights_mobilenet_v3_large_224_1.0_float.h5
22661472/22661472 [==============================] - 0s 0us/step
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmp3vojvla1/assets
2023-11-07 21:46:12.998680: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:378] Ignored output_format.
2023-11-07 21:46:12.998732: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:381] Ignored drop_control_dependency.
Summary on the non-converted ops:
---------------------------------

 * Accepted dialects: tfl, builtin, func
 * Non-Converted Ops: 135, Total Ops 266, % non-converted = 50.75 %
 * 135 ARITH ops

- arith.constant:  135 occurrences  (f32: 131, i32: 4)



  (f32: 11)
  (f32: 49)
  (f32: 15)
  (f32: 21)
  (f32: 9)
  (f32: 17)
  (f32: 4)
  (f32: 1)
  (f32: 1)
=== TFLite ModelAnalyzer ===

Your TFLite model has '1' subgraph(s). In the subgraph description below,
T# represents the Tensor numbers. For example, in Subgraph#0, the MUL op takes
tensor #0 and tensor #133 as input and produces tensor #136 as output.

Subgraph#0 main(T#0) -> [T#263]
  Op#0 MUL(T#0, T#133) -> [T#136]
  Op#1 ADD(T#136, T#134) -> [T#137]
  Op#2 CONV_2D(T#137, T#80, T#37) -> [T#138]
  Op#3 HARD_SWISH(T#138) -> [T#139]
  Op#4 DEPTHWISE_CONV_2D(T#139, T#38, T#1) -> [T#140]
  Op#5 CONV_2D(T#140, T#81, T#39) -> [T#141]
  Op#6 ADD(T#139, T#141) -> [T#142]
  Op#7 CONV_2D(T#142, T#82, T#2) -> [T#143]
  Op#8 PAD(T#143, T#129[0, 0, 0, 1, 0, ...]) -> [T#144]
  Op#9 DEPTHWISE_CONV_2D(T#144, T#40, T#3) -> [T#145]
  Op#10 CONV_2D(T#145, T#83, T#41) -> [T#146]
  Op#11 CONV_2D(T#146, T#84, T#4) -> [T#147]
  Op#12 DEPTHWISE_CONV_2D(T#147, T#42, T#5) -> [T#148]
  Op#13 CONV_2D(T#148, T#85, T#43) -> [T#149]
  Op#14 ADD(T#146, T#149) -> [T#150]
  Op#15 CONV_2D(T#150, T#86, T#6) -> [T#151]
  Op#16 PAD(T#151, T#131[0, 0, 1, 2, 1, ...]) -> [T#152]
  Op#17 DEPTHWISE_CONV_2D(T#152, T#44, T#7) -> [T#153]
  Op#18 MEAN(T#153, T#130[1, 2]) -> [T#154]
  Op#19 CONV_2D(T#154, T#87, T#8) -> [T#155]
  Op#20 CONV_2D(T#155, T#88, T#9) -> [T#156]
  Op#21 MUL(T#156, T#135) -> [T#157]
  Op#22 MUL(T#153, T#157) -> [T#158]
  Op#23 CONV_2D(T#158, T#89, T#45) -> [T#159]
  Op#24 CONV_2D(T#159, T#90, T#10) -> [T#160]
  Op#25 DEPTHWISE_CONV_2D(T#160, T#46, T#11) -> [T#161]
  Op#26 MEAN(T#161, T#130[1, 2]) -> [T#162]
  Op#27 CONV_2D(T#162, T#91, T#12) -> [T#163]
  Op#28 CONV_2D(T#163, T#92, T#13) -> [T#164]
  Op#29 MUL(T#164, T#135) -> [T#165]
  Op#30 MUL(T#161, T#165) -> [T#166]
  Op#31 CONV_2D(T#166, T#93, T#47) -> [T#167]
  Op#32 ADD(T#159, T#167) -> [T#168]
  Op#33 CONV_2D(T#168, T#94, T#14) -> [T#169]
  Op#34 DEPTHWISE_CONV_2D(T#169, T#48, T#15) -> [T#170]
  Op#35 MEAN(T#170, T#130[1, 2]) -> [T#171]
  Op#36 CONV_2D(T#171, T#95, T#16) -> [T#172]
  Op#37 CONV_2D(T#172, T#96, T#17) -> [T#173]
  Op#38 MUL(T#173, T#135) -> [T#174]
  Op#39 MUL(T#170, T#174) -> [T#175]
  Op#40 CONV_2D(T#175, T#97, T#49) -> [T#176]
  Op#41 ADD(T#168, T#176) -> [T#177]
  Op#42 CONV_2D(T#177, T#98, T#50) -> [T#178]
  Op#43 HARD_SWISH(T#178) -> [T#179]
  Op#44 PAD(T#179, T#129[0, 0, 0, 1, 0, ...]) -> [T#180]
  Op#45 DEPTHWISE_CONV_2D(T#180, T#51, T#18) -> [T#181]
  Op#46 HARD_SWISH(T#181) -> [T#182]
  Op#47 CONV_2D(T#182, T#99, T#52) -> [T#183]
  Op#48 CONV_2D(T#183, T#100, T#53) -> [T#184]
  Op#49 HARD_SWISH(T#184) -> [T#185]
  Op#50 DEPTHWISE_CONV_2D(T#185, T#54, T#19) -> [T#186]
  Op#51 HARD_SWISH(T#186) -> [T#187]
  Op#52 CONV_2D(T#187, T#101, T#55) -> [T#188]
  Op#53 ADD(T#183, T#188) -> [T#189]
  Op#54 CONV_2D(T#189, T#102, T#56) -> [T#190]
  Op#55 HARD_SWISH(T#190) -> [T#191]
  Op#56 DEPTHWISE_CONV_2D(T#191, T#57, T#20) -> [T#192]
  Op#57 HARD_SWISH(T#192) -> [T#193]
  Op#58 CONV_2D(T#193, T#103, T#58) -> [T#194]
  Op#59 ADD(T#189, T#194) -> [T#195]
  Op#60 CONV_2D(T#195, T#104, T#59) -> [T#196]
  Op#61 HARD_SWISH(T#196) -> [T#197]
  Op#62 DEPTHWISE_CONV_2D(T#197, T#60, T#21) -> [T#198]
  Op#63 HARD_SWISH(T#198) -> [T#199]
  Op#64 CONV_2D(T#199, T#105, T#61) -> [T#200]
  Op#65 ADD(T#195, T#200) -> [T#201]
  Op#66 CONV_2D(T#201, T#106, T#62) -> [T#202]
  Op#67 HARD_SWISH(T#202) -> [T#203]
  Op#68 DEPTHWISE_CONV_2D(T#203, T#63, T#22) -> [T#204]
  Op#69 HARD_SWISH(T#204) -> [T#205]
  Op#70 MEAN(T#205, T#130[1, 2]) -> [T#206]
  Op#71 CONV_2D(T#206, T#107, T#23) -> [T#207]
  Op#72 CONV_2D(T#207, T#108, T#24) -> [T#208]
  Op#73 MUL(T#208, T#135) -> [T#209]
  Op#74 MUL(T#205, T#209) -> [T#210]
  Op#75 CONV_2D(T#210, T#109, T#64) -> [T#211]
  Op#76 CONV_2D(T#211, T#110, T#65) -> [T#212]
  Op#77 HARD_SWISH(T#212) -> [T#213]
  Op#78 DEPTHWISE_CONV_2D(T#213, T#66, T#25) -> [T#214]
  Op#79 HARD_SWISH(T#214) -> [T#215]
  Op#80 MEAN(T#215, T#130[1, 2]) -> [T#216]
  Op#81 CONV_2D(T#216, T#111, T#26) -> [T#217]
  Op#82 CONV_2D(T#217, T#112, T#27) -> [T#218]
  Op#83 MUL(T#218, T#135) -> [T#219]
  Op#84 MUL(T#215, T#219) -> [T#220]
  Op#85 CONV_2D(T#220, T#113, T#67) -> [T#221]
  Op#86 ADD(T#211, T#221) -> [T#222]
  Op#87 CONV_2D(T#222, T#114, T#68) -> [T#223]
  Op#88 HARD_SWISH(T#223) -> [T#224]
  Op#89 PAD(T#224, T#131[0, 0, 1, 2, 1, ...]) -> [T#225]
  Op#90 DEPTHWISE_CONV_2D(T#225, T#69, T#28) -> [T#226]
  Op#91 HARD_SWISH(T#226) -> [T#227]
  Op#92 MEAN(T#227, T#130[1, 2]) -> [T#228]
  Op#93 CONV_2D(T#228, T#115, T#29) -> [T#229]
  Op#94 CONV_2D(T#229, T#116, T#30) -> [T#230]
  Op#95 MUL(T#230, T#135) -> [T#231]
  Op#96 MUL(T#227, T#231) -> [T#232]
  Op#97 CONV_2D(T#232, T#117, T#70) -> [T#233]
  Op#98 CONV_2D(T#233, T#118, T#71) -> [T#234]
  Op#99 HARD_SWISH(T#234) -> [T#235]
  Op#100 DEPTHWISE_CONV_2D(T#235, T#72, T#31) -> [T#236]
  Op#101 HARD_SWISH(T#236) -> [T#237]
  Op#102 MEAN(T#237, T#130[1, 2]) -> [T#238]
  Op#103 CONV_2D(T#238, T#119, T#32) -> [T#239]
  Op#104 CONV_2D(T#239, T#120, T#33) -> [T#240]
  Op#105 MUL(T#240, T#135) -> [T#241]
  Op#106 MUL(T#237, T#241) -> [T#242]
  Op#107 CONV_2D(T#242, T#121, T#73) -> [T#243]
  Op#108 ADD(T#233, T#243) -> [T#244]
  Op#109 CONV_2D(T#244, T#122, T#74) -> [T#245]
  Op#110 HARD_SWISH(T#245) -> [T#246]
  Op#111 DEPTHWISE_CONV_2D(T#246, T#75, T#34) -> [T#247]
  Op#112 HARD_SWISH(T#247) -> [T#248]
  Op#113 MEAN(T#248, T#130[1, 2]) -> [T#249]
  Op#114 CONV_2D(T#249, T#123, T#35) -> [T#250]
  Op#115 CONV_2D(T#250, T#124, T#36) -> [T#251]
  Op#116 MUL(T#251, T#135) -> [T#252]
  Op#117 MUL(T#248, T#252) -> [T#253]
  Op#118 CONV_2D(T#253, T#125, T#76) -> [T#254]
  Op#119 ADD(T#244, T#254) -> [T#255]
  Op#120 CONV_2D(T#255, T#126, T#77) -> [T#256]
  Op#121 HARD_SWISH(T#256) -> [T#257]
  Op#122 MEAN(T#257, T#130[1, 2]) -> [T#258]
  Op#123 CONV_2D(T#258, T#127, T#78) -> [T#259]
  Op#124 HARD_SWISH(T#259) -> [T#260]
  Op#125 CONV_2D(T#260, T#128, T#79) -> [T#261]
  Op#126 RESHAPE(T#261, T#132[-1, 1000]) -> [T#262]
  Op#127 SOFTMAX(T#262) -> [T#263]

Tensors of Subgraph#0
  T#0(serving_default_input_1:0) shape_signature:[-1, -1, -1, 3], type:FLOAT32
  T#1(MobilenetV3large/expanded_conv/depthwise/BatchNorm/FusedBatchNormV3) shape:[16], type:FLOAT32 RO 64 bytes, buffer: 2, data:[1.62813, 33.7453, 4.72859, 8.78206, 17.5393, ...]
  T#2(MobilenetV3large/expanded_conv_1/expand/BatchNorm/FusedBatchNormV3) shape:[64], type:FLOAT32 RO 256 bytes, buffer: 3, data:[5.83326, 7.79689, 5.9951, -0.769312, 8.54113, ...]
  T#3(MobilenetV3large/expanded_conv_1/depthwise/BatchNorm/FusedBatchNormV3) shape:[64], type:FLOAT32 RO 256 bytes, buffer: 4, data:[6.24156, 0.981198, 2.53471, -0.0248699, 25.7691, ...]
  T#4(MobilenetV3large/expanded_conv_2/expand/BatchNorm/FusedBatchNormV3) shape:[72], type:FLOAT32 RO 288 bytes, buffer: 5, data:[3.17699, 2.28101, 1.58534, 2.71796, 1.68366, ...]
  T#5(MobilenetV3large/expanded_conv_2/depthwise/BatchNorm/FusedBatchNormV3) shape:[72], type:FLOAT32 RO 288 bytes, buffer: 6, data:[0.586533, 0.863577, 0.484086, -8.43705, 7.50718, ...]
  T#6(MobilenetV3large/expanded_conv_3/expand/BatchNorm/FusedBatchNormV3) shape:[72], type:FLOAT32 RO 288 bytes, buffer: 7, data:[-0.498766, -0.309574, 0.104518, 2.44678, 1.72927, ...]
  T#7(MobilenetV3large/expanded_conv_3/depthwise/BatchNorm/FusedBatchNormV3) shape:[72], type:FLOAT32 RO 288 bytes, buffer: 8, data:[1.70499, 18.0012, 1.05503, 10.0129, -2.74094, ...]
  T#8(MobilenetV3large/expanded_conv_3/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape:[24], type:FLOAT32 RO 96 bytes, buffer: 9, data:[1.14102, -0.02167, -0.01928, -0.0118068, 0.218227, ...]
  T#9(MobilenetV3large/re_lu_8/Relu6;MobilenetV3large/tf.__operators__.add_1/AddV2;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y) shape:[72], type:FLOAT32 RO 288 bytes, buffer: 10, data:[5.06759, 6.06202, 5.33617, 6.0275, 4.7227, ...]
  T#10(MobilenetV3large/expanded_conv_4/expand/BatchNorm/FusedBatchNormV3) shape:[120], type:FLOAT32 RO 480 bytes, buffer: 11, data:[3.30378, 2.60396, 2.83121, -4.14912, 2.59554, ...]
  T#11(MobilenetV3large/expanded_conv_4/depthwise/BatchNorm/FusedBatchNormV3) shape:[120], type:FLOAT32 RO 480 bytes, buffer: 12, data:[-0.219226, 0.464636, -0.288737, -2.38097, -0.334142, ...]
  T#12(MobilenetV3large/expanded_conv_4/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape:[32], type:FLOAT32 RO 128 bytes, buffer: 13, data:[-0.0122205, 1.39665, 0.193353, 1.20499, -0.000705811, ...]
  T#13(MobilenetV3large/re_lu_11/Relu6;MobilenetV3large/tf.__operators__.add_2/AddV2;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y) shape:[120], type:FLOAT32 RO 480 bytes, buffer: 14, data:[3.12207, 4.98045, 2.80049, 2.3461, 3.47311, ...]
  T#14(MobilenetV3large/expanded_conv_5/expand/BatchNorm/FusedBatchNormV3) shape:[120], type:FLOAT32 RO 480 bytes, buffer: 15, data:[1.19221, -1.76372, 2.7938, 3.13965, -0.732204, ...]
  T#15(MobilenetV3large/expanded_conv_5/depthwise/BatchNorm/FusedBatchNormV3) shape:[120], type:FLOAT32 RO 480 bytes, buffer: 16, data:[-2.55795, -2.85519, -0.168461, 3.99681, -2.29523, ...]
  T#16(MobilenetV3large/expanded_conv_5/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape:[32], type:FLOAT32 RO 128 bytes, buffer: 17, data:[0.920288, -0.00382053, -0.0567493, 1.97454, 3.35371, ...]
  T#17(MobilenetV3large/re_lu_14/Relu6;MobilenetV3large/tf.__operators__.add_3/AddV2;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y) shape:[120], type:FLOAT32 RO 480 bytes, buffer: 18, data:[1.03397, -0.18951, 3.24036, 1.176, 2.22316, ...]
  T#18(MobilenetV3large/expanded_conv_6/depthwise/BatchNorm/FusedBatchNormV3) shape:[240], type:FLOAT32 RO 960 bytes, buffer: 19, data:[2.15248, 1.62511, 4.58976, 2.86807, 1.67084, ...]
  T#19(MobilenetV3large/expanded_conv_7/depthwise/BatchNorm/FusedBatchNormV3) shape:[200], type:FLOAT32 RO 800 bytes, buffer: 20, data:[-1.90742, -1.52078, 4.21308, -1.51046, -1.52174, ...]
  T#20(MobilenetV3large/expanded_conv_8/depthwise/BatchNorm/FusedBatchNormV3) shape:[184], type:FLOAT32 RO 736 bytes, buffer: 21, data:[-2.47649, -2.20832, -1.40136, -0.623928, -1.61101, ...]
  T#21(MobilenetV3large/expanded_conv_9/depthwise/BatchNorm/FusedBatchNormV3) shape:[184], type:FLOAT32 RO 736 bytes, buffer: 22, data:[-1.82527, -1.90425, -0.864828, -1.20905, 1.78948, ...]
  T#22(MobilenetV3large/expanded_conv_10/depthwise/BatchNorm/FusedBatchNormV3) shape:[480], type:FLOAT32 RO 1920 bytes, buffer: 23, data:[-1.14594, -1.2222, 0.493229, -0.806949, -0.123236, ...]
  T#23(MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape:[120], type:FLOAT32 RO 480 bytes, buffer: 24, data:[0.162616, 0.0211225, -0.00731861, 0.275613, 0.465336, ...]
  T#24(MobilenetV3large/re_lu_25/Relu6;MobilenetV3large/tf.__operators__.add_14/AddV2;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y) shape:[480], type:FLOAT32 RO 1920 bytes, buffer: 25, data:[0.765333, 0.628963, 5.4054, 4.91936, 2.86523, ...]
  T#25(MobilenetV3large/expanded_conv_11/depthwise/BatchNorm/FusedBatchNormV3) shape:[672], type:FLOAT32 RO 2688 bytes, buffer: 26, data:[-2.30358, -1.0415, -1.02916, -2.42349, -0.143203, ...]
  T#26(MobilenetV3large/expanded_conv_11/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape:[168], type:FLOAT32 RO 672 bytes, buffer: 27, data:[-0.0489284, 0.178251, -0.0412987, -0.205209, 0.0695921, ...]
  T#27(MobilenetV3large/re_lu_28/Relu6;MobilenetV3large/tf.__operators__.add_17/AddV2;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y) shape:[672], type:FLOAT32 RO 2688 bytes, buffer: 28, data:[0.291311, 1.62599, 0.179997, 0.249016, 2.76901, ...]
  T#28(MobilenetV3large/expanded_conv_12/depthwise/BatchNorm/FusedBatchNormV3) shape:[672], type:FLOAT32 RO 2688 bytes, buffer: 29, data:[1.35255, 0.0874219, 0.716237, 0.865584, 1.82332, ...]
  T#29(MobilenetV3large/expanded_conv_12/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape:[168], type:FLOAT32 RO 672 bytes, buffer: 30, data:[-0.499907, 0.0375283, -0.0576132, -0.243811, -0.391691, ...]
  T#30(MobilenetV3large/re_lu_31/Relu6;MobilenetV3large/tf.__operators__.add_20/AddV2;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y) shape:[672], type:FLOAT32 RO 2688 bytes, buffer: 31, data:[2.06113, 0.736983, 4.40858, 2.36386, 0.687798, ...]
  T#31(MobilenetV3large/expanded_conv_13/depthwise/BatchNorm/FusedBatchNormV3) shape:[960], type:FLOAT32 RO 3840 bytes, buffer: 32, data:[-1.22443, -0.854031, 1.91604, -3.2009, 0.110498, ...]
  T#32(MobilenetV3large/expanded_conv_13/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape:[240], type:FLOAT32 RO 960 bytes, buffer: 33, data:[-0.295283, -0.171183, -0.491539, -0.201764, -0.0582549, ...]
  T#33(MobilenetV3large/re_lu_34/Relu6;MobilenetV3large/tf.__operators__.add_23/AddV2;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/Conv_1/Conv2D;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y) shape:[960], type:FLOAT32 RO 3840 bytes, buffer: 34, data:[0.195665, 0.217341, 0.114345, -0.0316076, 0.281505, ...]
  T#34(MobilenetV3large/expanded_conv_14/depthwise/BatchNorm/FusedBatchNormV3) shape:[960], type:FLOAT32 RO 3840 bytes, buffer: 35, data:[-1.81109, 1.68503, 1.58476, 1.70023, 0.342517, ...]
  T#35(MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape:[240], type:FLOAT32 RO 960 bytes, buffer: 36, data:[-0.275301, -0.0277678, -0.411228, -0.3586, -0.220745, ...]
  T#36(MobilenetV3large/re_lu_37/Relu6;MobilenetV3large/tf.__operators__.add_26/AddV2;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/Conv_1/Conv2D;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y) shape:[960], type:FLOAT32 RO 3840 bytes, buffer: 37, data:[0.283771, 0.24407, 0.243922, 1.221, 0.460753, ...]
  T#37(MobilenetV3large/Conv/BatchNorm/FusedBatchNormV3) shape:[16], type:FLOAT32 RO 64 bytes, buffer: 38, data:[26.8229, 27.4359, 2.7004, 6.57344, 25.2757, ...]
  T#38(MobilenetV3large/expanded_conv/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv/depthwise/depthwise;MobilenetV3large/expanded_conv/project/Conv2D) shape:[1, 3, 3, 16], type:FLOAT32 RO 576 bytes, buffer: 39, data:[1.22061, -0.810988, -0.595521, -0.12323, 0.128769, ...]
  T#39(MobilenetV3large/expanded_conv/project/BatchNorm/FusedBatchNormV3) shape:[16], type:FLOAT32 RO 64 bytes, buffer: 40, data:[-0.0141129, 49.9822, 9.52096, -9.69061, -4.32951, ...]
  T#40(MobilenetV3large/expanded_conv_1/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_1/depthwise/depthwise) shape:[1, 3, 3, 64], type:FLOAT32 RO 2304 bytes, buffer: 41, data:[-7.61981, 0.609866, -0.72154, 1.24176, -0.446165, ...]
  T#41(MobilenetV3large/expanded_conv_1/project/BatchNorm/FusedBatchNormV3) shape:[24], type:FLOAT32 RO 96 bytes, buffer: 42, data:[29.5271, -13.7881, -51.1199, 3.50073, -7.02167, ...]
  T#42(MobilenetV3large/expanded_conv_2/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_2/depthwise/depthwise;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D) shape:[1, 3, 3, 72], type:FLOAT32 RO 2592 bytes, buffer: 43, data:[-0.196813, -0.0441104, -0.806084, 0.0801485, 0.182848, ...]
  T#43(MobilenetV3large/expanded_conv_2/project/BatchNorm/FusedBatchNormV3) shape:[24], type:FLOAT32 RO 96 bytes, buffer: 44, data:[-35.7347, 31.8145, 7.77917, 11.8099, 10.6855, ...]
  T#44(MobilenetV3large/expanded_conv_3/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_3/depthwise/depthwise;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D) shape:[1, 5, 5, 72], type:FLOAT32 RO 7200 bytes, buffer: 45, data:[0.0879386, -0.0954128, 0.0937833, -0.0427546, -0.253503, ...]
  T#45(MobilenetV3large/expanded_conv_3/project/BatchNorm/FusedBatchNormV3) shape:[40], type:FLOAT32 RO 160 bytes, buffer: 46, data:[-21.1494, -0.469508, 14.1144, -5.10523, -9.47186, ...]
  T#46(MobilenetV3large/expanded_conv_4/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_4/depthwise/depthwise;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D) shape:[1, 5, 5, 120], type:FLOAT32 RO 12000 bytes, buffer: 47, data:[0.0256352, 0.00875715, -0.00830248, 0.0426244, 0.00442088, ...]
  T#47(MobilenetV3large/expanded_conv_4/project/BatchNorm/FusedBatchNormV3) shape:[40], type:FLOAT32 RO 160 bytes, buffer: 48, data:[-7.63688, 1.21586, -22.5861, 0.739685, -3.0402, ...]
  T#48(MobilenetV3large/expanded_conv_5/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_5/depthwise/depthwise;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D) shape:[1, 5, 5, 120], type:FLOAT32 RO 12000 bytes, buffer: 49, data:[-0.0563561, -0.887496, 0.0099917, 0.166615, 0.101625, ...]
  T#49(MobilenetV3large/expanded_conv_5/project/BatchNorm/FusedBatchNormV3) shape:[40], type:FLOAT32 RO 160 bytes, buffer: 50, data:[-3.93807, 1.26509, -0.947863, 31.8655, 3.26632, ...]
  T#50(MobilenetV3large/expanded_conv_6/expand/BatchNorm/FusedBatchNormV3) shape:[240], type:FLOAT32 RO 960 bytes, buffer: 51, data:[-3.03785, -3.20833, -1.26339, -0.875435, -0.410649, ...]
  T#51(MobilenetV3large/expanded_conv_6/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_6/depthwise/depthwise;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/Conv2D) shape:[1, 3, 3, 240], type:FLOAT32 RO 8640 bytes, buffer: 52, data:[0.507291, 0.915944, 0.881445, 0.338672, -0.261484, ...]
  T#52(MobilenetV3large/expanded_conv_6/project/BatchNorm/FusedBatchNormV3) shape:[80], type:FLOAT32 RO 320 bytes, buffer: 53, data:[-15.6729, 13.1146, 9.85148, 15.7407, -16.4922, ...]
  T#53(MobilenetV3large/expanded_conv_7/expand/BatchNorm/FusedBatchNormV3) shape:[200], type:FLOAT32 RO 800 bytes, buffer: 54, data:[-0.0180047, 0.000351542, 2.84978, 0.00512768, -0.0474478, ...]
  T#54(MobilenetV3large/expanded_conv_7/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_7/depthwise/depthwise) shape:[1, 3, 3, 200], type:FLOAT32 RO 7200 bytes, buffer: 55, data:[-0.0930532, 1.35916, 0.0699976, 2.08309, -0.714721, ...]
  T#55(MobilenetV3large/expanded_conv_7/project/BatchNorm/FusedBatchNormV3) shape:[80], type:FLOAT32 RO 320 bytes, buffer: 56, data:[2.75995, -0.155923, 2.06222, -4.97617, -12.3297, ...]
  T#56(MobilenetV3large/expanded_conv_8/expand/BatchNorm/FusedBatchNormV3) shape:[184], type:FLOAT32 RO 736 bytes, buffer: 57, data:[1.78543, 1.23138, -0.31343, -2.65884, 2.16531, ...]
  T#57(MobilenetV3large/expanded_conv_8/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_8/depthwise/depthwise;MobilenetV3large/expanded_conv_9/depthwise/depthwise) shape:[1, 3, 3, 184], type:FLOAT32 RO 6624 bytes, buffer: 58, data:[0.186774, 0.198745, -0.694211, 0.182543, -0.045065, ...]
  T#58(MobilenetV3large/expanded_conv_8/project/BatchNorm/FusedBatchNormV3) shape:[80], type:FLOAT32 RO 320 bytes, buffer: 59, data:[-1.10751, -3.36157, 0.340627, 2.23085, -0.46187, ...]
  T#59(MobilenetV3large/expanded_conv_9/expand/BatchNorm/FusedBatchNormV3) shape:[184], type:FLOAT32 RO 736 bytes, buffer: 60, data:[0.213268, 0.0483445, -0.11253, 0.0761342, -1.73988, ...]
  T#60(MobilenetV3large/expanded_conv_9/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/depthwise/depthwise) shape:[1, 3, 3, 184], type:FLOAT32 RO 6624 bytes, buffer: 61, data:[4.97419, -6.57637, 0.814417, 1.46725, 0.457797, ...]
  T#61(MobilenetV3large/expanded_conv_9/project/BatchNorm/FusedBatchNormV3) shape:[80], type:FLOAT32 RO 320 bytes, buffer: 62, data:[-0.435609, -2.97176, 2.74412, -6.65204, 10.2386, ...]
  T#62(MobilenetV3large/expanded_conv_10/expand/BatchNorm/FusedBatchNormV3) shape:[480], type:FLOAT32 RO 1920 bytes, buffer: 63, data:[2.50857, 0.0973693, -0.563608, -1.45203, 3.44066, ...]
  T#63(MobilenetV3large/expanded_conv_10/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_10/depthwise/depthwise;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/Conv2D) shape:[1, 3, 3, 480], type:FLOAT32 RO 17280 bytes, buffer: 64, data:[-0.0212238, 6.44594, 0.0537825, 0.22657, -0.0316337, ...]
  T#64(MobilenetV3large/expanded_conv_10/project/BatchNorm/FusedBatchNormV3) shape:[112], type:FLOAT32 RO 448 bytes, buffer: 65, data:[-4.96088, -3.39939, 4.19718, 2.48631, -1.34157, ...]
  T#65(MobilenetV3large/expanded_conv_11/expand/BatchNorm/FusedBatchNormV3) shape:[672], type:FLOAT32 RO 2688 bytes, buffer: 66, data:[2.41973, -1.5073, -0.00963159, -0.640254, 0.684952, ...]
  T#66(MobilenetV3large/expanded_conv_11/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_11/depthwise/depthwise;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D) shape:[1, 3, 3, 672], type:FLOAT32 RO 24192 bytes, buffer: 67, data:[0.0253853, 0.0641128, 1.57708, 0.0533236, -0.00350431, ...]
  T#67(MobilenetV3large/expanded_conv_11/project/BatchNorm/FusedBatchNormV3) shape:[112], type:FLOAT32 RO 448 bytes, buffer: 68, data:[0.365523, 1.10257, -1.63187, 0.706468, 0.487061, ...]
  T#68(MobilenetV3large/expanded_conv_12/expand/BatchNorm/FusedBatchNormV3) shape:[672], type:FLOAT32 RO 2688 bytes, buffer: 69, data:[0.573135, -0.726054, 0.0182186, -0.206486, -1.48872, ...]
  T#69(MobilenetV3large/expanded_conv_12/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_12/depthwise/depthwise;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D) shape:[1, 5, 5, 672], type:FLOAT32 RO 67200 bytes, buffer: 70, data:[-0.00694154, 0.0356305, -0.195693, -0.0262144, 0.114805, ...]
  T#70(MobilenetV3large/expanded_conv_12/project/BatchNorm/FusedBatchNormV3) shape:[160], type:FLOAT32 RO 640 bytes, buffer: 71, data:[-3.44684, -0.768017, -0.969108, 1.23336, -2.86966, ...]
  T#71(MobilenetV3large/expanded_conv_13/expand/BatchNorm/FusedBatchNormV3) shape:[960], type:FLOAT32 RO 3840 bytes, buffer: 72, data:[-0.0174746, 0.0162077, -1.22728, 0.279187, -0.554711, ...]
  T#72(MobilenetV3large/expanded_conv_13/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_13/depthwise/depthwise;MobilenetV3large/Conv_1/Conv2D) shape:[1, 5, 5, 960], type:FLOAT32 RO 96000 bytes, buffer: 73, data:[4.83438, -1.51938, -0.324659, -0.391306, -0.01447, ...]
  T#73(MobilenetV3large/expanded_conv_13/project/BatchNorm/FusedBatchNormV3) shape:[160], type:FLOAT32 RO 640 bytes, buffer: 74, data:[1.56244, -9.28569, -6.53591, 2.84496, -5.46389, ...]
  T#74(MobilenetV3large/expanded_conv_14/expand/BatchNorm/FusedBatchNormV3) shape:[960], type:FLOAT32 RO 3840 bytes, buffer: 75, data:[-0.213042, -1.64993, -1.58605, 3.29836, -0.697594, ...]
  T#75(MobilenetV3large/expanded_conv_14/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_14/depthwise/depthwise;MobilenetV3large/Conv_1/Conv2D) shape:[1, 5, 5, 960], type:FLOAT32 RO 96000 bytes, buffer: 76, data:[-0.152749, -0.152966, 0.23392, 0.00429554, -0.286706, ...]
  T#76(MobilenetV3large/expanded_conv_14/project/BatchNorm/FusedBatchNormV3) shape:[160], type:FLOAT32 RO 640 bytes, buffer: 77, data:[-1.10576, -6.84556, -0.464385, 3.1173, -3.98359, ...]
  T#77(MobilenetV3large/Conv_1/BatchNorm/FusedBatchNormV3) shape:[960], type:FLOAT32 RO 3840 bytes, buffer: 78, data:[-16.2774, -9.93764, -6.33422, -11.2148, -6.04787, ...]
  T#78(MobilenetV3large/Conv_2/BiasAdd/ReadVariableOp) shape:[1280], type:FLOAT32 RO 5120 bytes, buffer: 79, data:[0.144575, 0.590702, 0.13199, 0.725449, -0.299175, ...]
  T#79(MobilenetV3large/Logits/BiasAdd/ReadVariableOp) shape:[1000], type:FLOAT32 RO 4000 bytes, buffer: 80, data:[-0.073695, -0.0658332, -0.00686596, 0.0479387, 0.0198878, ...]
  T#80(MobilenetV3large/Conv/Conv2D) shape:[16, 3, 3, 3], type:FLOAT32 RO 1728 bytes, buffer: 81, data:[2.35286, -1.02746, -1.03095, 3.50268, -1.58557, ...]
  T#81(MobilenetV3large/expanded_conv/project/Conv2D) shape:[16, 1, 1, 16], type:FLOAT32 RO 1024 bytes, buffer: 82, data:[1.92682e-06, -1.08435e-05, -4.07661e-05, 7.29718e-05, -4.40744e-07, ...]
  T#82(MobilenetV3large/expanded_conv_1/expand/Conv2D) shape:[64, 1, 1, 16], type:FLOAT32 RO 4096 bytes, buffer: 83, data:[-0.000504434, 0.00025894, -0.000541954, 0.00443714, 0.00453425, ...]
  T#83(MobilenetV3large/expanded_conv_1/project/Conv2D) shape:[24, 1, 1, 64], type:FLOAT32 RO 6144 bytes, buffer: 84, data:[-0.0101949, 0.105657, -0.0099966, -0.140536, 0.0846852, ...]
  T#84(MobilenetV3large/expanded_conv_2/expand/Conv2D) shape:[72, 1, 1, 24], type:FLOAT32 RO 6912 bytes, buffer: 85, data:[-0.135531, 0.0593861, -0.00241981, 0.0486889, 0.00179526, ...]
  T#85(MobilenetV3large/expanded_conv_2/project/Conv2D) shape:[24, 1, 1, 72], type:FLOAT32 RO 6912 bytes, buffer: 86, data:[-0.291106, 0.053235, 0.518672, -1.19898, 0.418507, ...]
  T#86(MobilenetV3large/expanded_conv_3/expand/Conv2D) shape:[72, 1, 1, 24], type:FLOAT32 RO 6912 bytes, buffer: 87, data:[0.0293405, -0.0246265, 0.0406672, -0.019213, 0.0562144, ...]
  T#87(MobilenetV3large/expanded_conv_3/squeeze_excite/Conv/Conv2D) shape:[24, 1, 1, 72], type:FLOAT32 RO 6912 bytes, buffer: 88, data:[-0.00278714, -0.00356496, -0.00289361, -0.00207177, -0.000909253, ...]
  T#88(MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D) shape:[72, 1, 1, 24], type:FLOAT32 RO 6912 bytes, buffer: 89, data:[0.00195266, 1.24883e-32, 1.24923e-32, 1.2486e-32, 0.000132618, ...]
  T#89(MobilenetV3large/expanded_conv_3/project/Conv2D) shape:[40, 1, 1, 72], type:FLOAT32 RO 11520 bytes, buffer: 90, data:[0.00945878, -0.0363797, 0.0769495, 0.00295284, 0.0174633, ...]
  T#90(MobilenetV3large/expanded_conv_4/expand/Conv2D) shape:[120, 1, 1, 40], type:FLOAT32 RO 19200 bytes, buffer: 91, data:[0.0915326, -0.0077282, 0.0208179, 0.013002, -0.0156502, ...]
  T#91(MobilenetV3large/expanded_conv_4/squeeze_excite/Conv/Conv2D) shape:[32, 1, 1, 120], type:FLOAT32 RO 15360 bytes, buffer: 92, data:[-1.27762e-32, -1.26314e-32, -1.26755e-32, 1.25007e-32, -1.25637e-32, ...]
  T#92(MobilenetV3large/expanded_conv_4/squeeze_excite/Conv_1/Conv2D) shape:[120, 1, 1, 32], type:FLOAT32 RO 15360 bytes, buffer: 93, data:[-1.24865e-32, 0.0671542, 0.0129759, 0.0901768, -1.7092e-28, ...]
  T#93(MobilenetV3large/expanded_conv_4/project/Conv2D) shape:[40, 1, 1, 120], type:FLOAT32 RO 19200 bytes, buffer: 94, data:[-1.25906, 0.497161, -0.0669599, 0.622439, -0.657298, ...]
  T#94(MobilenetV3large/expanded_conv_5/expand/Conv2D) shape:[120, 1, 1, 40], type:FLOAT32 RO 19200 bytes, buffer: 95, data:[0.00788043, -0.00202614, 0.0314182, 0.00642282, 0.0341095, ...]
  T#95(MobilenetV3large/expanded_conv_5/squeeze_excite/Conv/Conv2D) shape:[32, 1, 1, 120], type:FLOAT32 RO 15360 bytes, buffer: 96, data:[0.0396201, 0.0105455, 0.0124103, 0.0153921, 0.0817555, ...]
  T#96(MobilenetV3large/expanded_conv_5/squeeze_excite/Conv_1/Conv2D) shape:[120, 1, 1, 32], type:FLOAT32 RO 15360 bytes, buffer: 97, data:[0.0233749, 1.24947e-32, 1.26503e-32, 0.0765244, -0.00840057, ...]
  T#97(MobilenetV3large/expanded_conv_5/project/Conv2D) shape:[40, 1, 1, 120], type:FLOAT32 RO 19200 bytes, buffer: 98, data:[-0.437176, 0.171119, 0.225141, -0.0630487, 1.52885, ...]
  T#98(MobilenetV3large/expanded_conv_6/expand/Conv2D) shape:[240, 1, 1, 40], type:FLOAT32 RO 38400 bytes, buffer: 99, data:[-0.0238429, 0.00749496, 0.0132094, -0.0011158, 0.00737228, ...]
  T#99(MobilenetV3large/expanded_conv_6/project/Conv2D) shape:[80, 1, 1, 240], type:FLOAT32 RO 76800 bytes, buffer: 100, data:[0.219832, 0.0737946, 0.457842, 0.671469, -0.385924, ...]
  T#100(MobilenetV3large/expanded_conv_7/expand/Conv2D) shape:[200, 1, 1, 80], type:FLOAT32 RO 64000 bytes, buffer: 101, data:[-0.00231777, -0.000986836, 4.6371e-05, -0.00405917, -0.00202406, ...]
  T#101(MobilenetV3large/expanded_conv_7/project/Conv2D) shape:[80, 1, 1, 200], type:FLOAT32 RO 64000 bytes, buffer: 102, data:[0.306236, -0.0031209, 0.0347371, -0.0932839, 0.142599, ...]
  T#102(MobilenetV3large/expanded_conv_8/expand/Conv2D) shape:[184, 1, 1, 80], type:FLOAT32 RO 58880 bytes, buffer: 103, data:[-0.000389813, -0.0035757, -0.000356501, -0.00755378, 0.0180794, ...]
  T#103(MobilenetV3large/expanded_conv_8/project/Conv2D) shape:[80, 1, 1, 184], type:FLOAT32 RO 58880 bytes, buffer: 104, data:[-0.155852, 0.237999, 0.816957, 0.13733, 0.384849, ...]
  T#104(MobilenetV3large/expanded_conv_9/expand/Conv2D) shape:[184, 1, 1, 80], type:FLOAT32 RO 58880 bytes, buffer: 105, data:[-0.0015102, 0.000761898, -0.000109779, 0.000520086, -0.00139291, ...]
  T#105(MobilenetV3large/expanded_conv_9/project/Conv2D) shape:[80, 1, 1, 184], type:FLOAT32 RO 58880 bytes, buffer: 106, data:[0.093478, -0.0599167, 0.0303901, 0.131994, 0.190089, ...]
  T#106(MobilenetV3large/expanded_conv_10/expand/Conv2D) shape:[480, 1, 1, 80], type:FLOAT32 RO 153600 bytes, buffer: 107, data:[-0.00826612, 0.0499581, 0.0647706, 0.0257538, -0.00146656, ...]
  T#107(MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D) shape:[120, 1, 1, 480], type:FLOAT32 RO 230400 bytes, buffer: 108, data:[0.0232848, -0.0156344, 0.0118119, 0.00698492, 0.0173483, ...]
  T#108(MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/Conv2D) shape:[480, 1, 1, 120], type:FLOAT32 RO 230400 bytes, buffer: 109, data:[-0.120188, 0.0748747, -0.183427, 0.0475327, -0.00263915, ...]
  T#109(MobilenetV3large/expanded_conv_10/project/Conv2D) shape:[112, 1, 1, 480], type:FLOAT32 RO 215040 bytes, buffer: 110, data:[-3.1418, 0.424589, 1.22596, -0.396264, 4.54748, ...]
  T#110(MobilenetV3large/expanded_conv_11/expand/Conv2D) shape:[672, 1, 1, 112], type:FLOAT32 RO 301056 bytes, buffer: 111, data:[-0.00188285, -0.0111014, -0.044308, -0.0045087, 0.0132006, ...]
  T#111(MobilenetV3large/expanded_conv_11/squeeze_excite/Conv/Conv2D) shape:[168, 1, 1, 672], type:FLOAT32 RO 451584 bytes, buffer: 112, data:[-0.0403199, 0.0110284, -3.64906e-05, 0.052037, 0.00120152, ...]
  T#112(MobilenetV3large/expanded_conv_11/squeeze_excite/Conv_1/Conv2D) shape:[672, 1, 1, 168], type:FLOAT32 RO 451584 bytes, buffer: 113, data:[0.312729, -0.0241425, 0.0155115, -0.0770605, 0.0287806, ...]
  T#113(MobilenetV3large/expanded_conv_11/project/Conv2D) shape:[112, 1, 1, 672], type:FLOAT32 RO 301056 bytes, buffer: 114, data:[-0.565866, -1.61402, -0.562591, 3.13697, -2.86662, ...]
  T#114(MobilenetV3large/expanded_conv_12/expand/Conv2D) shape:[672, 1, 1, 112], type:FLOAT32 RO 301056 bytes, buffer: 115, data:[0.0149865, 0.0068462, 0.0173924, -0.00532784, -0.00565932, ...]
  T#115(MobilenetV3large/expanded_conv_12/squeeze_excite/Conv/Conv2D) shape:[168, 1, 1, 672], type:FLOAT32 RO 451584 bytes, buffer: 116, data:[0.0148519, -0.00466832, 0.00745004, 0.00458578, 0.0245794, ...]
  T#116(MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D) shape:[672, 1, 1, 168], type:FLOAT32 RO 451584 bytes, buffer: 117, data:[-0.0652855, -0.19256, 0.0154229, -0.0401333, 0.0346401, ...]
  T#117(MobilenetV3large/expanded_conv_12/project/Conv2D) shape:[160, 1, 1, 672], type:FLOAT32 RO 430080 bytes, buffer: 118, data:[2.30253, -1.31009, 0.118996, 1.40242, 1.30476, ...]
  T#118(MobilenetV3large/expanded_conv_13/expand/Conv2D) shape:[960, 1, 1, 160], type:FLOAT32 RO 614400 bytes, buffer: 119, data:[0.00076681, -0.000431589, -0.000944783, 0.00120458, 0.00134008, ...]
  T#119(MobilenetV3large/expanded_conv_13/squeeze_excite/Conv/Conv2D) shape:[240, 1, 1, 960], type:FLOAT32 RO 921600 bytes, buffer: 120, data:[0.0192977, -0.0183088, 0.168897, -0.0208883, -0.0152427, ...]
  T#120(MobilenetV3large/expanded_conv_13/squeeze_excite/Conv_1/Conv2D) shape:[960, 1, 1, 240], type:FLOAT32 RO 921600 bytes, buffer: 121, data:[-0.000973725, -0.000458092, -0.0380375, -0.00309571, 0.0262516, ...]
  T#121(MobilenetV3large/expanded_conv_13/project/Conv2D) shape:[160, 1, 1, 960], type:FLOAT32 RO 614400 bytes, buffer: 122, data:[-0.549766, -6.35918, -2.60246, -5.68154, -1.48906, ...]
  T#122(MobilenetV3large/expanded_conv_14/expand/Conv2D) shape:[960, 1, 1, 160], type:FLOAT32 RO 614400 bytes, buffer: 123, data:[-0.00653375, 0.00786634, -0.0076777, 0.00238527, 0.00558404, ...]
  T#123(MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/Conv2D) shape:[240, 1, 1, 960], type:FLOAT32 RO 921600 bytes, buffer: 124, data:[-0.0837548, -0.0823375, -0.0502755, 0.00375071, 0.0204295, ...]
  T#124(MobilenetV3large/expanded_conv_14/squeeze_excite/Conv_1/Conv2D) shape:[960, 1, 1, 240], type:FLOAT32 RO 921600 bytes, buffer: 125, data:[-0.0611927, -0.0151269, -0.0345105, -0.0179798, 0.0131835, ...]
  T#125(MobilenetV3large/expanded_conv_14/project/Conv2D) shape:[160, 1, 1, 960], type:FLOAT32 RO 614400 bytes, buffer: 126, data:[-2.50493, -3.6528, -3.30272, 0.205339, -1.86695, ...]
  T#126(MobilenetV3large/Conv_1/Conv2D) shape:[960, 1, 1, 160], type:FLOAT32 RO 614400 bytes, buffer: 127, data:[0.0500849, -0.0546926, 0.0198143, -0.0325645, -0.219457, ...]
  T#127(MobilenetV3large/Conv_2/Conv2D) shape:[1280, 1, 1, 960], type:FLOAT32 RO 4915200 bytes, buffer: 128, data:[0.00633656, -0.0184516, -0.0124931, -0.0359429, 0.00918523, ...]
  T#128(MobilenetV3large/Logits/Conv2D) shape:[1000, 1, 1, 1280], type:FLOAT32 RO 5120000 bytes, buffer: 129, data:[0.0232807, -0.0139387, -0.0507069, 0.0257928, -0.0243703, ...]
  T#129(MobilenetV3large/expanded_conv_1/depthwise/pad/Pad/paddings) shape:[4, 2], type:INT32 RO 32 bytes, buffer: 130, data:[0, 0, 0, 1, 0, ...]
  T#130(MobilenetV3large/expanded_conv_10/squeeze_excite/AvgPool/Mean/reduction_indices) shape:[2], type:INT32 RO 8 bytes, buffer: 131, data:[1, 2]
  T#131(MobilenetV3large/expanded_conv_12/depthwise/pad/Pad/paddings) shape:[4, 2], type:INT32 RO 32 bytes, buffer: 132, data:[0, 0, 1, 2, 1, ...]
  T#132(MobilenetV3large/flatten_1/Const) shape:[2], type:INT32 RO 8 bytes, buffer: 133, data:[-1, 1000]
  T#133(MobilenetV3large/rescaling/Cast/x) shape:[], type:FLOAT32 RO 4 bytes, buffer: 134, data:[0.00784314]
  T#134(MobilenetV3large/rescaling/Cast_1/x) shape:[], type:FLOAT32 RO 4 bytes, buffer: 135, data:[-1]
  T#135(MobilenetV3large/tf.math.multiply/Mul/y) shape:[], type:FLOAT32 RO 4 bytes, buffer: 136, data:[0.166667]
  T#136(MobilenetV3large/rescaling/mul) shape_signature:[-1, -1, -1, 3], type:FLOAT32
  T#137(MobilenetV3large/rescaling/add) shape_signature:[-1, -1, -1, 3], type:FLOAT32
  T#138(MobilenetV3large/Conv/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv/project/Conv2D;MobilenetV3large/Conv/Conv2D) shape_signature:[-1, -1, -1, 16], type:FLOAT32
  T#139(MobilenetV3large/multiply/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu/Relu6;MobilenetV3large/tf.__operators__.add/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply/Mul) shape_signature:[-1, -1, -1, 16], type:FLOAT32
  T#140(MobilenetV3large/re_lu_1/Relu;MobilenetV3large/expanded_conv/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv/project/Conv2D;MobilenetV3large/expanded_conv/depthwise/depthwise) shape_signature:[-1, -1, -1, 16], type:FLOAT32
  T#141(MobilenetV3large/expanded_conv/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv/project/Conv2D) shape_signature:[-1, -1, -1, 16], type:FLOAT32
  T#142(MobilenetV3large/expanded_conv/Add/add) shape_signature:[-1, -1, -1, 16], type:FLOAT32
  T#143(MobilenetV3large/re_lu_2/Relu;MobilenetV3large/expanded_conv_1/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_1/depthwise/depthwise;MobilenetV3large/expanded_conv_1/expand/Conv2D) shape_signature:[-1, -1, -1, 64], type:FLOAT32
  T#144(MobilenetV3large/expanded_conv_1/depthwise/pad/Pad) shape_signature:[-1, -1, -1, 64], type:FLOAT32
  T#145(MobilenetV3large/re_lu_3/Relu;MobilenetV3large/expanded_conv_1/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_1/depthwise/depthwise) shape_signature:[-1, -1, -1, 64], type:FLOAT32
  T#146(MobilenetV3large/expanded_conv_1/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_1/project/Conv2D) shape_signature:[-1, -1, -1, 24], type:FLOAT32
  T#147(MobilenetV3large/re_lu_4/Relu;MobilenetV3large/expanded_conv_2/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_2/expand/Conv2D) shape_signature:[-1, -1, -1, 72], type:FLOAT32
  T#148(MobilenetV3large/re_lu_5/Relu;MobilenetV3large/expanded_conv_2/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_2/depthwise/depthwise) shape_signature:[-1, -1, -1, 72], type:FLOAT32
  T#149(MobilenetV3large/expanded_conv_2/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_2/project/Conv2D) shape_signature:[-1, -1, -1, 24], type:FLOAT32
  T#150(MobilenetV3large/expanded_conv_2/Add/add) shape_signature:[-1, -1, -1, 24], type:FLOAT32
  T#151(MobilenetV3large/re_lu_6/Relu;MobilenetV3large/expanded_conv_3/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_3/expand/Conv2D) shape_signature:[-1, -1, -1, 72], type:FLOAT32
  T#152(MobilenetV3large/expanded_conv_3/depthwise/pad/Pad) shape_signature:[-1, -1, -1, 72], type:FLOAT32
  T#153(MobilenetV3large/re_lu_7/Relu;MobilenetV3large/expanded_conv_3/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_3/depthwise/depthwise) shape_signature:[-1, -1, -1, 72], type:FLOAT32
  T#154(MobilenetV3large/expanded_conv_3/squeeze_excite/AvgPool/Mean) shape_signature:[-1, 1, 1, 72], type:FLOAT32
  T#155(MobilenetV3large/expanded_conv_3/squeeze_excite/Relu/Relu;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv/BiasAdd;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 24], type:FLOAT32
  T#156(MobilenetV3large/re_lu_8/Relu6;MobilenetV3large/tf.__operators__.add_1/AddV2;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_3/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y1) shape_signature:[-1, 1, 1, 72], type:FLOAT32
  T#157(MobilenetV3large/tf.math.multiply_1/Mul) shape_signature:[-1, 1, 1, 72], type:FLOAT32
  T#158(MobilenetV3large/expanded_conv_3/squeeze_excite/Mul/mul) shape_signature:[-1, -1, -1, 72], type:FLOAT32
  T#159(MobilenetV3large/expanded_conv_3/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_5/project/Conv2D;MobilenetV3large/expanded_conv_3/project/Conv2D) shape_signature:[-1, -1, -1, 40], type:FLOAT32
  T#160(MobilenetV3large/re_lu_9/Relu;MobilenetV3large/expanded_conv_4/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_4/expand/Conv2D) shape_signature:[-1, -1, -1, 120], type:FLOAT32
  T#161(MobilenetV3large/re_lu_10/Relu;MobilenetV3large/expanded_conv_4/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_4/depthwise/depthwise) shape_signature:[-1, -1, -1, 120], type:FLOAT32
  T#162(MobilenetV3large/expanded_conv_4/squeeze_excite/AvgPool/Mean) shape_signature:[-1, 1, 1, 120], type:FLOAT32
  T#163(MobilenetV3large/expanded_conv_4/squeeze_excite/Relu/Relu;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv/BiasAdd;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 32], type:FLOAT32
  T#164(MobilenetV3large/re_lu_11/Relu6;MobilenetV3large/tf.__operators__.add_2/AddV2;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_4/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y1) shape_signature:[-1, 1, 1, 120], type:FLOAT32
  T#165(MobilenetV3large/tf.math.multiply_2/Mul) shape_signature:[-1, 1, 1, 120], type:FLOAT32
  T#166(MobilenetV3large/expanded_conv_4/squeeze_excite/Mul/mul) shape_signature:[-1, -1, -1, 120], type:FLOAT32
  T#167(MobilenetV3large/expanded_conv_4/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_5/project/Conv2D;MobilenetV3large/expanded_conv_4/project/Conv2D) shape_signature:[-1, -1, -1, 40], type:FLOAT32
  T#168(MobilenetV3large/expanded_conv_4/Add/add) shape_signature:[-1, -1, -1, 40], type:FLOAT32
  T#169(MobilenetV3large/re_lu_12/Relu;MobilenetV3large/expanded_conv_5/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_5/expand/Conv2D) shape_signature:[-1, -1, -1, 120], type:FLOAT32
  T#170(MobilenetV3large/re_lu_13/Relu;MobilenetV3large/expanded_conv_5/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_5/depthwise/depthwise) shape_signature:[-1, -1, -1, 120], type:FLOAT32
  T#171(MobilenetV3large/expanded_conv_5/squeeze_excite/AvgPool/Mean) shape_signature:[-1, 1, 1, 120], type:FLOAT32
  T#172(MobilenetV3large/expanded_conv_5/squeeze_excite/Relu/Relu;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv/BiasAdd;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 32], type:FLOAT32
  T#173(MobilenetV3large/re_lu_14/Relu6;MobilenetV3large/tf.__operators__.add_3/AddV2;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_5/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y1) shape_signature:[-1, 1, 1, 120], type:FLOAT32
  T#174(MobilenetV3large/tf.math.multiply_3/Mul) shape_signature:[-1, 1, 1, 120], type:FLOAT32
  T#175(MobilenetV3large/expanded_conv_5/squeeze_excite/Mul/mul) shape_signature:[-1, -1, -1, 120], type:FLOAT32
  T#176(MobilenetV3large/expanded_conv_5/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_5/project/Conv2D) shape_signature:[-1, -1, -1, 40], type:FLOAT32
  T#177(MobilenetV3large/expanded_conv_5/Add/add) shape_signature:[-1, -1, -1, 40], type:FLOAT32
  T#178(MobilenetV3large/expanded_conv_6/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_6/expand/Conv2D) shape_signature:[-1, -1, -1, 240], type:FLOAT32
  T#179(MobilenetV3large/multiply_1/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_15/Relu6;MobilenetV3large/tf.__operators__.add_4/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_4/Mul) shape_signature:[-1, -1, -1, 240], type:FLOAT32
  T#180(MobilenetV3large/expanded_conv_6/depthwise/pad/Pad) shape_signature:[-1, -1, -1, 240], type:FLOAT32
  T#181(MobilenetV3large/expanded_conv_6/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_6/depthwise/depthwise) shape_signature:[-1, -1, -1, 240], type:FLOAT32
  T#182(MobilenetV3large/multiply_2/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_16/Relu6;MobilenetV3large/tf.__operators__.add_5/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_5/Mul) shape_signature:[-1, -1, -1, 240], type:FLOAT32
  T#183(MobilenetV3large/expanded_conv_6/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/project/Conv2D;MobilenetV3large/expanded_conv_6/project/Conv2D) shape_signature:[-1, -1, -1, 80], type:FLOAT32
  T#184(MobilenetV3large/expanded_conv_7/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_7/depthwise/depthwise;MobilenetV3large/expanded_conv_7/expand/Conv2D) shape_signature:[-1, -1, -1, 200], type:FLOAT32
  T#185(MobilenetV3large/multiply_3/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_17/Relu6;MobilenetV3large/tf.__operators__.add_6/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_6/Mul) shape_signature:[-1, -1, -1, 200], type:FLOAT32
  T#186(MobilenetV3large/expanded_conv_7/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_7/depthwise/depthwise1) shape_signature:[-1, -1, -1, 200], type:FLOAT32
  T#187(MobilenetV3large/multiply_4/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_18/Relu6;MobilenetV3large/tf.__operators__.add_7/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_7/Mul) shape_signature:[-1, -1, -1, 200], type:FLOAT32
  T#188(MobilenetV3large/expanded_conv_7/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/project/Conv2D;MobilenetV3large/expanded_conv_7/project/Conv2D) shape_signature:[-1, -1, -1, 80], type:FLOAT32
  T#189(MobilenetV3large/expanded_conv_7/Add/add) shape_signature:[-1, -1, -1, 80], type:FLOAT32
  T#190(MobilenetV3large/expanded_conv_8/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/depthwise/depthwise;MobilenetV3large/expanded_conv_8/expand/Conv2D) shape_signature:[-1, -1, -1, 184], type:FLOAT32
  T#191(MobilenetV3large/multiply_5/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_19/Relu6;MobilenetV3large/tf.__operators__.add_8/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_8/Mul) shape_signature:[-1, -1, -1, 184], type:FLOAT32
  T#192(MobilenetV3large/expanded_conv_8/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/depthwise/depthwise;MobilenetV3large/expanded_conv_8/depthwise/depthwise) shape_signature:[-1, -1, -1, 184], type:FLOAT32
  T#193(MobilenetV3large/multiply_6/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_20/Relu6;MobilenetV3large/tf.__operators__.add_9/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_9/Mul) shape_signature:[-1, -1, -1, 184], type:FLOAT32
  T#194(MobilenetV3large/expanded_conv_8/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/project/Conv2D;MobilenetV3large/expanded_conv_8/project/Conv2D) shape_signature:[-1, -1, -1, 80], type:FLOAT32
  T#195(MobilenetV3large/expanded_conv_8/Add/add) shape_signature:[-1, -1, -1, 80], type:FLOAT32
  T#196(MobilenetV3large/expanded_conv_9/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/depthwise/depthwise;MobilenetV3large/expanded_conv_9/expand/Conv2D) shape_signature:[-1, -1, -1, 184], type:FLOAT32
  T#197(MobilenetV3large/multiply_7/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_21/Relu6;MobilenetV3large/tf.__operators__.add_10/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_10/Mul) shape_signature:[-1, -1, -1, 184], type:FLOAT32
  T#198(MobilenetV3large/expanded_conv_9/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/depthwise/depthwise1) shape_signature:[-1, -1, -1, 184], type:FLOAT32
  T#199(MobilenetV3large/multiply_8/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_22/Relu6;MobilenetV3large/tf.__operators__.add_11/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_11/Mul) shape_signature:[-1, -1, -1, 184], type:FLOAT32
  T#200(MobilenetV3large/expanded_conv_9/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_9/project/Conv2D) shape_signature:[-1, -1, -1, 80], type:FLOAT32
  T#201(MobilenetV3large/expanded_conv_9/Add/add) shape_signature:[-1, -1, -1, 80], type:FLOAT32
  T#202(MobilenetV3large/expanded_conv_10/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_10/expand/Conv2D) shape_signature:[-1, -1, -1, 480], type:FLOAT32
  T#203(MobilenetV3large/multiply_9/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_23/Relu6;MobilenetV3large/tf.__operators__.add_12/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_12/Mul) shape_signature:[-1, -1, -1, 480], type:FLOAT32
  T#204(MobilenetV3large/expanded_conv_10/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_10/depthwise/depthwise;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/Conv2D1) shape_signature:[-1, -1, -1, 480], type:FLOAT32
  T#205(MobilenetV3large/multiply_10/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_24/Relu6;MobilenetV3large/tf.__operators__.add_13/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_13/Mul) shape_signature:[-1, -1, -1, 480], type:FLOAT32
  T#206(MobilenetV3large/expanded_conv_10/squeeze_excite/AvgPool/Mean) shape_signature:[-1, 1, 1, 480], type:FLOAT32
  T#207(MobilenetV3large/expanded_conv_10/squeeze_excite/Relu/Relu;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/BiasAdd;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 120], type:FLOAT32
  T#208(MobilenetV3large/re_lu_25/Relu6;MobilenetV3large/tf.__operators__.add_14/AddV2;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_10/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y1) shape_signature:[-1, 1, 1, 480], type:FLOAT32
  T#209(MobilenetV3large/tf.math.multiply_14/Mul) shape_signature:[-1, 1, 1, 480], type:FLOAT32
  T#210(MobilenetV3large/expanded_conv_10/squeeze_excite/Mul/mul) shape_signature:[-1, -1, -1, 480], type:FLOAT32
  T#211(MobilenetV3large/expanded_conv_10/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_11/project/Conv2D;MobilenetV3large/expanded_conv_10/project/Conv2D) shape_signature:[-1, -1, -1, 112], type:FLOAT32
  T#212(MobilenetV3large/expanded_conv_11/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_11/expand/Conv2D) shape_signature:[-1, -1, -1, 672], type:FLOAT32
  T#213(MobilenetV3large/multiply_11/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_26/Relu6;MobilenetV3large/tf.__operators__.add_15/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_15/Mul) shape_signature:[-1, -1, -1, 672], type:FLOAT32
  T#214(MobilenetV3large/expanded_conv_11/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_11/depthwise/depthwise) shape_signature:[-1, -1, -1, 672], type:FLOAT32
  T#215(MobilenetV3large/multiply_12/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_27/Relu6;MobilenetV3large/tf.__operators__.add_16/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_16/Mul) shape_signature:[-1, -1, -1, 672], type:FLOAT32
  T#216(MobilenetV3large/expanded_conv_11/squeeze_excite/AvgPool/Mean) shape_signature:[-1, 1, 1, 672], type:FLOAT32
  T#217(MobilenetV3large/expanded_conv_11/squeeze_excite/Relu/Relu;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv/BiasAdd;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 168], type:FLOAT32
  T#218(MobilenetV3large/re_lu_28/Relu6;MobilenetV3large/tf.__operators__.add_17/AddV2;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_11/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y1) shape_signature:[-1, 1, 1, 672], type:FLOAT32
  T#219(MobilenetV3large/tf.math.multiply_17/Mul) shape_signature:[-1, 1, 1, 672], type:FLOAT32
  T#220(MobilenetV3large/expanded_conv_11/squeeze_excite/Mul/mul) shape_signature:[-1, -1, -1, 672], type:FLOAT32
  T#221(MobilenetV3large/expanded_conv_11/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_11/project/Conv2D) shape_signature:[-1, -1, -1, 112], type:FLOAT32
  T#222(MobilenetV3large/expanded_conv_11/Add/add) shape_signature:[-1, -1, -1, 112], type:FLOAT32
  T#223(MobilenetV3large/expanded_conv_12/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/expanded_conv_12/expand/Conv2D) shape_signature:[-1, -1, -1, 672], type:FLOAT32
  T#224(MobilenetV3large/multiply_13/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_29/Relu6;MobilenetV3large/tf.__operators__.add_18/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_18/Mul) shape_signature:[-1, -1, -1, 672], type:FLOAT32
  T#225(MobilenetV3large/expanded_conv_12/depthwise/pad/Pad) shape_signature:[-1, -1, -1, 672], type:FLOAT32
  T#226(MobilenetV3large/expanded_conv_12/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_12/depthwise/depthwise;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D1) shape_signature:[-1, -1, -1, 672], type:FLOAT32
  T#227(MobilenetV3large/multiply_14/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_30/Relu6;MobilenetV3large/tf.__operators__.add_19/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_19/Mul) shape_signature:[-1, -1, -1, 672], type:FLOAT32
  T#228(MobilenetV3large/expanded_conv_12/squeeze_excite/AvgPool/Mean) shape_signature:[-1, 1, 1, 672], type:FLOAT32
  T#229(MobilenetV3large/expanded_conv_12/squeeze_excite/Relu/Relu;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv/BiasAdd;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 168], type:FLOAT32
  T#230(MobilenetV3large/re_lu_31/Relu6;MobilenetV3large/tf.__operators__.add_20/AddV2;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/expanded_conv_12/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y1) shape_signature:[-1, 1, 1, 672], type:FLOAT32
  T#231(MobilenetV3large/tf.math.multiply_20/Mul) shape_signature:[-1, 1, 1, 672], type:FLOAT32
  T#232(MobilenetV3large/expanded_conv_12/squeeze_excite/Mul/mul) shape_signature:[-1, -1, -1, 672], type:FLOAT32
  T#233(MobilenetV3large/expanded_conv_12/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_14/project/Conv2D;MobilenetV3large/expanded_conv_12/project/Conv2D) shape_signature:[-1, -1, -1, 160], type:FLOAT32
  T#234(MobilenetV3large/expanded_conv_13/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/Conv_1/Conv2D;MobilenetV3large/expanded_conv_13/expand/Conv2D) shape_signature:[-1, -1, -1, 960], type:FLOAT32
  T#235(MobilenetV3large/multiply_15/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_32/Relu6;MobilenetV3large/tf.__operators__.add_21/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_21/Mul) shape_signature:[-1, -1, -1, 960], type:FLOAT32
  T#236(MobilenetV3large/expanded_conv_13/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/Conv_1/Conv2D;MobilenetV3large/expanded_conv_13/depthwise/depthwise) shape_signature:[-1, -1, -1, 960], type:FLOAT32
  T#237(MobilenetV3large/multiply_16/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_33/Relu6;MobilenetV3large/tf.__operators__.add_22/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_22/Mul) shape_signature:[-1, -1, -1, 960], type:FLOAT32
  T#238(MobilenetV3large/expanded_conv_13/squeeze_excite/AvgPool/Mean) shape_signature:[-1, 1, 1, 960], type:FLOAT32
  T#239(MobilenetV3large/expanded_conv_13/squeeze_excite/Relu/Relu;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv/BiasAdd;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 240], type:FLOAT32
  T#240(MobilenetV3large/re_lu_34/Relu6;MobilenetV3large/tf.__operators__.add_23/AddV2;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/Conv_1/Conv2D;MobilenetV3large/expanded_conv_13/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y1) shape_signature:[-1, 1, 1, 960], type:FLOAT32
  T#241(MobilenetV3large/tf.math.multiply_23/Mul) shape_signature:[-1, 1, 1, 960], type:FLOAT32
  T#242(MobilenetV3large/expanded_conv_13/squeeze_excite/Mul/mul) shape_signature:[-1, -1, -1, 960], type:FLOAT32
  T#243(MobilenetV3large/expanded_conv_13/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_14/project/Conv2D;MobilenetV3large/expanded_conv_13/project/Conv2D) shape_signature:[-1, -1, -1, 160], type:FLOAT32
  T#244(MobilenetV3large/expanded_conv_13/Add/add) shape_signature:[-1, -1, -1, 160], type:FLOAT32
  T#245(MobilenetV3large/expanded_conv_14/expand/BatchNorm/FusedBatchNormV3;MobilenetV3large/Conv_1/Conv2D;MobilenetV3large/expanded_conv_14/expand/Conv2D) shape_signature:[-1, -1, -1, 960], type:FLOAT32
  T#246(MobilenetV3large/multiply_17/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_35/Relu6;MobilenetV3large/tf.__operators__.add_24/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_24/Mul) shape_signature:[-1, -1, -1, 960], type:FLOAT32
  T#247(MobilenetV3large/expanded_conv_14/depthwise/BatchNorm/FusedBatchNormV3;MobilenetV3large/Conv_1/Conv2D;MobilenetV3large/expanded_conv_14/depthwise/depthwise) shape_signature:[-1, -1, -1, 960], type:FLOAT32
  T#248(MobilenetV3large/multiply_18/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_36/Relu6;MobilenetV3large/tf.__operators__.add_25/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_25/Mul) shape_signature:[-1, -1, -1, 960], type:FLOAT32
  T#249(MobilenetV3large/expanded_conv_14/squeeze_excite/AvgPool/Mean) shape_signature:[-1, 1, 1, 960], type:FLOAT32
  T#250(MobilenetV3large/expanded_conv_14/squeeze_excite/Relu/Relu;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/BiasAdd;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/Conv2D;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 240], type:FLOAT32
  T#251(MobilenetV3large/re_lu_37/Relu6;MobilenetV3large/tf.__operators__.add_26/AddV2;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv_1/BiasAdd/ReadVariableOp;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv_1/BiasAdd;MobilenetV3large/Conv_1/Conv2D;MobilenetV3large/expanded_conv_14/squeeze_excite/Conv_1/Conv2D;MobilenetV3large/tf.__operators__.add/y1) shape_signature:[-1, 1, 1, 960], type:FLOAT32
  T#252(MobilenetV3large/tf.math.multiply_26/Mul) shape_signature:[-1, 1, 1, 960], type:FLOAT32
  T#253(MobilenetV3large/expanded_conv_14/squeeze_excite/Mul/mul) shape_signature:[-1, -1, -1, 960], type:FLOAT32
  T#254(MobilenetV3large/expanded_conv_14/project/BatchNorm/FusedBatchNormV3;MobilenetV3large/expanded_conv_14/project/Conv2D) shape_signature:[-1, -1, -1, 160], type:FLOAT32
  T#255(MobilenetV3large/expanded_conv_14/Add/add) shape_signature:[-1, -1, -1, 160], type:FLOAT32
  T#256(MobilenetV3large/Conv_1/BatchNorm/FusedBatchNormV3;MobilenetV3large/Conv_1/Conv2D) shape_signature:[-1, -1, -1, 960], type:FLOAT32
  T#257(MobilenetV3large/multiply_19/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_38/Relu6;MobilenetV3large/tf.__operators__.add_27/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_27/Mul) shape_signature:[-1, -1, -1, 960], type:FLOAT32
  T#258(MobilenetV3large/global_average_pooling2d/Mean) shape_signature:[-1, 1, 1, 960], type:FLOAT32
  T#259(MobilenetV3large/Conv_2/BiasAdd;MobilenetV3large/Conv_2/Conv2D;MobilenetV3large/Conv_2/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 1280], type:FLOAT32
  T#260(MobilenetV3large/multiply_20/mul;MobilenetV3large/tf.__operators__.add/y;MobilenetV3large/re_lu_39/Relu6;MobilenetV3large/tf.__operators__.add_28/AddV2;MobilenetV3large/tf.math.multiply/Mul/y;MobilenetV3large/tf.math.multiply_28/Mul) shape_signature:[-1, 1, 1, 1280], type:FLOAT32
  T#261(MobilenetV3large/Logits/BiasAdd;MobilenetV3large/Logits/Conv2D;MobilenetV3large/Logits/BiasAdd/ReadVariableOp) shape_signature:[-1, 1, 1, 1000], type:FLOAT32
  T#262(MobilenetV3large/flatten_1/Reshape) shape_signature:[-1, 1000], type:FLOAT32
  T#263(StatefulPartitionedCall:0) shape_signature:[-1, 1000], type:FLOAT32

---------------------------------------------------------------
Your TFLite model has '1' signature_def(s).

Signature#0 key: 'serving_default'

- Subgraph: Subgraph#0
- Inputs: 
    'input_1' : T#0
- Outputs: 
    'Predictions' : T#263

---------------------------------------------------------------
              Model size:   21944024 bytes
    Non-data buffer size:      60500 bytes (00.28 %)
  Total data buffer size:   21883524 bytes (99.72 %)
    (Zero value buffers):          0 bytes (00.00 %)

* Buffers of TFLite model are mostly used for constant tensors.
  And zero value buffers are buffers filled with zeros.
  Non-data buffers area are used to store operators, subgraphs and etc.
  You can find more details from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs
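
As a quick sanity check (a minimal sketch, assuming the converted MobileNetV3Large flatbuffer from the analysis above is still available in a Python variable such as fb_model), the "Model size" reported above should correspond to the size of the serialized flatbuffer bytes that were passed in as model_content:

# Hypothetical cross-check: the "Model size" line printed by the analyzer
# should correspond to the length of the flatbuffer analyzed via
# model_content (assumed here to be stored in fb_model).
print(f'Serialized flatbuffer size: {len(fb_model)} bytes')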

Check GPU delegate compatibility

The Model Analyzer API provides a way to check the GPU delegate compatibility of a given model via the gpu_compatibility=True option.
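
For example (a minimal sketch; your_model.tflite is a placeholder file name, not one produced in this tutorial), the same check can also be run against a model that is already saved on disk by passing model_path instead of model_content:

import tensorflow as tf

# Check an existing .tflite file on disk for GPU delegate compatibility.
# 'your_model.tflite' is a placeholder; replace it with a real model path.
tf.lite.experimental.Analyzer.analyze(
    model_path='your_model.tflite',
    gpu_compatibility=True)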

Case 1: When the model is incompatible

The following code shows how to use the gpu_compatibility=True option for a simple tf.function that uses tf.slice with a 2D tensor and tf.cosh, which are not compatible with the GPU delegate.

You will see a GPU COMPATIBILITY WARNING for every node that has compatibility issues.

import tensorflow as tf

@tf.function(input_signature=[
    tf.TensorSpec(shape=[4, 4], dtype=tf.float32)
])
def func(x):
  return tf.cosh(x) + tf.slice(x, [1, 1], [1, 1])

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [func.get_concrete_function()], func)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
fb_model = converter.convert()

tf.lite.experimental.Analyzer.analyze(model_content=fb_model, gpu_compatibility=True)
=== TFLite ModelAnalyzer ===

Your TFLite model has '1' subgraph(s). In the subgraph description below,
T# represents the Tensor numbers. For example, in Subgraph#0, the FlexCosh op takes
tensor #0 as input and produces tensor #2 as output.

Subgraph#0 main(T#0) -> [T#4]
  Op#0 FlexCosh(T#0) -> [T#2]
GPU COMPATIBILITY WARNING: Not supported custom op FlexCosh
  Op#1 SLICE(T#0, T#1[1, 1], T#1[1, 1]) -> [T#3]
GPU COMPATIBILITY WARNING: SLICE supports for 3 or 4 dimensional tensors only, but node has 2 dimensional tensors.
  Op#2 ADD(T#2, T#3) -> [T#4]

GPU COMPATIBILITY WARNING: Subgraph#0 has GPU delegate compatibility issues at nodes 0, 1 on TFLite runtime version 2.15.0-rc1

Tensors of Subgraph#0
  T#0(x) shape:[4, 4], type:FLOAT32
  T#1(Slice/begin) shape:[2], type:INT32 RO 8 bytes, buffer: 2, data:[1, 1]
  T#2(Cosh) shape:[4, 4], type:FLOAT32
  T#3(Slice) shape:[1, 1], type:FLOAT32
  T#4(Identity) shape:[4, 4], type:FLOAT32

---------------------------------------------------------------
              Model size:       1128 bytes
    Non-data buffer size:       1008 bytes (89.36 %)
  Total data buffer size:        120 bytes (10.64 %)
    (Zero value buffers):          0 bytes (00.00 %)

* Buffers of TFLite model are mostly used for constant tensors.
  And zero value buffers are buffers filled with zeros.
  Non-data buffers area are used to store operators, subgraphs and etc.
  You can find more details from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs
2023-11-07 21:46:15.325771: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:378] Ignored output_format.
2023-11-07 21:46:15.325815: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:381] Ignored drop_control_dependency.
Summary on the non-converted ops:
---------------------------------

 * Accepted dialects: tfl, builtin, func
 * Non-Converted Ops: 2, Total Ops 7, % non-converted = 28.57 %
 * 1 ARITH ops, 1 TF ops

- arith.constant:    1 occurrences  (i32: 1)



- tf.Cosh:    1 occurrences  (f32: 1)
  (f32: 1)
  (f32: 1)
2023-11-07 21:46:15.348434: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:2921] TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Select TFop(s):
Flex ops: FlexCosh
Details:
    tf.Cosh(tensor<4x4xf32>) -> (tensor<4x4xf32>) : {device = ""}
See instructions: https://www.tensorflow.org/lite/guide/ops_select

Case 2: When the model is compatible

In this example, the given model is compatible with the GPU delegate.

Note: Even if the tool does not find any compatibility issues, it does not guarantee that your model will work well with the GPU delegate on every device. Some runtime incompatibilities can still occur, for example when the CL_DEVICE_IMAGE_SUPPORT feature is missing from the target OpenGL backend. A sketch of such a runtime check appears at the end of this section.

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(128, 128)),
  tf.keras.layers.Dense(256, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10)
])

fb_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

tf.lite.experimental.Analyzer.analyze(model_content=fb_model, gpu_compatibility=True)
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmp6ddblo6n/assets
INFO:tensorflow:Assets written to: /tmpfs/tmp/tmp6ddblo6n/assets
=== TFLite ModelAnalyzer ===

Your TFLite model has '1' subgraph(s). In the subgraph description below,
T# represents the Tensor numbers. For example, in Subgraph#0, the RESHAPE op takes
tensor #0 and tensor #1 as input and produces tensor #4 as output.

Subgraph#0 main(T#0) -> [T#6]
  Op#0 RESHAPE(T#0, T#1[-1, 16384]) -> [T#4]
  Op#1 FULLY_CONNECTED(T#4, T#2, T#-1) -> [T#5]
  Op#2 FULLY_CONNECTED(T#5, T#3, T#-1) -> [T#6]

Tensors of Subgraph#0
  T#0(serving_default_flatten_2_input:0) shape_signature:[-1, 128, 128], type:FLOAT32
  T#1(sequential_1/flatten_2/Const) shape:[2], type:INT32 RO 8 bytes, buffer: 2, data:[-1, 16384]
  T#2(sequential_1/dense_2/MatMul1) shape:[256, 16384], type:FLOAT32 RO 16777216 bytes, buffer: 3, data:[0.0029392, -0.0126297, 0.0039341, 0.00753378, 0.00489575, ...]
  T#3(sequential_1/dense_3/MatMul) shape:[10, 256], type:FLOAT32 RO 10240 bytes, buffer: 4, data:[0.13594, 0.132244, -0.109182, -0.140199, -0.109101, ...]
  T#4(sequential_1/flatten_2/Reshape) shape_signature:[-1, 16384], type:FLOAT32
  T#5(sequential_1/dense_2/MatMul;sequential_1/dense_2/Relu;sequential_1/dense_2/BiasAdd) shape_signature:[-1, 256], type:FLOAT32
  T#6(StatefulPartitionedCall:0) shape_signature:[-1, 10], type:FLOAT32


Your model looks compatible with GPU delegate on TFLite runtime version 2.15.0-rc1.
This does not guarantee that your model will work well with GPU delegate because there could still be runtime incompatibililties.
---------------------------------------------------------------
Your TFLite model has '1' signature_def(s).

Signature#0 key: 'serving_default'

- Subgraph: Subgraph#0
- Inputs: 
    'flatten_2_input' : T#0
- Outputs: 
    'dense_3' : T#6

---------------------------------------------------------------
              Model size:   16789072 bytes
    Non-data buffer size:       1504 bytes (00.01 %)
  Total data buffer size:   16787568 bytes (99.99 %)
    (Zero value buffers):          0 bytes (00.00 %)

* Buffers of TFLite model are mostly used for constant tensors.
  And zero value buffers are buffers filled with zeros.
  Non-data buffers area are used to store operators, subgraphs and etc.
  You can find more details from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/schema/schema.fbs
2023-11-07 21:46:16.002148: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:378] Ignored output_format.
2023-11-07 21:46:16.002194: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:381] Ignored drop_control_dependency.
Summary on the non-converted ops:
---------------------------------

 * Accepted dialects: tfl, builtin, func
 * Non-Converted Ops: 3, Total Ops 10, % non-converted = 30.00 %
 * 3 ARITH ops

- arith.constant:    3 occurrences  (f32: 2, i32: 1)



  (f32: 2)

  (f32: 1)
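
To go one step further than this static check (a sketch only, not part of the original tutorial), you could try creating a TFLite interpreter with a GPU delegate actually loaded. The delegate library name below is an assumed, platform-dependent placeholder, and fb_model is assumed to hold the compatible flatbuffer converted above:

import tensorflow as tf

# Attempt to apply a GPU delegate at runtime. The library path is platform
# dependent (for example a .so built for Linux/Android); treat this value
# as a placeholder rather than something shipped with the TensorFlow wheel.
gpu_lib_path = 'libtensorflowlite_gpu_delegate.so'

try:
  delegate = tf.lite.experimental.load_delegate(gpu_lib_path)
  interpreter = tf.lite.Interpreter(
      model_content=fb_model, experimental_delegates=[delegate])
  interpreter.allocate_tensors()
  print('GPU delegate applied successfully at runtime.')
except (ValueError, RuntimeError, OSError) as e:
  # load_delegate raises an error when the library cannot be loaded or the
  # platform is unsupported, which is one kind of runtime incompatibility
  # the note above refers to.
  print(f'GPU delegate could not be applied: {e}')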