Model hosting protocol
This document describes the URL conventions used when hosting all model types on tfhub.dev (TFJS, TF Lite, and TensorFlow models). It also describes the HTTP(S)-based protocol implemented by the `tensorflow_hub` library in order to load TensorFlow models from tfhub.dev and compatible services into TensorFlow programs.

Its key feature is that the same URL is used in code to load a model and in a browser to view the model documentation.
General URL conventions
tfhub.dev supports the following URL formats:

- TF Hub publishers follow `https://tfhub.dev/<publisher>`
- TF Hub collections follow `https://tfhub.dev/<publisher>/collection/<collection_name>`
- TF Hub models have versioned URLs of the form `https://tfhub.dev/<publisher>/<model_name>/<version>` and unversioned URLs `https://tfhub.dev/<publisher>/<model_name>` that resolve to the latest version of the model.
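Under these conventions, the URLs can be assembled mechanically. The following sketch builds each URL form; the helper functions are illustrative only and not part of any library:

```python
# Illustrative helpers for assembling tfhub.dev URLs.
# These functions are not part of the tensorflow_hub library.
BASE = "https://tfhub.dev"

def publisher_url(publisher):
    return f"{BASE}/{publisher}"

def collection_url(publisher, collection_name):
    return f"{BASE}/{publisher}/collection/{collection_name}"

def model_url(publisher, model_name, version=None):
    # Without a version, tfhub.dev resolves the URL to the latest
    # version of the model.
    url = f"{BASE}/{publisher}/{model_name}"
    return f"{url}/{version}" if version is not None else url

print(model_url("google", "spice", 2))  # https://tfhub.dev/google/spice/2
print(model_url("google", "spice"))     # https://tfhub.dev/google/spice
```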
TF Hub models can be downloaded as compressed assets by appending URL parameters to the tfhub.dev model URL. However, the URL parameters required to achieve that depend on the model type:

- TensorFlow models (both SavedModel and TF1 Hub formats): append `?tf-hub-format=compressed` to the TensorFlow model URL.
- TFJS models: append `?tfjs-format=compressed` to the TFJS model URL to download the compressed asset, or append `/model.json?tfjs-format=file` to read it from remote storage.
- TF Lite models: append `?lite-format=tflite` to the TF Lite model URL.
For example:

| Type | Model URL | Download type | URL param | Download URL |
|---|---|---|---|---|
| TensorFlow (SavedModel, TF1 Hub format) | https://tfhub.dev/google/spice/2 | .tar.gz | ?tf-hub-format=compressed | https://tfhub.dev/google/spice/2?tf-hub-format=compressed |
| TF Lite | https://tfhub.dev/google/lite-model/spice/1 | .tflite | ?lite-format=tflite | https://tfhub.dev/google/lite-model/spice/1?lite-format=tflite |
| TF.js | https://tfhub.dev/google/tfjs-model/spice/2/default/1 | .tar.gz | ?tfjs-format=compressed | https://tfhub.dev/google/tfjs-model/spice/2/default/1?tfjs-format=compressed |
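The type-to-parameter mapping above can be sketched as a small helper. This function is illustrative only, not a `tensorflow_hub` API:

```python
# Map each model type to the download URL parameter listed above.
# Illustrative helper; not part of the tensorflow_hub library.
_DOWNLOAD_PARAMS = {
    "tensorflow": "?tf-hub-format=compressed",  # SavedModel / TF1 Hub format
    "tfjs": "?tfjs-format=compressed",
    "tflite": "?lite-format=tflite",
}

def download_url(model_url, model_type):
    try:
        return model_url + _DOWNLOAD_PARAMS[model_type]
    except KeyError:
        raise ValueError(f"unknown model type: {model_type}")

print(download_url("https://tfhub.dev/google/spice/2", "tensorflow"))
# https://tfhub.dev/google/spice/2?tf-hub-format=compressed
```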
Additionally, some models are hosted in a format that can be read directly from remote storage without being downloaded. This is especially useful if no local storage is available, such as when running a TF.js model in the browser or loading a SavedModel on Colab. Be aware that reading remotely hosted models without downloading them locally may increase latency.

| Type | Model URL | Response type | URL param | Request URL |
|---|---|---|---|---|
| TensorFlow (SavedModel, TF1 Hub format) | https://tfhub.dev/google/spice/2 | String (path to the GCS folder where the uncompressed model is stored) | ?tf-hub-format=uncompressed | https://tfhub.dev/google/spice/2?tf-hub-format=uncompressed |
| TF.js | https://tfhub.dev/google/tfjs-model/spice/2/default/1 | .json | ?tfjs-format=file | https://tfhub.dev/google/tfjs-model/spice/2/default/1/model.json?tfjs-format=file |
tensorflow_hub library protocol
This section describes how models are hosted on tfhub.dev for use with the tensorflow_hub library. If you want to host your own model repository to work with the tensorflow_hub library, your HTTP(S) distribution service should provide an implementation of this protocol.

Note that this section does not address hosting TF Lite and TFJS models, since they are not downloaded via the `tensorflow_hub` library. For more information on hosting these model types, see above.
Compressed hosting
Models are stored on tfhub.dev as compressed tar.gz files. By default, the tensorflow_hub library automatically downloads the compressed model. They can also be downloaded manually by appending `?tf-hub-format=compressed` to the model URL, for example:
wget https://tfhub.dev/tensorflow/albert_en_xxlarge/1?tf-hub-format=compressed
The root of the archive is the root of the model directory and should contain a SavedModel, as in this example:
# Create a compressed model from a SavedModel directory.
$ tar -cz -f model.tar.gz --owner=0 --group=0 -C /tmp/export-model/ .
# Inspect files inside a compressed model
$ tar -tf model.tar.gz
./
./variables/
./variables/variables.data-00000-of-00001
./variables/variables.index
./assets/
./saved_model.pb
Tarballs for use with the legacy TF1 Hub format will also contain a `./tfhub_module.pb` file.
When one of the `tensorflow_hub` library's model-loading APIs is invoked (hub.KerasLayer, hub.load, etc.), the library downloads the model, uncompresses it, and caches it locally. The `tensorflow_hub` library expects model URLs to be versioned and the model content of a given version to be immutable, so that it can be cached indefinitely. Learn more about caching models.

Uncompressed hosting
When the environment variable `TFHUB_MODEL_LOAD_FORMAT` or the command-line flag `--tfhub_model_load_format` is set to `UNCOMPRESSED`, the model is read directly from remote storage (GCS) instead of being downloaded and uncompressed locally. When this behavior is enabled, the library appends `?tf-hub-format=uncompressed` to the model URL. That request returns the path to the folder on GCS that contains the uncompressed model files. As an example,

https://tfhub.dev/google/spice/2?tf-hub-format=uncompressed

returns

gs://tfhub-modules/google/spice/2/uncompressed

in the body of the 303 response. The library then reads the model from that GCS destination.
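A service implementing this protocol for its own model repository would need routing along these lines. The sketch below is a minimal, hypothetical illustration (the host, bucket name, and paths are made up), not the tfhub.dev implementation:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical storage location for a self-hosted repository;
# the bucket name is made up for illustration.
GCS_PREFIX = "gs://my-model-bucket"

def route(request_url):
    """Return (status, body) for a model request, following the
    tf-hub-format protocol described above."""
    parsed = urlparse(request_url)
    fmt = parse_qs(parsed.query).get("tf-hub-format", ["compressed"])[0]
    if fmt == "compressed":
        # 200 whose body is the tar.gz archive of the model directory.
        return 200, f"<tar.gz archive for {parsed.path}>"
    if fmt == "uncompressed":
        # 303 whose body is the GCS folder holding the uncompressed model.
        return 303, f"{GCS_PREFIX}{parsed.path}/uncompressed"
    return 404, "unsupported tf-hub-format"

print(route("https://hub.example.com/google/spice/2?tf-hub-format=uncompressed"))
# (303, 'gs://my-model-bucket/google/spice/2/uncompressed')
```

Defaulting to `compressed` when the parameter is absent mirrors the library's default download behavior described under "Compressed hosting".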
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated (UTC): 2024-01-11.