What is transfer learning?
Sophisticated deep learning models have millions of parameters (weights), and
training them from scratch often requires large amounts of data and computing
resources. Transfer learning is a technique that shortcuts much of this by
taking a piece of a model that has already been trained on a related task and
reusing it in a new model.
For example, the next tutorial in this section will show you how to build your
own image recognizer that takes advantage of a model that was already trained to
recognize thousands of different kinds of objects in images. You can adapt the
existing knowledge in the pre-trained model to detect your own image classes
using much less training data than the original model required.
This is useful for rapidly developing new models as well as customizing models
in resource-constrained environments like browsers and mobile devices.
Most often when doing transfer learning, we don't adjust the weights of the
original model. Instead we remove the final layer and train a new (often fairly
shallow) model on top of the output of the truncated model. This is the
technique you will see demonstrated in the tutorials in this section:

- [Build a transfer-learning based image classifier](/js/tutorials/transfer/image_classification)
- [Build a transfer-learning based audio recognizer](/js/tutorials/transfer/audio_recognizer)
For an additional example of transfer learning using TensorFlow.js, see
Use a pre-trained model.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2023-05-26 UTC.