In this tutorial, we build a simple matrix factorization model with TFRS, using the MovieLens 100K dataset. We can then use this model to recommend movies to a given user.
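At its core, a factorization model like this scores a (user, movie) pair by the dot product of their learned embeddings. The following is a minimal sketch of that scoring idea; the function name and shapes are illustrative, not part of TFRS:

import tensorflow as tf

def dot_product_score(user_embedding: tf.Tensor, movie_embedding: tf.Tensor) -> tf.Tensor:
  # Both inputs are [batch_size, embedding_dim]; a higher dot product means
  # the model predicts a better match between the user and the movie.
  return tf.reduce_sum(user_embedding * movie_embedding, axis=-1)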
Import TFRS
First, install and import TFRS:
pip install -q tensorflow-recommenders
pip install -q --upgrade tensorflow-datasets
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
Read the data
# Ratings data.
ratings = tfds.load('movielens/100k-ratings', split="train")
# Features of all the available movies.
movies = tfds.load('movielens/100k-movies', split="train")
# Select the basic features.
ratings = ratings.map(lambda x: {
    "movie_title": x["movie_title"],
    "user_id": x["user_id"]
})
movies = movies.map(lambda x: x["movie_title"])
Build vocabularies to convert user IDs and movie titles into integer indices for the embedding layers:
user_ids_vocabulary = tf.keras.layers.StringLookup(mask_token=None)
user_ids_vocabulary.adapt(ratings.map(lambda x: x["user_id"]))
movie_titles_vocabulary = tf.keras.layers.StringLookup(mask_token=None)
movie_titles_vocabulary.adapt(movies)
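As a quick, purely illustrative sanity check, the adapted vocabularies map raw string IDs and titles to integer indices (with index 0 reserved for out-of-vocabulary values, since mask_token=None and the default single OOV bucket are used):

# Example lookups; the exact indices depend on the adapted data.
print(user_ids_vocabulary(tf.constant(["42"])))
print(movie_titles_vocabulary(tf.constant(["Toy Story (1995)"])))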
Define a model
We can define a TFRS model by inheriting from tfrs.Model and implementing the compute_loss method:
class MovieLensModel(tfrs.Model):
  # We derive from a custom base class to help reduce boilerplate. Under the hood,
  # these are still plain Keras Models.

  def __init__(
      self,
      user_model: tf.keras.Model,
      movie_model: tf.keras.Model,
      task: tfrs.tasks.Retrieval):
    super().__init__()

    # Set up user and movie representations.
    self.user_model = user_model
    self.movie_model = movie_model

    # Set up a retrieval task.
    self.task = task

  def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
    # Define how the loss is computed.
    user_embeddings = self.user_model(features["user_id"])
    movie_embeddings = self.movie_model(features["movie_title"])

    return self.task(user_embeddings, movie_embeddings)
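By default, the Retrieval task computes an in-batch softmax loss: every user in a batch is scored against every movie in the same batch, and the movie the user actually interacted with is treated as the positive. The sketch below is only an approximation for intuition, not the library's exact implementation:

def sketch_in_batch_retrieval_loss(user_embeddings: tf.Tensor,
                                   movie_embeddings: tf.Tensor) -> tf.Tensor:
  # Score every user against every movie in the batch: shape [batch, batch].
  scores = tf.linalg.matmul(user_embeddings, movie_embeddings, transpose_b=True)
  # The matching movie sits on the diagonal and is treated as the positive.
  labels = tf.eye(tf.shape(scores)[0])
  return tf.reduce_sum(
      tf.keras.losses.categorical_crossentropy(labels, scores, from_logits=True))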
Define the two models and the retrieval task.
# Define user and movie models.
user_model = tf.keras.Sequential([
    user_ids_vocabulary,
    tf.keras.layers.Embedding(user_ids_vocabulary.vocabulary_size(), 64)
])
movie_model = tf.keras.Sequential([
    movie_titles_vocabulary,
    tf.keras.layers.Embedding(movie_titles_vocabulary.vocabulary_size(), 64)
])

# Define your objectives.
task = tfrs.tasks.Retrieval(metrics=tfrs.metrics.FactorizedTopK(
    movies.batch(128).map(movie_model)
))
Fit and evaluate
Create the model, train it, and generate predictions:
# Create a retrieval model.
model = MovieLensModel(user_model, movie_model, task)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.5))
# Train for 3 epochs.
model.fit(ratings.batch(4096), epochs=3)
# Use brute-force search to set up retrieval using the trained representations.
index = tfrs.layers.factorized_top_k.BruteForce(model.user_model)
index.index_from_dataset(
    movies.batch(100).map(lambda title: (title, model.movie_model(title))))
# Get some recommendations.
_, titles = index(np.array(["42"]))
print(f"Top 3 recommendations for user 42: {titles[0, :3]}")
Epoch 1/3
25/25 [==============================] - 6s 194ms/step - factorized_top_k/top_1_categorical_accuracy: 3.0000e-05 - factorized_top_k/top_5_categorical_accuracy: 0.0016 - factorized_top_k/top_10_categorical_accuracy: 0.0052 - factorized_top_k/top_50_categorical_accuracy: 0.0442 - factorized_top_k/top_100_categorical_accuracy: 0.1010 - loss: 33092.9163 - regularization_loss: 0.0000e+00 - total_loss: 33092.9163
Epoch 2/3
25/25 [==============================] - 5s 194ms/step - factorized_top_k/top_1_categorical_accuracy: 1.7000e-04 - factorized_top_k/top_5_categorical_accuracy: 0.0052 - factorized_top_k/top_10_categorical_accuracy: 0.0148 - factorized_top_k/top_50_categorical_accuracy: 0.1054 - factorized_top_k/top_100_categorical_accuracy: 0.2114 - loss: 31008.8447 - regularization_loss: 0.0000e+00 - total_loss: 31008.8447
Epoch 3/3
25/25 [==============================] - 5s 193ms/step - factorized_top_k/top_1_categorical_accuracy: 3.4000e-04 - factorized_top_k/top_5_categorical_accuracy: 0.0086 - factorized_top_k/top_10_categorical_accuracy: 0.0222 - factorized_top_k/top_50_categorical_accuracy: 0.1438 - factorized_top_k/top_100_categorical_accuracy: 0.2694 - loss: 30417.8776 - regularization_loss: 0.0000e+00 - total_loss: 30417.8776
Top 3 recommendations for user 42: [b'Rent-a-Kid (1995)' b'Just Cause (1995)' b'Land Before Time III: The Time of the Great Giving (1995) (V)']
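A common follow-up, not covered in this quickstart, is to export the trained index as a SavedModel so it can be served without the original training code. A minimal sketch, assuming a throwaway temporary directory for the export path:

import os
import tempfile

# Export the brute-force index (it has already been traced by the call above).
export_path = os.path.join(tempfile.mkdtemp(), "index")
tf.saved_model.save(index, export_path)

# Reload it and query it exactly like the in-memory index.
loaded = tf.saved_model.load(export_path)
_, titles = loaded(np.array(["42"]))
print(f"Top 3 recommendations for user 42: {titles[0, :3]}")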