This tutorial provides an example of loading data from NumPy arrays into a tf.data.Dataset.

This example loads the MNIST dataset from a .npz file. However, the source of the NumPy arrays is not important; any in-memory array can be used in the same way (see the sketch after the Setup section below).
Setup
import numpy as np
import tensorflow as tf
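As noted above, the source of the NumPy arrays does not matter. As a minimal sketch (not part of the original notebook; it uses small synthetic arrays and the imports from the Setup cell), any in-memory array can be turned into a tf.data.Dataset with tf.data.Dataset.from_tensor_slices:

# Illustrative only: synthetic features and labels standing in for "any" NumPy data.
features = np.random.rand(8, 4).astype(np.float32)   # 8 examples, 4 features each
labels = np.random.randint(0, 2, size=(8,))          # 8 integer labels

toy_dataset = tf.data.Dataset.from_tensor_slices((features, labels))
for x, y in toy_dataset.take(2):
  print(x.numpy(), y.numpy())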
Load from a .npz file
DATA_URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'
path = tf.keras.utils.get_file('mnist.npz', DATA_URL)
with np.load(path) as data:
  train_examples = data['x_train']
  train_labels = data['y_train']
  test_examples = data['x_test']
  test_labels = data['y_test']
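If you want to confirm what was loaded, a quick sanity check on the array shapes and dtypes can help. This snippet is an illustrative addition; the values in the comments assume the standard Keras MNIST .npz layout:

print(train_examples.shape, train_examples.dtype)  # expected: (60000, 28, 28) uint8
print(train_labels.shape, train_labels.dtype)      # expected: (60000,) uint8
print(test_examples.shape, test_examples.dtype)    # expected: (10000, 28, 28) uint8
print(test_labels.shape, test_labels.dtype)        # expected: (10000,) uint8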
Load NumPy arrays with tf.data.Dataset

Assuming you have an array of examples and a corresponding array of labels, pass the two arrays as a tuple into tf.data.Dataset.from_tensor_slices to create a tf.data.Dataset.
train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))
test_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))
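You can verify the structure of the resulting dataset through its element_spec; each element is an (image, label) pair. The shapes and dtypes in the comment below assume the standard MNIST arrays loaded above:

print(train_dataset.element_spec)
# Expected (assuming the standard MNIST arrays):
# (TensorSpec(shape=(28, 28), dtype=tf.uint8, name=None),
#  TensorSpec(shape=(), dtype=tf.uint8, name=None))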
Use the datasets
Shuffle and batch the datasets
BATCH_SIZE = 64
SHUFFLE_BUFFER_SIZE = 100
train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)
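To see the effect of batching, you can pull a single batch and inspect its shape. This is an illustrative addition, assuming BATCH_SIZE = 64 as set above:

# Take one batch to confirm the batched shapes.
for images, labels in train_dataset.take(1):
  print(images.shape)  # (64, 28, 28): one batch of BATCH_SIZE images
  print(labels.shape)  # (64,)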
Build and train a model
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10)
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['sparse_categorical_accuracy'])
model.fit(train_dataset, epochs=10)
Epoch 1/10
938/938 [==============================] - 3s 2ms/step - loss: 3.4062 - sparse_categorical_accuracy: 0.8822
Epoch 2/10
938/938 [==============================] - 2s 2ms/step - loss: 0.5470 - sparse_categorical_accuracy: 0.9261
Epoch 3/10
938/938 [==============================] - 2s 2ms/step - loss: 0.3775 - sparse_categorical_accuracy: 0.9453
Epoch 4/10
938/938 [==============================] - 2s 2ms/step - loss: 0.3061 - sparse_categorical_accuracy: 0.9548
Epoch 5/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2673 - sparse_categorical_accuracy: 0.9608
Epoch 6/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2428 - sparse_categorical_accuracy: 0.9658
Epoch 7/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2212 - sparse_categorical_accuracy: 0.9678
Epoch 8/10
938/938 [==============================] - 2s 2ms/step - loss: 0.1993 - sparse_categorical_accuracy: 0.9721
Epoch 9/10
938/938 [==============================] - 2s 2ms/step - loss: 0.1884 - sparse_categorical_accuracy: 0.9740
Epoch 10/10
938/938 [==============================] - 2s 2ms/step - loss: 0.1728 - sparse_categorical_accuracy: 0.9757
<keras.src.callbacks.History at 0x7fd6d0573220>
model.evaluate(test_dataset)
157/157 [==============================] - 0s 2ms/step - loss: 0.6032 - sparse_categorical_accuracy: 0.9577
[0.6031830310821533, 0.9577000141143799]
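Because the final Dense(10) layer returns raw logits (which is why the loss uses from_logits=True), apply a softmax if you want class probabilities at prediction time. A minimal sketch, added here for illustration:

logits = model.predict(test_dataset)         # shape (10000, 10): one row of logits per test image
probs = tf.nn.softmax(logits, axis=-1)       # convert logits to class probabilities
pred_classes = tf.argmax(probs, axis=-1)
print(pred_classes[:10].numpy())             # predicted digit for the first ten test images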