
Classify structured data using Keras preprocessing layers


This tutorial demonstrates how to classify structured data, such as tabular data in a CSV. You will use Keras to define the model, and preprocessing layers as a bridge to map from columns in the CSV to features used to train the model. This tutorial contains complete code to:

  • Load a CSV file using Pandas.
  • Build an input pipeline to batch and shuffle the rows using tf.data.
  • Map from columns in the CSV to features used to train the model using Keras preprocessing layers.
  • Build, train, and evaluate a model using Keras.

Note: This tutorial is similar to Classify structured data with feature columns. This version uses the new experimental Keras preprocessing layers instead of tf.feature_column. Keras preprocessing layers are more intuitive, and can be easily included inside your model to simplify deployment.

The dataset

You will use a simplified version of the PetFinder dataset. There are several thousand rows in the CSV. Each row describes a pet, and each column describes an attribute. You will use this information to predict whether the pet will be adopted.

Following is a description of this dataset. Notice there are both numeric and categorical columns. There is also a free-text column which you will not use in this tutorial.

Column         Description                          Feature type    Data type
Type           Type of animal (Dog, Cat)            Categorical     string
Age            Age of the pet                       Numerical       integer
Breed1         Primary breed of the pet             Categorical     string
Color1         Color 1 of pet                       Categorical     string
Color2         Color 2 of pet                       Categorical     string
MaturitySize   Size at maturity                     Categorical     string
FurLength      Fur length                           Categorical     string
Vaccinated     Pet has been vaccinated              Categorical     string
Sterilized     Pet has been sterilized              Categorical     string
Health         Health condition                     Categorical     string
Fee            Adoption fee                         Numerical       integer
Description    Profile write-up for this pet        Text            string
PhotoAmt       Total uploaded photos for this pet   Numerical       integer
AdoptionSpeed  Speed of adoption                    Classification  integer

Import TensorFlow and other libraries

pip install -q scikit-learn
import numpy as np
import pandas as pd
import tensorflow as tf

from sklearn.model_selection import train_test_split
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
tf.__version__
'2.6.0'

Use Pandas to create a dataframe

Pandas is a Python library with many helpful utilities for loading and working with structured data. You will use Pandas to download the dataset from a URL, and load it into a dataframe.

import pathlib

dataset_url = 'http://storage.googleapis.com/download.tensorflow.org/data/petfinder-mini.zip'
csv_file = 'datasets/petfinder-mini/petfinder-mini.csv'

tf.keras.utils.get_file('petfinder_mini.zip', dataset_url,
                        extract=True, cache_dir='.')
dataframe = pd.read_csv(csv_file)
dataframe.head()

Create a target variable

The task in the Kaggle competition is to predict the speed at which a pet will be adopted (e.g., in the first week, the first month, the first three months, and so on). Let's simplify this for the tutorial. Here, you will transform it into a binary classification problem, and simply predict whether the pet was adopted, or not.

After modifying the label column, 0 will indicate the pet was not adopted, and 1 will indicate it was.

# In the original dataset "4" indicates the pet was not adopted.
dataframe['target'] = np.where(dataframe['AdoptionSpeed']==4, 0, 1)

# Drop un-used columns.
dataframe = dataframe.drop(columns=['AdoptionSpeed', 'Description'])

Split the dataframe into train, validation, and test sets

The dataset you downloaded was a single CSV file. You will split this into train, validation, and test sets.

train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
7383 train examples
1846 validation examples
2308 test examples

Create an input pipeline using tf.data

Next, you will wrap the dataframes with tf.data, in order to shuffle and batch the data. If you were working with a very large CSV file (so large that it does not fit into memory), you would use tf.data to read it from disk directly. That is not covered in this tutorial.
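Though not covered here, reading from disk can be sketched with tf.data's CSV utilities. The following is a minimal, illustrative example (the tiny CSV it writes is made up, not the PetFinder data), showing how tf.data.experimental.make_csv_dataset streams batches of rows straight from a file:

```python
import csv
import os
import tempfile

import tensorflow as tf

# Write a tiny illustrative CSV to a temporary directory.
path = os.path.join(tempfile.mkdtemp(), 'pets.csv')
with open(path, 'w', newline='') as f:
    w = csv.writer(f)
    w.writerow(['Age', 'Fee', 'target'])
    w.writerows([[1, 100, 1], [3, 0, 0], [12, 50, 1], [2, 0, 1]])

# make_csv_dataset batches rows as it reads them from disk, and splits
# off the label column for you.
ds = tf.data.experimental.make_csv_dataset(
    path, batch_size=2, label_name='target', num_epochs=1, shuffle=False)

features, labels = next(iter(ds))
print(sorted(features.keys()))  # ['Age', 'Fee']
```

Each element is a (dict of feature tensors, label tensor) pair, the same structure the df_to_dataset helper below produces from a dataframe.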

# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
  dataframe = dataframe.copy()
  labels = dataframe.pop('target')
  ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
  if shuffle:
    ds = ds.shuffle(buffer_size=len(dataframe))
  ds = ds.batch(batch_size)
  ds = ds.prefetch(batch_size)
  return ds

Now that you have created the input pipeline, let's call it to see the format of the data it returns. You have used a small batch size to keep the output readable.

batch_size = 5
train_ds = df_to_dataset(train, batch_size=batch_size)
[(train_features, label_batch)] = train_ds.take(1)
print('Every feature:', list(train_features.keys()))
print('A batch of ages:', train_features['Age'])
print('A batch of targets:', label_batch )
Every feature: ['Type', 'Age', 'Breed1', 'Gender', 'Color1', 'Color2', 'MaturitySize', 'FurLength', 'Vaccinated', 'Sterilized', 'Health', 'Fee', 'PhotoAmt']
A batch of ages: tf.Tensor([ 1  2  2 12  4], shape=(5,), dtype=int64)
A batch of targets: tf.Tensor([1 1 1 1 1], shape=(5,), dtype=int64)

You can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe.

Demonstrate the use of preprocessing layers

The Keras preprocessing layers API allows you to build Keras-native input processing pipelines. You will use 3 preprocessing layers to demonstrate the feature preprocessing code.

You can find a list of available preprocessing layers here.

Numeric columns

For each numeric feature, you will use a Normalization() layer to make sure the mean of each feature is 0 and its standard deviation is 1.

The get_normalization_layer function returns a layer which applies feature-wise normalization to numerical features.

def get_normalization_layer(name, dataset):
  # Create a Normalization layer for our feature.
  normalizer = preprocessing.Normalization(axis=None)

  # Prepare a Dataset that only yields our feature.
  feature_ds = dataset.map(lambda x, y: x[name])

  # Learn the statistics of the data.
  normalizer.adapt(feature_ds)

  return normalizer
photo_count_col = train_features['PhotoAmt']
layer = get_normalization_layer('PhotoAmt', train_ds)
layer(photo_count_col)
<tf.Tensor: shape=(5,), dtype=float32, numpy=
array([ 0.113803  , -0.81942016, -0.19727139, -0.19727139,  0.113803  ],
      dtype=float32)>

Note: If you have many numeric features (hundreds, or more), it is more efficient to concatenate them first and use a single normalization layer.
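As a minimal sketch of this tip (the stacked array below is made-up data, and it uses the non-experimental tf.keras.layers.Normalization alias available from TF 2.6 onward), adapting one layer over all numeric columns at once gives per-column statistics:

```python
import numpy as np
import tensorflow as tf

# Two illustrative numeric columns stacked into one (rows, features) array,
# standing in for hundreds of concatenated numeric features.
data = np.array([[2., 100.], [5., 0.], [8., 50.]], dtype='float32')

# axis=-1 keeps a separate mean/variance per column (the last axis).
normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(data)

out = normalizer(data)
print(out.shape)  # (3, 2)
```

One adapt() pass over the stacked array replaces hundreds of per-feature layers, and each output column still ends up with mean 0 and standard deviation 1.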

Categorical columns

In this dataset, Type is represented as a string (e.g. 'Dog', or 'Cat'). You cannot feed strings directly to a model. The preprocessing layer takes care of representing strings as a one-hot vector.

The get_category_encoding_layer function returns a layer which maps values from a vocabulary to integer indices, and one-hot encodes the features.

def get_category_encoding_layer(name, dataset, dtype, max_tokens=None):
  # Create a StringLookup layer which will turn strings into integer indices
  if dtype == 'string':
    index = preprocessing.StringLookup(max_tokens=max_tokens)
  else:
    index = preprocessing.IntegerLookup(max_tokens=max_tokens)

  # Prepare a Dataset that only yields our feature
  feature_ds = dataset.map(lambda x, y: x[name])

  # Learn the set of possible values and assign them a fixed integer index.
  index.adapt(feature_ds)

  # Create a CategoryEncoding layer to one-hot encode the integer indices.
  encoder = preprocessing.CategoryEncoding(num_tokens=index.vocabulary_size())

  # The lambda below captures both layers, so you can apply them together
  # here, or include them in the Keras Functional model later.
  return lambda feature: encoder(index(feature))
type_col = train_features['Type']
layer = get_category_encoding_layer('Type', train_ds, 'string')
layer(type_col)
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([0., 1., 1.], dtype=float32)>

Often, you don't want to feed a number directly into the model, but instead use a one-hot encoding of that input. Consider the raw data that represents a pet's age.

age_col = train_features['Age']
category_encoding_layer = get_category_encoding_layer('Age', train_ds,
                                                      'int64', 5)
category_encoding_layer(age_col)
<tf.Tensor: shape=(5,), dtype=float32, numpy=array([1., 1., 0., 1., 1.], dtype=float32)>

Choose which columns to use

You have seen how to use several types of preprocessing layers. Now you will use them to train a model. You will be using the Keras Functional API to build the model. The Keras Functional API is a way to create models that are more flexible than the tf.keras.Sequential API.

The goal of this tutorial is to show you the complete code (i.e. the mechanics) needed to work with preprocessing layers. A few columns have been selected arbitrarily to train the model.

Key point: If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.

Earlier, you used a small batch size to demonstrate the input pipeline. Let's now create a new input pipeline with a larger batch size.

batch_size = 256
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
all_inputs = []
encoded_features = []

# Numeric features.
for header in ['PhotoAmt', 'Fee']:
  numeric_col = tf.keras.Input(shape=(1,), name=header)
  normalization_layer = get_normalization_layer(header, train_ds)
  encoded_numeric_col = normalization_layer(numeric_col)
  all_inputs.append(numeric_col)
  encoded_features.append(encoded_numeric_col)
# Categorical features encoded as integers.
age_col = tf.keras.Input(shape=(1,), name='Age', dtype='int64')
encoding_layer = get_category_encoding_layer('Age', train_ds, dtype='int64',
                                             max_tokens=5)
encoded_age_col = encoding_layer(age_col)
all_inputs.append(age_col)
encoded_features.append(encoded_age_col)
# Categorical features encoded as string.
categorical_cols = ['Type', 'Color1', 'Color2', 'Gender', 'MaturitySize',
                    'FurLength', 'Vaccinated', 'Sterilized', 'Health', 'Breed1']
for header in categorical_cols:
  categorical_col = tf.keras.Input(shape=(1,), name=header, dtype='string')
  encoding_layer = get_category_encoding_layer(header, train_ds, dtype='string',
                                               max_tokens=5)
  encoded_categorical_col = encoding_layer(categorical_col)
  all_inputs.append(categorical_col)
  encoded_features.append(encoded_categorical_col)

Create, compile, and train the model

Next, you can create the end-to-end model.

all_features = tf.keras.layers.concatenate(encoded_features)
x = tf.keras.layers.Dense(32, activation="relu")(all_features)
x = tf.keras.layers.Dropout(0.5)(x)
output = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(all_inputs, output)
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=["accuracy"])

Let's visualize the connectivity graph:

# rankdir='LR' is used to make the graph horizontal.
tf.keras.utils.plot_model(model, show_shapes=True, rankdir="LR")


Train the model.

model.fit(train_ds, epochs=10, validation_data=val_ds)
Epoch 1/10
29/29 [==============================] - 2s 25ms/step - loss: 0.6462 - accuracy: 0.5074 - val_loss: 0.5785 - val_accuracy: 0.7172
Epoch 2/10
29/29 [==============================] - 0s 10ms/step - loss: 0.5870 - accuracy: 0.6706 - val_loss: 0.5536 - val_accuracy: 0.7313
Epoch 3/10
29/29 [==============================] - 0s 9ms/step - loss: 0.5641 - accuracy: 0.6879 - val_loss: 0.5390 - val_accuracy: 0.7427
Epoch 4/10
29/29 [==============================] - 0s 9ms/step - loss: 0.5513 - accuracy: 0.7047 - val_loss: 0.5283 - val_accuracy: 0.7508
Epoch 5/10
29/29 [==============================] - 0s 9ms/step - loss: 0.5435 - accuracy: 0.7004 - val_loss: 0.5216 - val_accuracy: 0.7443
Epoch 6/10
29/29 [==============================] - 0s 9ms/step - loss: 0.5370 - accuracy: 0.7093 - val_loss: 0.5165 - val_accuracy: 0.7459
Epoch 7/10
29/29 [==============================] - 0s 9ms/step - loss: 0.5318 - accuracy: 0.7134 - val_loss: 0.5125 - val_accuracy: 0.7503
Epoch 8/10
29/29 [==============================] - 0s 9ms/step - loss: 0.5284 - accuracy: 0.7139 - val_loss: 0.5095 - val_accuracy: 0.7486
Epoch 9/10
29/29 [==============================] - 0s 10ms/step - loss: 0.5275 - accuracy: 0.7187 - val_loss: 0.5072 - val_accuracy: 0.7492
Epoch 10/10
29/29 [==============================] - 0s 9ms/step - loss: 0.5258 - accuracy: 0.7200 - val_loss: 0.5047 - val_accuracy: 0.7519
<keras.callbacks.History at 0x7f00054d9f90>
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
10/10 [==============================] - 0s 6ms/step - loss: 0.5358 - accuracy: 0.7314
Accuracy 0.731369137763977

Inference on new data

Key point: The model you have developed can now classify a row from a CSV file directly, because the preprocessing code is included inside the model itself.

You can now save and reload the Keras model. Follow the tutorial here for more information on TensorFlow models.

model.save('my_pet_classifier')
reloaded_model = tf.keras.models.load_model('my_pet_classifier')
WARNING:absl:Function `_wrapped_model` contains input name(s) PhotoAmt, Fee, Age, Type, Color1, Color2, Gender, MaturitySize, FurLength, Vaccinated, Sterilized, Health, Breed1 with unsupported characters which will be renamed to photoamt, fee, age, type, color1, color2, gender, maturitysize, furlength, vaccinated, sterilized, health, breed1 in the SavedModel.
INFO:tensorflow:Assets written to: my_pet_classifier/assets

To get a prediction for a new sample, you can simply call model.predict(). There are just two things you need to do:

  1. Wrap scalars into a list so as to have a batch dimension (models only process batches of data, not single samples)
  2. Call convert_to_tensor on each feature
sample = {
    'Type': 'Cat',
    'Age': 3,
    'Breed1': 'Tabby',
    'Gender': 'Male',
    'Color1': 'Black',
    'Color2': 'White',
    'MaturitySize': 'Small',
    'FurLength': 'Short',
    'Vaccinated': 'No',
    'Sterilized': 'No',
    'Health': 'Healthy',
    'Fee': 100,
    'PhotoAmt': 2,
}

input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
predictions = reloaded_model.predict(input_dict)
prob = tf.nn.sigmoid(predictions[0])

print(
    "This particular pet had a %.1f percent probability "
    "of getting adopted." % (100 * prob)
)
This particular pet had a 76.4 percent probability of getting adopted.

Key point: You will typically see the best results with deep learning on much larger and more complex datasets. When working with a small dataset like this one, we recommend using a decision tree or random forest as a strong baseline. The goal of this tutorial is to demonstrate the mechanics of working with structured data, so you have code to use as a starting point when working with your own datasets in the future.
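For reference, such a baseline takes only a few lines with scikit-learn. This is a hedged sketch on synthetic data (the features and target below are made up, standing in for a real tabular dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a small tabular dataset (illustrative only).
rng = np.random.RandomState(0)
X = rng.rand(200, 4)                       # four numeric features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # binary target

# Fit a random forest and report held-out accuracy as the baseline.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print('Test accuracy: %.2f' % clf.score(X_test, y_test))
```

A forest needs no normalization or one-hot encoding of ordinal integers, which is part of why it makes a strong, low-effort baseline on small tabular data.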

Next steps

The best way to learn more about classifying structured data is to try it yourself. You may want to find another dataset to work with, and train a model to classify it using code similar to the above. To improve accuracy, think carefully about which features to include in your model, and how they should be represented.