A Component-by-Component Introduction to TensorFlow Extended (TFX)
This Colab-based tutorial will interactively walk through each built-in component of TensorFlow Extended (TFX).
It covers every step in an end-to-end machine learning pipeline, from data ingestion to pushing a model to serving.
When you're done, the contents of this notebook can be automatically exported as TFX pipeline source code, which you can orchestrate with Apache Airflow and Apache Beam.
Background
This notebook demonstrates how to use TFX in a Jupyter/Colab environment. Here, we walk through the Chicago Taxi example in an interactive notebook.
Working in an interactive notebook is a useful way to become familiar with the structure of a TFX pipeline. It's also useful as a lightweight development environment when developing your own pipelines, but you should be aware that there are differences in the way interactive notebooks are orchestrated and in how they access metadata artifacts.
Orchestration
In a production deployment of TFX, you will use an orchestrator such as Apache Airflow, Kubeflow Pipelines, or Apache Beam to orchestrate a pre-defined pipeline graph of TFX components. In an interactive notebook, the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells.
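For comparison, here is a minimal sketch of how the same components would be handed to an orchestrator outside of a notebook. It is not run here: it assumes the imports and components defined later in this notebook, and the pipeline name and paths are illustrative placeholders.

# A sketch only -- not run in this notebook.
pipeline = tfx.dsl.Pipeline(
    pipeline_name='taxi_pipeline',            # illustrative name
    pipeline_root='/tmp/taxi_pipeline_root',  # illustrative path
    components=[example_gen, statistics_gen, schema_gen],
    metadata_connection_config=(
        tfx.orchestration.metadata.sqlite_metadata_connection_config(
            '/tmp/taxi_metadata.sqlite')))    # illustrative path
tfx.orchestration.LocalDagRunner().run(pipeline)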
Metadata
In a production deployment of TFX, you will access metadata through the ML Metadata (MLMD) API. MLMD stores metadata properties in a database such as MySQL or SQLite, and stores the metadata payloads in a persistent store such as your filesystem. In an interactive notebook, both properties and payloads are stored in an ephemeral SQLite database in the /tmp directory on the Jupyter notebook or Colab server.
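As a sketch of what direct metadata access looks like, the MLMD Python client can open such a SQLite file. The path below is an illustrative placeholder; the real one is printed when the InteractiveContext is created later.

# A sketch using the ml_metadata package that ships with TFX.
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = '/tmp/metadata.sqlite'  # placeholder path
connection_config.sqlite.connection_mode = 3  # READWRITE_OPENCREATE
store = metadata_store.MetadataStore(connection_config)
print(store.get_artifact_types())  # artifact types recorded so far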
Setup
First, we install and import the necessary packages, set up paths, and download data.
Upgrade Pip
To avoid upgrading Pip in a system when running locally, check to make sure that we're running in Colab. Local systems can of course be upgraded separately.
try:
  import colab
  !pip install --upgrade pip
except:
  pass
Install TFX
pip install -U tfx
Did you restart the runtime?
If you are using Google Colab, the first time that you run the cell above you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages.
Import packages
We import the necessary packages, including the standard TFX component classes.
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
from tfx import v1 as tfx
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
Let's check the library versions.
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.__version__))
TensorFlow version: 2.7.0
TFX version: 1.5.0
Set up pipeline paths
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
# This is the directory containing the TFX Chicago Taxi Pipeline example.
_taxi_root = os.path.join(_tfx_root, 'examples/chicago_taxi_pipeline')
# This is the path where your model will be pushed for serving.
_serving_model_dir = os.path.join(
tempfile.mkdtemp(), 'serving_model/taxi_simple')
# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)
Download example data
We download the example dataset for use in our TFX pipeline.
The dataset we're using is the Taxi Trips dataset released by the City of Chicago. The columns in this dataset are:
pickup_community_area | fare | trip_start_month |
trip_start_hour | trip_start_day | trip_start_timestamp |
pickup_latitude | pickup_longitude | dropoff_latitude |
dropoff_longitude | trip_miles | pickup_census_tract |
dropoff_census_tract | payment_type | company |
trip_seconds | dropoff_community_area | tips |
With this dataset, we will build a model that predicts the tips of a trip.
_data_root = tempfile.mkdtemp(prefix='tfx-data')
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/data/simple/data.csv'
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
('/tmp/tfx-datacz9xjro6/data.csv', <http.client.HTTPMessage at 0x7f889af49250>)
Take a quick look at the CSV file.
!head {_data_filepath}
pickup_community_area,fare,trip_start_month,trip_start_hour,trip_start_day,trip_start_timestamp,pickup_latitude,pickup_longitude,dropoff_latitude,dropoff_longitude,trip_miles,pickup_census_tract,dropoff_census_tract,payment_type,company,trip_seconds,dropoff_community_area,tips
,12.45,5,19,6,1400269500,,,,,0.0,,,Credit Card,Chicago Elite Cab Corp. (Chicago Carriag,0,,0.0
,0,3,19,5,1362683700,,,,,0,,,Unknown,Chicago Elite Cab Corp.,300,,0
60,27.05,10,2,3,1380593700,41.836150155,-87.648787952,,,12.6,,,Cash,Taxi Affiliation Services,1380,,0.0
10,5.85,10,1,2,1382319000,41.985015101,-87.804532006,,,0.0,,,Cash,Taxi Affiliation Services,180,,0.0
14,16.65,5,7,5,1369897200,41.968069,-87.721559063,,,0.0,,,Cash,Dispatch Taxi Affiliation,1080,,0.0
13,16.45,11,12,3,1446554700,41.983636307,-87.723583185,,,6.9,,,Cash,,780,,0.0
16,32.05,12,1,1,1417916700,41.953582125,-87.72345239,,,15.4,,,Cash,,1200,,0.0
30,38.45,10,10,5,1444301100,41.839086906,-87.714003807,,,14.6,,,Cash,,2580,,0.0
11,14.65,1,1,3,1358213400,41.978829526,-87.771166703,,,5.81,,,Cash,,1080,,0.0
Disclaimer: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time. It is understood that the data provided at this site is being used at one's own risk.
Create the InteractiveContext
Finally, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.
# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext()
WARNING:absl:InteractiveContext pipeline_root argument not provided: using temporary directory /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq as root for pipeline outputs. WARNING:absl:InteractiveContext metadata_connection_config not provided: using SQLite ML Metadata database at /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/metadata.sqlite.
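If you would rather keep the outputs and metadata in fixed locations, here is a sketch (not run here) of passing the optional properties explicitly; both paths are illustrative placeholders.

# A sketch only -- this notebook uses the temporary-directory context above.
context = InteractiveContext(
    pipeline_root='/tmp/my_pipeline_root',  # illustrative path
    metadata_connection_config=(
        tfx.orchestration.metadata.sqlite_metadata_connection_config(
            '/tmp/my_metadata.sqlite')))    # illustrative path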
Run TFX components interactively
In the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts.
ExampleGen
The ExampleGen component is usually at the start of a TFX pipeline. It will:
- Split data into training and evaluation sets (by default, 2/3 training + 1/3 eval; a sketch of customizing this split follows the first run below)
- Convert data into the tf.Example format (learn more here)
- Copy data into the _tfx_root directory for other components to access
ExampleGen takes as input the path to your data source. In our case, this is the _data_root path that contains the downloaded CSV.
example_gen = tfx.components.CsvExampleGen(input_base=_data_root)
context.run(example_gen)
INFO:absl:Running driver for CsvExampleGen INFO:absl:MetadataStore with DB connection initialized INFO:absl:select span and version = (0, None) INFO:absl:latest span and version = (0, None) INFO:absl:Running executor for CsvExampleGen INFO:absl:Generating examples. WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features. INFO:absl:Processing input csv data /tmp/tfx-datacz9xjro6/* to TFExample. WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter. WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be. INFO:absl:Examples generated. INFO:absl:Running publisher for CsvExampleGen INFO:absl:MetadataStore with DB connection initialized
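CsvExampleGen used the default 2/3 train, 1/3 eval split above. If you need a different ratio, here is a sketch (not run in this notebook) of overriding it via output_config; the 3:1 hash-bucket split shown is just an example.

# A sketch of a custom train/eval split via output_config.
output_config = tfx.proto.Output(
    split_config=tfx.proto.SplitConfig(splits=[
        tfx.proto.SplitConfig.Split(name='train', hash_buckets=3),
        tfx.proto.SplitConfig.Split(name='eval', hash_buckets=1)
    ]))
example_gen_custom = tfx.components.CsvExampleGen(
    input_base=_data_root, output_config=output_config)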
Let's examine the output artifacts of ExampleGen. This component produces two artifacts, training examples and evaluation examples:
artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
["train", "eval"] /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/CsvExampleGen/examples/1
We can also take a look at the first three training examples:
# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'Split-train')

# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
                      for name in os.listdir(train_uri)]

# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")

# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
  serialized_example = tfrecord.numpy()
  example = tf.train.Example()
  example.ParseFromString(serialized_example)
  pp.pprint(example)
features { feature { key: "company" value { bytes_list { value: "Chicago Elite Cab Corp. (Chicago Carriag" } } } feature { key: "dropoff_census_tract" value { int64_list { } } } feature { key: "dropoff_community_area" value { int64_list { } } } feature { key: "dropoff_latitude" value { float_list { } } } feature { key: "dropoff_longitude" value { float_list { } } } feature { key: "fare" value { float_list { value: 12.449999809265137 } } } feature { key: "payment_type" value { bytes_list { value: "Credit Card" } } } feature { key: "pickup_census_tract" value { int64_list { } } } feature { key: "pickup_community_area" value { int64_list { } } } feature { key: "pickup_latitude" value { float_list { } } } feature { key: "pickup_longitude" value { float_list { } } } feature { key: "tips" value { float_list { value: 0.0 } } } feature { key: "trip_miles" value { float_list { value: 0.0 } } } feature { key: "trip_seconds" value { int64_list { value: 0 } } } feature { key: "trip_start_day" value { int64_list { value: 6 } } } feature { key: "trip_start_hour" value { int64_list { value: 19 } } } feature { key: "trip_start_month" value { int64_list { value: 5 } } } feature { key: "trip_start_timestamp" value { int64_list { value: 1400269500 } } } } features { feature { key: "company" value { bytes_list { value: "Taxi Affiliation Services" } } } feature { key: "dropoff_census_tract" value { int64_list { } } } feature { key: "dropoff_community_area" value { int64_list { } } } feature { key: "dropoff_latitude" value { float_list { } } } feature { key: "dropoff_longitude" value { float_list { } } } feature { key: "fare" value { float_list { value: 27.049999237060547 } } } feature { key: "payment_type" value { bytes_list { value: "Cash" } } } feature { key: "pickup_census_tract" value { int64_list { } } } feature { key: "pickup_community_area" value { int64_list { value: 60 } } } feature { key: "pickup_latitude" value { float_list { value: 41.836151123046875 } } } feature { key: "pickup_longitude" value { float_list { value: -87.64878845214844 } } } feature { key: "tips" value { float_list { value: 0.0 } } } feature { key: "trip_miles" value { float_list { value: 12.600000381469727 } } } feature { key: "trip_seconds" value { int64_list { value: 1380 } } } feature { key: "trip_start_day" value { int64_list { value: 3 } } } feature { key: "trip_start_hour" value { int64_list { value: 2 } } } feature { key: "trip_start_month" value { int64_list { value: 10 } } } feature { key: "trip_start_timestamp" value { int64_list { value: 1380593700 } } } } features { feature { key: "company" value { bytes_list { } } } feature { key: "dropoff_census_tract" value { int64_list { } } } feature { key: "dropoff_community_area" value { int64_list { } } } feature { key: "dropoff_latitude" value { float_list { } } } feature { key: "dropoff_longitude" value { float_list { } } } feature { key: "fare" value { float_list { value: 16.450000762939453 } } } feature { key: "payment_type" value { bytes_list { value: "Cash" } } } feature { key: "pickup_census_tract" value { int64_list { } } } feature { key: "pickup_community_area" value { int64_list { value: 13 } } } feature { key: "pickup_latitude" value { float_list { value: 41.98363494873047 } } } feature { key: "pickup_longitude" value { float_list { value: -87.72357940673828 } } } feature { key: "tips" value { float_list { value: 0.0 } } } feature { key: "trip_miles" value { float_list { value: 6.900000095367432 } } } feature { key: "trip_seconds" value { int64_list { value: 780 } } 
} feature { key: "trip_start_day" value { int64_list { value: 3 } } } feature { key: "trip_start_hour" value { int64_list { value: 12 } } } feature { key: "trip_start_month" value { int64_list { value: 11 } } } feature { key: "trip_start_timestamp" value { int64_list { value: 1446554700 } } } }
Now that ExampleGen has finished ingesting the data, the next step is data analysis.
StatisticsGen
The StatisticsGen component computes statistics over your dataset for data analysis, as well as for use in downstream components. It uses the TensorFlow Data Validation library.
StatisticsGen takes as input the dataset we just ingested using ExampleGen.
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen)
INFO:absl:Excluding no splits because exclude_splits is not set. INFO:absl:Running driver for StatisticsGen INFO:absl:MetadataStore with DB connection initialized INFO:absl:Running executor for StatisticsGen INFO:absl:Generating statistics for split train. INFO:absl:Statistics for split train written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/StatisticsGen/statistics/2/Split-train. INFO:absl:Generating statistics for split eval. INFO:absl:Statistics for split eval written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/StatisticsGen/statistics/2/Split-eval. WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter. INFO:absl:Running publisher for StatisticsGen INFO:absl:MetadataStore with DB connection initialized
After StatisticsGen finishes running, we can visualize the outputted statistics. Try playing with the different plots!
context.show(statistics_gen.outputs['statistics'])
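Since StatisticsGen is backed by TensorFlow Data Validation, you can also compute and render comparable statistics with TFDV directly. A minimal sketch, which recomputes statistics from the raw CSV rather than reusing the StatisticsGen artifact:

# A sketch using the tensorflow_data_validation package that TFX depends on.
import tensorflow_data_validation as tfdv

raw_stats = tfdv.generate_statistics_from_csv(data_location=_data_filepath)
tfdv.visualize_statistics(raw_stats)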
SchemaGen
The SchemaGen component generates a schema based on your data statistics. (A schema defines the expected bounds, types, and properties of the features in your dataset.) It also uses the TensorFlow Data Validation library.
SchemaGen will take as input the statistics that we generated with StatisticsGen, looking at the training split by default.
schema_gen = tfx.components.SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen)
INFO:absl:Excluding no splits because exclude_splits is not set. INFO:absl:Running driver for SchemaGen INFO:absl:MetadataStore with DB connection initialized INFO:absl:Running executor for SchemaGen INFO:absl:Processing schema from statistics for split train. INFO:absl:Processing schema from statistics for split eval. INFO:absl:Schema written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/SchemaGen/schema/3/schema.pbtxt. INFO:absl:Running publisher for SchemaGen INFO:absl:MetadataStore with DB connection initialized
After SchemaGen finishes running, we can visualize the generated schema as a table.
context.show(schema_gen.outputs['schema'])
Each feature in your dataset shows up as a row in the schema table, alongside its properties. The schema also captures all the values that a categorical feature takes on, denoted as its domain.
To learn more about schemas, see the SchemaGen documentation.
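Under the hood, this is TFDV's schema inference. A sketch of the same operation done directly, reusing raw_stats from the TFDV sketch above:

# A sketch of inferring and displaying a schema with TFDV directly.
inferred_schema = tfdv.infer_schema(statistics=raw_stats)
tfdv.display_schema(inferred_schema)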
ExampleValidator
The ExampleValidator component detects anomalies in your data, based on the expectations defined by the schema. It also uses the TensorFlow Data Validation library.
ExampleValidator will take as input the statistics from StatisticsGen, and the schema from SchemaGen.
example_validator = tfx.components.ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_gen.outputs['schema'])
context.run(example_validator)
INFO:absl:Excluding no splits because exclude_splits is not set. INFO:absl:Running driver for ExampleValidator INFO:absl:MetadataStore with DB connection initialized INFO:absl:Running executor for ExampleValidator INFO:absl:Validating schema against the computed statistics for split train. INFO:absl:Validation complete for split train. Anomalies written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/ExampleValidator/anomalies/4/Split-train. INFO:absl:Validating schema against the computed statistics for split eval. INFO:absl:Validation complete for split eval. Anomalies written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/ExampleValidator/anomalies/4/Split-eval. INFO:absl:Running publisher for ExampleValidator INFO:absl:MetadataStore with DB connection initialized
After ExampleValidator finishes running, we can visualize the anomalies as a table.
context.show(example_validator.outputs['anomalies'])
In the anomalies table, we can see that there are no anomalies. This is what we'd expect, since this is the first dataset that we've analyzed and the schema is tailored to it. You should review this schema -- anything unexpected means an anomaly in the data. Once reviewed, the schema can be used to guard future data, and anomalies produced here can be used to debug model performance, understand how your data evolves over time, and identify data errors.
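As a sketch of what such schema review can look like with TFDV directly (reusing raw_stats and inferred_schema from the sketches above; the curation step shown is illustrative):

# Validate statistics against the schema and display any anomalies.
anomalies = tfdv.validate_statistics(statistics=raw_stats, schema=inferred_schema)
tfdv.display_anomalies(anomalies)

# Illustrative curation step: tolerate the 'company' feature being missing
# in up to 10% of examples.
company_feature = tfdv.get_feature(inferred_schema, 'company')
company_feature.presence.min_fraction = 0.9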
Transform
The Transform component performs feature engineering for both training and serving. It uses the TensorFlow Transform library.
Transform will take as input the data from ExampleGen, the schema from SchemaGen, as well as a module that contains user-defined Transform code.
Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, see the tutorial). First, we define a few constants for feature engineering:
_taxi_constants_module_file = 'taxi_constants.py'
%%writefile {_taxi_constants_module_file}
# Categorical features are assumed to each have a maximum value in the dataset.
MAX_CATEGORICAL_FEATURE_VALUES = [24, 31, 12]
CATEGORICAL_FEATURE_KEYS = [
'trip_start_hour', 'trip_start_day', 'trip_start_month',
'pickup_census_tract', 'dropoff_census_tract', 'pickup_community_area',
'dropoff_community_area'
]
DENSE_FLOAT_FEATURE_KEYS = ['trip_miles', 'fare', 'trip_seconds']
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10
BUCKET_FEATURE_KEYS = [
'pickup_latitude', 'pickup_longitude', 'dropoff_latitude',
'dropoff_longitude'
]
# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform
VOCAB_SIZE = 1000
# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.
OOV_SIZE = 10
VOCAB_FEATURE_KEYS = [
'payment_type',
'company',
]
# Keys
LABEL_KEY = 'tips'
FARE_KEY = 'fare'
Writing taxi_constants.py
Next, we write a preprocessing_fn that takes in raw data as input and returns transformed features that our model will train on:
_taxi_transform_module_file = 'taxi_transform.py'
%%writefile {_taxi_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
import taxi_constants
_DENSE_FLOAT_FEATURE_KEYS = taxi_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = taxi_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = taxi_constants.VOCAB_SIZE
_OOV_SIZE = taxi_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = taxi_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = taxi_constants.CATEGORICAL_FEATURE_KEYS
_FARE_KEY = taxi_constants.FARE_KEY
_LABEL_KEY = taxi_constants.LABEL_KEY
def preprocessing_fn(inputs):
  """tf.transform's callback function for preprocessing inputs.

  Args:
    inputs: map from feature keys to raw not-yet-transformed features.

  Returns:
    Map from string feature key to transformed feature operations.
  """
  outputs = {}
  for key in _DENSE_FLOAT_FEATURE_KEYS:
    # If sparse make it dense, setting nan's to 0 or '', and apply zscore.
    outputs[key] = tft.scale_to_z_score(
        _fill_in_missing(inputs[key]))

  for key in _VOCAB_FEATURE_KEYS:
    # Build a vocabulary for this feature.
    outputs[key] = tft.compute_and_apply_vocabulary(
        _fill_in_missing(inputs[key]),
        top_k=_VOCAB_SIZE,
        num_oov_buckets=_OOV_SIZE)

  for key in _BUCKET_FEATURE_KEYS:
    outputs[key] = tft.bucketize(
        _fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT)

  for key in _CATEGORICAL_FEATURE_KEYS:
    outputs[key] = _fill_in_missing(inputs[key])

  # Was this passenger a big tipper?
  taxi_fare = _fill_in_missing(inputs[_FARE_KEY])
  tips = _fill_in_missing(inputs[_LABEL_KEY])
  outputs[_LABEL_KEY] = tf.where(
      tf.math.is_nan(taxi_fare),
      tf.cast(tf.zeros_like(taxi_fare), tf.int64),
      # Test if the tip was > 20% of the fare.
      tf.cast(
          tf.greater(tips, tf.multiply(taxi_fare, tf.constant(0.2))), tf.int64))

  return outputs


def _fill_in_missing(x):
  """Replace missing values in a SparseTensor.

  Fills in missing values of `x` with '' or 0, and converts to a dense tensor.

  Args:
    x: A `SparseTensor` of rank 2.  Its dense shape should have size at most 1
      in the second dimension.

  Returns:
    A rank 1 tensor where missing values of `x` have been filled in.
  """
  if not isinstance(x, tf.sparse.SparseTensor):
    return x

  default_value = '' if x.dtype == tf.string else 0
  return tf.squeeze(
      tf.sparse.to_dense(
          tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
          default_value),
      axis=1)
Writing taxi_transform.py
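As a quick illustration of what _fill_in_missing does, here is a toy eager-mode check on a rank-2 SparseTensor with a missing row:

# Illustrative only: row 1 has no value, so it is filled with the string default ''.
sparse = tf.sparse.SparseTensor(
    indices=[[0, 0], [2, 0]], values=['a', 'b'], dense_shape=[3, 1])
dense = tf.squeeze(tf.sparse.to_dense(sparse, default_value=''), axis=1)
print(dense.numpy())  # [b'a' b'' b'b']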
Now, we pass in this feature engineering code to the Transform component and run it to transform your data.
transform = tfx.components.Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath(_taxi_transform_module_file))
context.run(transform)
INFO:absl:Generating ephemeral wheel package for '/tmpfs/src/temp/docs/tutorials/tfx/taxi_transform.py' (including modules: ['taxi_transform', 'taxi_constants']). INFO:absl:User module package has hash fingerprint version f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424. INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '/tmp/tmp9qnpryw9/_tfx_generated_setup.py', 'bdist_wheel', '--bdist-dir', '/tmp/tmppaskl3va', '--dist-dir', '/tmp/tmpr6oorqji'] /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. setuptools.SetuptoolsDeprecationWarning, INFO:absl:Successfully built user code wheel distribution at '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'; target user module is 'taxi_transform'. INFO:absl:Full user module path is 'taxi_transform@/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl' INFO:absl:Running driver for Transform INFO:absl:MetadataStore with DB connection initialized INFO:absl:Running executor for Transform INFO:absl:Analyze the 'train' split and transform all splits when splits_config is not set. INFO:absl:udf_utils.get_fn {'module_file': None, 'module_path': 'taxi_transform@/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl', 'preprocessing_fn': None} 'preprocessing_fn' INFO:absl:Installing '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl' to a temporary directory. INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '-m', 'pip', 'install', '--target', '/tmp/tmpbvbj9r5b', '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'] running bdist_wheel running build running build_py creating build creating build/lib copying taxi_transform.py -> build/lib copying taxi_constants.py -> build/lib running install running install_lib running install_egg_info running egg_info creating tfx_user_code_Transform.egg-info writing manifest file 'tfx_user_code_Transform.egg-info/SOURCES.txt' writing manifest file 'tfx_user_code_Transform.egg-info/SOURCES.txt' Copying tfx_user_code_Transform.egg-info to /tmp/tmppaskl3va/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3.7.egg-info running install_scripts Processing /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl INFO:absl:Successfully installed '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'. 
INFO:absl:udf_utils.get_fn {'module_file': None, 'module_path': 'taxi_transform@/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl', 'stats_options_updater_fn': None} 'stats_options_updater_fn' INFO:absl:Installing '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl' to a temporary directory. INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '-m', 'pip', 'install', '--target', '/tmp/tmpbzwdie1a', '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'] Installing collected packages: tfx-user-code-Transform Successfully installed tfx-user-code-Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424 Processing /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl INFO:absl:Successfully installed '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'. INFO:absl:Installing '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl' to a temporary directory. INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '-m', 'pip', 'install', '--target', '/tmp/tmp09euava5', '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'] Installing collected packages: tfx-user-code-Transform Successfully installed tfx-user-code-Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424 Processing /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl INFO:absl:Successfully installed '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424-py3-none-any.whl'. INFO:absl:Feature company has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_census_tract has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_community_area has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_latitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_longitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature fare has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature payment_type has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_census_tract has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_community_area has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_latitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_longitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature tips has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_miles has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_seconds has no shape. 
Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_day has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_hour has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_month has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_timestamp has no shape. Setting to VarLenSparseTensor. Installing collected packages: tfx-user-code-Transform Successfully installed tfx-user-code-Transform-0.0+f78e5f6b4988b5d5289aab277eceaff03bd38343154c2f602e06d95c6acd5424 INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow_transform/tf_utils.py:289: Tensor.experimental_ref (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Use ref() instead. INFO:absl:Feature company has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_census_tract has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_community_area has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_latitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_longitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature fare has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature payment_type has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_census_tract has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_community_area has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_latitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_longitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature tips has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_miles has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_seconds has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_day has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_hour has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_month has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_timestamp has no shape. Setting to VarLenSparseTensor. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. 
INFO:absl:Feature company has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_census_tract has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_community_area has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_latitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_longitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature fare has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature payment_type has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_census_tract has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_community_area has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_latitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_longitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature tips has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_miles has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_seconds has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_day has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_hour has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_month has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_timestamp has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature company has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_census_tract has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_community_area has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_latitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_longitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature fare has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature payment_type has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_census_tract has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_community_area has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_latitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_longitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature tips has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_miles has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_seconds has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_day has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_hour has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_month has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_timestamp has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature company has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_census_tract has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_community_area has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_latitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_longitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature fare has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature payment_type has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_census_tract has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_community_area has no shape. Setting to VarLenSparseTensor. 
INFO:absl:Feature pickup_latitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_longitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature tips has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_miles has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_seconds has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_day has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_hour has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_month has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_timestamp has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature company has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_census_tract has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_community_area has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_latitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_longitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature fare has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature payment_type has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_census_tract has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_community_area has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_latitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_longitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature tips has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_miles has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_seconds has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_day has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_hour has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_month has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_timestamp has no shape. Setting to VarLenSparseTensor. WARNING:root:This output type hint will be ignored and not used for type-checking purposes. Typically, output type hints for a PTransform are single (or nested) types wrapped by a PCollection, PDone, or None. Got: Tuple[Dict[str, Union[NoneType, _Dataset]], Union[Dict[str, Dict[str, PCollection]], NoneType], int] instead. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. WARNING:absl:Tables initialized inside a tf.function will be re-initialized on every invocation of the function. This re-initialization can have significant impact on performance. Consider lifting them out of the graph context using `tf.init_scope`.: compute_and_apply_vocabulary/apply_vocab/text_file_init/InitializeTableFromTextFileV2 WARNING:absl:Tables initialized inside a tf.function will be re-initialized on every invocation of the function. This re-initialization can have significant impact on performance. 
Consider lifting them out of the graph context using `tf.init_scope`.: compute_and_apply_vocabulary_1/apply_vocab/text_file_init/InitializeTableFromTextFileV2 INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. WARNING:absl:Tables initialized inside a tf.function will be re-initialized on every invocation of the function. This re-initialization can have significant impact on performance. Consider lifting them out of the graph context using `tf.init_scope`.: compute_and_apply_vocabulary/apply_vocab/text_file_init/InitializeTableFromTextFileV2 WARNING:absl:Tables initialized inside a tf.function will be re-initialized on every invocation of the function. This re-initialization can have significant impact on performance. Consider lifting them out of the graph context using `tf.init_scope`.: compute_and_apply_vocabulary_1/apply_vocab/text_file_init/InitializeTableFromTextFileV2 WARNING:root:This output type hint will be ignored and not used for type-checking purposes. Typically, output type hints for a PTransform are single (or nested) types wrapped by a PCollection, PDone, or None. Got: Tuple[Dict[str, Union[NoneType, _Dataset]], Union[Dict[str, Dict[str, PCollection]], NoneType], int] instead. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. INFO:absl:Feature company has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_census_tract has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_community_area has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_latitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_longitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature fare has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature payment_type has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_census_tract has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_community_area has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_latitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_longitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature tips has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_miles has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_seconds has no shape. 
Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_day has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_hour has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_month has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_timestamp has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature company has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_census_tract has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_community_area has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_latitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature dropoff_longitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature fare has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature payment_type has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_census_tract has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_community_area has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_latitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature pickup_longitude has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature tips has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_miles has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_seconds has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_day has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_hour has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_month has no shape. Setting to VarLenSparseTensor. INFO:absl:Feature trip_start_timestamp has no shape. Setting to VarLenSparseTensor. WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. 2021-12-21 10:10:18.679569: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them. INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/transform_graph/5/.temp_path/tftransform_tmp/80dbc09e6ded4a93b5c506e252c8f536/assets INFO:tensorflow:tensorflow_text is not available. INFO:tensorflow:tensorflow_decision_forests is not available. INFO:tensorflow:struct2tensor is not available. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. 
INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/transform_graph/5/.temp_path/tftransform_tmp/572eacb7c64f4f6e9262f7d496a95f86/assets INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. INFO:absl:If the number of unique tokens is smaller than the provided top_k or approximation error is acceptable, consider using tft.experimental.approximate_vocabulary for a potentially more efficient implementation. INFO:tensorflow:tensorflow_text is not available. INFO:tensorflow:tensorflow_decision_forests is not available. INFO:tensorflow:struct2tensor is not available. INFO:tensorflow:tensorflow_text is not available. INFO:tensorflow:tensorflow_decision_forests is not available. INFO:tensorflow:struct2tensor is not available. INFO:absl:Running publisher for Transform INFO:absl:MetadataStore with DB connection initialized
Let's examine the output artifacts of Transform. This component produces two types of outputs:
- transform_graph is the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).
- transformed_examples represents the preprocessed training and evaluation data.
transform.outputs
{'transform_graph': Channel( type_name: TransformGraph artifacts: [Artifact(artifact: id: 5 type_id: 22 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/transform_graph/5" custom_properties { key: "name" value { string_value: "transform_graph" } } custom_properties { key: "producer_component" value { string_value: "Transform" } } custom_properties { key: "state" value { string_value: "published" } } custom_properties { key: "tfx_version" value { string_value: "1.5.0" } } state: LIVE , artifact_type: id: 22 name: "TransformGraph" )] additional_properties: {} additional_custom_properties: {} ), 'transformed_examples': Channel( type_name: Examples artifacts: [Artifact(artifact: id: 6 type_id: 14 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/transformed_examples/5" properties { key: "split_names" value { string_value: "[\"train\", \"eval\"]" } } custom_properties { key: "name" value { string_value: "transformed_examples" } } custom_properties { key: "producer_component" value { string_value: "Transform" } } custom_properties { key: "state" value { string_value: "published" } } custom_properties { key: "tfx_version" value { string_value: "1.5.0" } } state: LIVE , artifact_type: id: 14 name: "Examples" properties { key: "span" value: INT } properties { key: "split_names" value: STRING } properties { key: "version" value: INT } base_type: DATASET )] additional_properties: {} additional_custom_properties: {} ), 'updated_analyzer_cache': Channel( type_name: TransformCache artifacts: [Artifact(artifact: id: 7 type_id: 23 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/updated_analyzer_cache/5" custom_properties { key: "name" value { string_value: "updated_analyzer_cache" } } custom_properties { key: "producer_component" value { string_value: "Transform" } } custom_properties { key: "state" value { string_value: "published" } } custom_properties { key: "tfx_version" value { string_value: "1.5.0" } } state: LIVE , artifact_type: id: 23 name: "TransformCache" )] additional_properties: {} additional_custom_properties: {} ), 'pre_transform_schema': Channel( type_name: Schema artifacts: [Artifact(artifact: id: 8 type_id: 18 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/pre_transform_schema/5" custom_properties { key: "name" value { string_value: "pre_transform_schema" } } custom_properties { key: "producer_component" value { string_value: "Transform" } } custom_properties { key: "state" value { string_value: "published" } } custom_properties { key: "tfx_version" value { string_value: "1.5.0" } } state: LIVE , artifact_type: id: 18 name: "Schema" )] additional_properties: {} additional_custom_properties: {} ), 'pre_transform_stats': Channel( type_name: ExampleStatistics artifacts: [Artifact(artifact: id: 9 type_id: 16 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/pre_transform_stats/5" custom_properties { key: "name" value { string_value: "pre_transform_stats" } } custom_properties { key: "producer_component" value { string_value: "Transform" } } custom_properties { key: "state" value { string_value: "published" } } custom_properties { key: "tfx_version" value { string_value: "1.5.0" } } state: LIVE , artifact_type: id: 16 name: "ExampleStatistics" properties { key: "span" value: INT } properties { key: "split_names" value: STRING } base_type: STATISTICS )] additional_properties: {} additional_custom_properties: {} ), 'post_transform_schema': Channel( type_name: Schema 
artifacts: [Artifact(artifact: id: 10 type_id: 18 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/post_transform_schema/5" custom_properties { key: "name" value { string_value: "post_transform_schema" } } custom_properties { key: "producer_component" value { string_value: "Transform" } } custom_properties { key: "state" value { string_value: "published" } } custom_properties { key: "tfx_version" value { string_value: "1.5.0" } } state: LIVE , artifact_type: id: 18 name: "Schema" )] additional_properties: {} additional_custom_properties: {} ), 'post_transform_stats': Channel( type_name: ExampleStatistics artifacts: [Artifact(artifact: id: 11 type_id: 16 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/post_transform_stats/5" custom_properties { key: "name" value { string_value: "post_transform_stats" } } custom_properties { key: "producer_component" value { string_value: "Transform" } } custom_properties { key: "state" value { string_value: "published" } } custom_properties { key: "tfx_version" value { string_value: "1.5.0" } } state: LIVE , artifact_type: id: 16 name: "ExampleStatistics" properties { key: "span" value: INT } properties { key: "split_names" value: STRING } base_type: STATISTICS )] additional_properties: {} additional_custom_properties: {} ), 'post_transform_anomalies': Channel( type_name: ExampleAnomalies artifacts: [Artifact(artifact: id: 12 type_id: 20 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Transform/post_transform_anomalies/5" custom_properties { key: "name" value { string_value: "post_transform_anomalies" } } custom_properties { key: "producer_component" value { string_value: "Transform" } } custom_properties { key: "state" value { string_value: "published" } } custom_properties { key: "tfx_version" value { string_value: "1.5.0" } } state: LIVE , artifact_type: id: 20 name: "ExampleAnomalies" properties { key: "span" value: INT } properties { key: "split_names" value: STRING } )] additional_properties: {} additional_custom_properties: {} )}
Take a peek at the transform_graph artifact. It points to a directory containing three subdirectories.
train_uri = transform.outputs['transform_graph'].get()[0].uri
os.listdir(train_uri)
['transform_fn', 'transformed_metadata', 'metadata']
The transformed_metadata subdirectory contains the schema of the preprocessed data. The transform_fn subdirectory contains the actual preprocessing graph. The metadata subdirectory contains the schema of the original data.
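This directory layout is what TensorFlow Transform's TFTransformOutput wrapper expects, so it can be loaded directly. A minimal sketch, reusing the train_uri obtained above:

# A sketch of loading the Transform output with the tft wrapper.
import tensorflow_transform as tft

tf_transform_output = tft.TFTransformOutput(train_uri)
print(tf_transform_output.transformed_feature_spec())  # post-transform feature spec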
We can also take a look at the first three transformed examples:
# Get the URI of the output artifact representing the transformed examples, which is a directory
train_uri = os.path.join(transform.outputs['transformed_examples'].get()[0].uri, 'Split-train')

# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
                      for name in os.listdir(train_uri)]

# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")

# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
  serialized_example = tfrecord.numpy()
  example = tf.train.Example()
  example.ParseFromString(serialized_example)
  pp.pprint(example)
features { feature { key: "company" value { int64_list { value: 8 } } } feature { key: "dropoff_census_tract" value { int64_list { value: 0 } } } feature { key: "dropoff_community_area" value { int64_list { value: 0 } } } feature { key: "dropoff_latitude" value { int64_list { value: 0 } } } feature { key: "dropoff_longitude" value { int64_list { value: 9 } } } feature { key: "fare" value { float_list { value: 0.061060599982738495 } } } feature { key: "payment_type" value { int64_list { value: 1 } } } feature { key: "pickup_census_tract" value { int64_list { value: 0 } } } feature { key: "pickup_community_area" value { int64_list { value: 0 } } } feature { key: "pickup_latitude" value { int64_list { value: 0 } } } feature { key: "pickup_longitude" value { int64_list { value: 9 } } } feature { key: "tips" value { int64_list { value: 0 } } } feature { key: "trip_miles" value { float_list { value: -0.15886741876602173 } } } feature { key: "trip_seconds" value { float_list { value: -0.7118487358093262 } } } feature { key: "trip_start_day" value { int64_list { value: 6 } } } feature { key: "trip_start_hour" value { int64_list { value: 19 } } } feature { key: "trip_start_month" value { int64_list { value: 5 } } } } features { feature { key: "company" value { int64_list { value: 0 } } } feature { key: "dropoff_census_tract" value { int64_list { value: 0 } } } feature { key: "dropoff_community_area" value { int64_list { value: 0 } } } feature { key: "dropoff_latitude" value { int64_list { value: 0 } } } feature { key: "dropoff_longitude" value { int64_list { value: 9 } } } feature { key: "fare" value { float_list { value: 1.2521240711212158 } } } feature { key: "payment_type" value { int64_list { value: 0 } } } feature { key: "pickup_census_tract" value { int64_list { value: 0 } } } feature { key: "pickup_community_area" value { int64_list { value: 60 } } } feature { key: "pickup_latitude" value { int64_list { value: 0 } } } feature { key: "pickup_longitude" value { int64_list { value: 3 } } } feature { key: "tips" value { int64_list { value: 0 } } } feature { key: "trip_miles" value { float_list { value: 0.532160758972168 } } } feature { key: "trip_seconds" value { float_list { value: 0.5509493350982666 } } } feature { key: "trip_start_day" value { int64_list { value: 3 } } } feature { key: "trip_start_hour" value { int64_list { value: 2 } } } feature { key: "trip_start_month" value { int64_list { value: 10 } } } } features { feature { key: "company" value { int64_list { value: 48 } } } feature { key: "dropoff_census_tract" value { int64_list { value: 0 } } } feature { key: "dropoff_community_area" value { int64_list { value: 0 } } } feature { key: "dropoff_latitude" value { int64_list { value: 0 } } } feature { key: "dropoff_longitude" value { int64_list { value: 9 } } } feature { key: "fare" value { float_list { value: 0.3873794376850128 } } } feature { key: "payment_type" value { int64_list { value: 0 } } } feature { key: "pickup_census_tract" value { int64_list { value: 0 } } } feature { key: "pickup_community_area" value { int64_list { value: 13 } } } feature { key: "pickup_latitude" value { int64_list { value: 9 } } } feature { key: "pickup_longitude" value { int64_list { value: 0 } } } feature { key: "tips" value { int64_list { value: 0 } } } feature { key: "trip_miles" value { float_list { value: 0.21955277025699615 } } } feature { key: "trip_seconds" value { float_list { value: 0.0019067146349698305 } } } feature { key: "trip_start_day" value { int64_list { value: 3 } } } feature { key: 
"trip_start_hour" value { int64_list { value: 12 } } } feature { key: "trip_start_month" value { int64_list { value: 11 } } } }
The Transform component has now transformed your data into features, and the next step is to train a model.
Trainer
The Trainer component will train a model that you define in TensorFlow. By default, Trainer supports the Estimator API; to use the Keras API, you need to specify the Generic Trainer by setting custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor) in Trainer's constructor.
Trainer takes as input the schema from SchemaGen, the transformed data and graph from Transform, training parameters, and a module that contains user-defined model code.
Let's see an example of user-defined model code below (for an introduction to the TensorFlow Keras APIs, see the tutorial):
_taxi_trainer_module_file = 'taxi_trainer.py'
%%writefile {_taxi_trainer_module_file}
from typing import List, Text

import os
from absl import logging
import datetime
import tensorflow as tf
import tensorflow_transform as tft

from tfx import v1 as tfx
from tfx_bsl.public import tfxio

import taxi_constants

_DENSE_FLOAT_FEATURE_KEYS = taxi_constants.DENSE_FLOAT_FEATURE_KEYS
_VOCAB_FEATURE_KEYS = taxi_constants.VOCAB_FEATURE_KEYS
_VOCAB_SIZE = taxi_constants.VOCAB_SIZE
_OOV_SIZE = taxi_constants.OOV_SIZE
_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT
_BUCKET_FEATURE_KEYS = taxi_constants.BUCKET_FEATURE_KEYS
_CATEGORICAL_FEATURE_KEYS = taxi_constants.CATEGORICAL_FEATURE_KEYS
_MAX_CATEGORICAL_FEATURE_VALUES = taxi_constants.MAX_CATEGORICAL_FEATURE_VALUES
_LABEL_KEY = taxi_constants.LABEL_KEY


def _get_tf_examples_serving_signature(model, tf_transform_output):
  """Returns a serving signature that accepts `tensorflow.Example`."""

  # We need to track the layers in the model in order to save it.
  # TODO(b/162357359): Revise once the bug is resolved.
  model.tft_layer_inference = tf_transform_output.transform_features_layer()

  @tf.function(input_signature=[
      tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')
  ])
  def serve_tf_examples_fn(serialized_tf_example):
    """Returns the output to be used in the serving signature."""
    raw_feature_spec = tf_transform_output.raw_feature_spec()
    # Remove label feature since these will not be present at serving time.
    raw_feature_spec.pop(_LABEL_KEY)
    raw_features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)
    transformed_features = model.tft_layer_inference(raw_features)
    logging.info('serve_transformed_features = %s', transformed_features)

    outputs = model(transformed_features)
    # TODO(b/154085620): Convert the predicted labels from the model using a
    # reverse-lookup (opposite of transform.py).
    return {'outputs': outputs}

  return serve_tf_examples_fn


def _get_transform_features_signature(model, tf_transform_output):
  """Returns a serving signature that applies tf.Transform to features."""

  # We need to track the layers in the model in order to save it.
  # TODO(b/162357359): Revise once the bug is resolved.
  model.tft_layer_eval = tf_transform_output.transform_features_layer()

  @tf.function(input_signature=[
      tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')
  ])
  def transform_features_fn(serialized_tf_example):
    """Returns the transformed_features to be fed as input to evaluator."""
    raw_feature_spec = tf_transform_output.raw_feature_spec()
    raw_features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)
    transformed_features = model.tft_layer_eval(raw_features)
    logging.info('eval_transformed_features = %s', transformed_features)
    return transformed_features

  return transform_features_fn


def _input_fn(file_pattern: List[Text],
              data_accessor: tfx.components.DataAccessor,
              tf_transform_output: tft.TFTransformOutput,
              batch_size: int = 200) -> tf.data.Dataset:
  """Generates features and label for tuning/training.

  Args:
    file_pattern: List of paths or patterns of input tfrecord files.
    data_accessor: DataAccessor for converting input to RecordBatch.
    tf_transform_output: A TFTransformOutput.
    batch_size: representing the number of consecutive elements of returned
      dataset to combine in a single batch

  Returns:
    A dataset that contains (features, indices) tuple where features is a
      dictionary of Tensors, and indices is a single Tensor of label indices.
  """
  return data_accessor.tf_dataset_factory(
      file_pattern,
      tfxio.TensorFlowDatasetOptions(
          batch_size=batch_size, label_key=_LABEL_KEY),
      tf_transform_output.transformed_metadata.schema)


def _build_keras_model(hidden_units: List[int] = None) -> tf.keras.Model:
  """Creates a DNN Keras model for classifying taxi data.

  Args:
    hidden_units: [int], the layer sizes of the DNN (input layer first).

  Returns:
    A keras Model.
  """
  real_valued_columns = [
      tf.feature_column.numeric_column(key, shape=())
      for key in _DENSE_FLOAT_FEATURE_KEYS
  ]
  categorical_columns = [
      tf.feature_column.categorical_column_with_identity(
          key, num_buckets=_VOCAB_SIZE + _OOV_SIZE, default_value=0)
      for key in _VOCAB_FEATURE_KEYS
  ]
  categorical_columns += [
      tf.feature_column.categorical_column_with_identity(
          key, num_buckets=_FEATURE_BUCKET_COUNT, default_value=0)
      for key in _BUCKET_FEATURE_KEYS
  ]
  categorical_columns += [
      tf.feature_column.categorical_column_with_identity(  # pylint: disable=g-complex-comprehension
          key,
          num_buckets=num_buckets,
          default_value=0) for key, num_buckets in zip(
              _CATEGORICAL_FEATURE_KEYS,
              _MAX_CATEGORICAL_FEATURE_VALUES)
  ]
  indicator_column = [
      tf.feature_column.indicator_column(categorical_column)
      for categorical_column in categorical_columns
  ]
  model = _wide_and_deep_classifier(
      # TODO(b/139668410) replace with premade wide_and_deep keras model
      wide_columns=indicator_column,
      deep_columns=real_valued_columns,
      dnn_hidden_units=hidden_units or [100, 70, 50, 25])
  return model


def _wide_and_deep_classifier(wide_columns, deep_columns, dnn_hidden_units):
  """Build a simple keras wide and deep model.

  Args:
    wide_columns: Feature columns wrapped in indicator_column for wide (linear)
      part of the model.
    deep_columns: Feature columns for deep part of the model.
    dnn_hidden_units: [int], the layer sizes of the hidden DNN.

  Returns:
    A Wide and Deep Keras model
  """
  # The following values are hard coded for simplicity in this example,
  # however preferably they should be passed in as hparams.

  # Keras needs the feature definitions at compile time.
  # TODO(b/139081439): Automate generation of input layers from FeatureColumn.
  input_layers = {
      colname: tf.keras.layers.Input(name=colname, shape=(), dtype=tf.float32)
      for colname in _DENSE_FLOAT_FEATURE_KEYS
  }
  input_layers.update({
      colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
      for colname in _VOCAB_FEATURE_KEYS
  })
  input_layers.update({
      colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
      for colname in _BUCKET_FEATURE_KEYS
  })
  input_layers.update({
      colname: tf.keras.layers.Input(name=colname, shape=(), dtype='int32')
      for colname in _CATEGORICAL_FEATURE_KEYS
  })

  # TODO(b/161952382): Replace with Keras preprocessing layers.
  deep = tf.keras.layers.DenseFeatures(deep_columns)(input_layers)
  for numnodes in dnn_hidden_units:
    deep = tf.keras.layers.Dense(numnodes)(deep)
  wide = tf.keras.layers.DenseFeatures(wide_columns)(input_layers)
  output = tf.keras.layers.Dense(1)(
      tf.keras.layers.concatenate([deep, wide]))

  model = tf.keras.Model(input_layers, output)
  model.compile(
      loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
      optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
      metrics=[tf.keras.metrics.BinaryAccuracy()])
  model.summary(print_fn=logging.info)
  return model


# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
  """Train the model based on given args.

  Args:
    fn_args: Holds args used to train the model as name/value pairs.
  """
  # Number of nodes in the first layer of the DNN.
  first_dnn_layer_size = 100
  num_dnn_layers = 4
  dnn_decay_factor = 0.7

  tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)

  train_dataset = _input_fn(fn_args.train_files, fn_args.data_accessor,
                            tf_transform_output, 40)
  eval_dataset = _input_fn(fn_args.eval_files, fn_args.data_accessor,
                           tf_transform_output, 40)

  model = _build_keras_model(
      # Construct layer sizes with exponential decay.
      hidden_units=[
          max(2, int(first_dnn_layer_size * dnn_decay_factor**i))
          for i in range(num_dnn_layers)
      ])

  tensorboard_callback = tf.keras.callbacks.TensorBoard(
      log_dir=fn_args.model_run_dir, update_freq='batch')
  model.fit(
      train_dataset,
      steps_per_epoch=fn_args.train_steps,
      validation_data=eval_dataset,
      validation_steps=fn_args.eval_steps,
      callbacks=[tensorboard_callback])

  signatures = {
      'serving_default':
          _get_tf_examples_serving_signature(model, tf_transform_output),
      'transform_features':
          _get_transform_features_signature(model, tf_transform_output),
  }
  model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
Writing taxi_trainer.py
Now, we pass in this model code to the Trainer component and run it to train the model.
trainer = tfx.components.Trainer(
    module_file=os.path.abspath(_taxi_trainer_module_file),
    examples=transform.outputs['transformed_examples'],
    transform_graph=transform.outputs['transform_graph'],
    schema=schema_gen.outputs['schema'],
    train_args=tfx.proto.TrainArgs(num_steps=10000),
    eval_args=tfx.proto.EvalArgs(num_steps=5000))
context.run(trainer)
INFO:absl:Generating ephemeral wheel package for '/tmpfs/src/temp/docs/tutorials/tfx/taxi_trainer.py' (including modules: ['taxi_transform', 'taxi_constants', 'taxi_trainer']). INFO:absl:User module package has hash fingerprint version ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af. INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '/tmp/tmpzxd5b1yc/_tfx_generated_setup.py', 'bdist_wheel', '--bdist-dir', '/tmp/tmpbg9ly6tr', '--dist-dir', '/tmp/tmpx43qh690'] /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. setuptools.SetuptoolsDeprecationWarning, INFO:absl:Successfully built user code wheel distribution at '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af-py3-none-any.whl'; target user module is 'taxi_trainer'. INFO:absl:Full user module path is 'taxi_trainer@/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af-py3-none-any.whl' INFO:absl:Running driver for Trainer INFO:absl:MetadataStore with DB connection initialized INFO:absl:Running executor for Trainer INFO:absl:Train on the 'train' split when train_args.splits is not set. INFO:absl:Evaluate on the 'eval' split when eval_args.splits is not set. WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE INFO:absl:udf_utils.get_fn {'train_args': '{\n "num_steps": 10000\n}', 'eval_args': '{\n "num_steps": 5000\n}', 'module_file': None, 'run_fn': None, 'trainer_fn': None, 'custom_config': 'null', 'module_path': 'taxi_trainer@/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af-py3-none-any.whl'} 'run_fn' INFO:absl:Installing '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af-py3-none-any.whl' to a temporary directory. 
INFO:absl:Executing: ['/tmpfs/src/tf_docs_env/bin/python', '-m', 'pip', 'install', '--target', '/tmp/tmp1osq6e1x', '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af-py3-none-any.whl'] running bdist_wheel running build running build_py creating build creating build/lib copying taxi_transform.py -> build/lib copying taxi_constants.py -> build/lib copying taxi_trainer.py -> build/lib running install running install_lib running install_egg_info running egg_info creating tfx_user_code_Trainer.egg-info writing manifest file 'tfx_user_code_Trainer.egg-info/SOURCES.txt' writing manifest file 'tfx_user_code_Trainer.egg-info/SOURCES.txt' Copying tfx_user_code_Trainer.egg-info to /tmp/tmpbg9ly6tr/tfx_user_code_Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af-py3.7.egg-info running install_scripts Processing /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af-py3-none-any.whl INFO:absl:Successfully installed '/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/_wheels/tfx_user_code_Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af-py3-none-any.whl'. INFO:absl:Training model. INFO:absl:Feature company has a shape . Setting to DenseTensor. INFO:absl:Feature dropoff_census_tract has a shape . Setting to DenseTensor. INFO:absl:Feature dropoff_community_area has a shape . Setting to DenseTensor. INFO:absl:Feature dropoff_latitude has a shape . Setting to DenseTensor. INFO:absl:Feature dropoff_longitude has a shape . Setting to DenseTensor. INFO:absl:Feature fare has a shape . Setting to DenseTensor. INFO:absl:Feature payment_type has a shape . Setting to DenseTensor. INFO:absl:Feature pickup_census_tract has a shape . Setting to DenseTensor. INFO:absl:Feature pickup_community_area has a shape . Setting to DenseTensor. INFO:absl:Feature pickup_latitude has a shape . Setting to DenseTensor. INFO:absl:Feature pickup_longitude has a shape . Setting to DenseTensor. INFO:absl:Feature tips has a shape . Setting to DenseTensor. INFO:absl:Feature trip_miles has a shape . Setting to DenseTensor. INFO:absl:Feature trip_seconds has a shape . Setting to DenseTensor. INFO:absl:Feature trip_start_day has a shape . Setting to DenseTensor. INFO:absl:Feature trip_start_hour has a shape . Setting to DenseTensor. INFO:absl:Feature trip_start_month has a shape . Setting to DenseTensor. Installing collected packages: tfx-user-code-Trainer Successfully installed tfx-user-code-Trainer-0.0+ace8eb563ff2ae66112acc05232b33344bcb925cdc0a0847df64c544323b99af INFO:absl:Feature company has a shape . Setting to DenseTensor. INFO:absl:Feature dropoff_census_tract has a shape . Setting to DenseTensor. INFO:absl:Feature dropoff_community_area has a shape . Setting to DenseTensor. INFO:absl:Feature dropoff_latitude has a shape . Setting to DenseTensor. INFO:absl:Feature dropoff_longitude has a shape . Setting to DenseTensor. INFO:absl:Feature fare has a shape . Setting to DenseTensor. INFO:absl:Feature payment_type has a shape . Setting to DenseTensor. INFO:absl:Feature pickup_census_tract has a shape . Setting to DenseTensor. INFO:absl:Feature pickup_community_area has a shape . Setting to DenseTensor. INFO:absl:Feature pickup_latitude has a shape . Setting to DenseTensor. INFO:absl:Feature pickup_longitude has a shape . Setting to DenseTensor. 
INFO:absl:Feature tips has a shape . Setting to DenseTensor. INFO:absl:Feature trip_miles has a shape . Setting to DenseTensor. INFO:absl:Feature trip_seconds has a shape . Setting to DenseTensor. INFO:absl:Feature trip_start_day has a shape . Setting to DenseTensor. INFO:absl:Feature trip_start_hour has a shape . Setting to DenseTensor. INFO:absl:Feature trip_start_month has a shape . Setting to DenseTensor. INFO:absl:Feature company has a shape . Setting to DenseTensor. INFO:absl:Feature dropoff_census_tract has a shape . Setting to DenseTensor. INFO:absl:Feature dropoff_community_area has a shape . Setting to DenseTensor. INFO:absl:Feature dropoff_latitude has a shape . Setting to DenseTensor. INFO:absl:Feature dropoff_longitude has a shape . Setting to DenseTensor. INFO:absl:Feature fare has a shape . Setting to DenseTensor. INFO:absl:Feature payment_type has a shape . Setting to DenseTensor. INFO:absl:Feature pickup_census_tract has a shape . Setting to DenseTensor. INFO:absl:Feature pickup_community_area has a shape . Setting to DenseTensor. INFO:absl:Feature pickup_latitude has a shape . Setting to DenseTensor. INFO:absl:Feature pickup_longitude has a shape . Setting to DenseTensor. INFO:absl:Feature tips has a shape . Setting to DenseTensor. INFO:absl:Feature trip_miles has a shape . Setting to DenseTensor. INFO:absl:Feature trip_seconds has a shape . Setting to DenseTensor. INFO:absl:Feature trip_start_day has a shape . Setting to DenseTensor. INFO:absl:Feature trip_start_hour has a shape . Setting to DenseTensor. INFO:absl:Feature trip_start_month has a shape . Setting to DenseTensor. INFO:absl:Feature company has a shape . Setting to DenseTensor. INFO:absl:Feature dropoff_census_tract has a shape . Setting to DenseTensor. INFO:absl:Feature dropoff_community_area has a shape . Setting to DenseTensor. INFO:absl:Feature dropoff_latitude has a shape . Setting to DenseTensor. INFO:absl:Feature dropoff_longitude has a shape . Setting to DenseTensor. INFO:absl:Feature fare has a shape . Setting to DenseTensor. INFO:absl:Feature payment_type has a shape . Setting to DenseTensor. INFO:absl:Feature pickup_census_tract has a shape . Setting to DenseTensor. INFO:absl:Feature pickup_community_area has a shape . Setting to DenseTensor. INFO:absl:Feature pickup_latitude has a shape . Setting to DenseTensor. INFO:absl:Feature pickup_longitude has a shape . Setting to DenseTensor. INFO:absl:Feature tips has a shape . Setting to DenseTensor. INFO:absl:Feature trip_miles has a shape . Setting to DenseTensor. INFO:absl:Feature trip_seconds has a shape . Setting to DenseTensor. INFO:absl:Feature trip_start_day has a shape . Setting to DenseTensor. INFO:absl:Feature trip_start_hour has a shape . Setting to DenseTensor. INFO:absl:Feature trip_start_month has a shape . Setting to DenseTensor. /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. 
super(Adam, self).__init__(name, **kwargs) INFO:absl:Model: "model" INFO:absl:__________________________________________________________________________________________________ INFO:absl: Layer (type) Output Shape Param # Connected to INFO:absl:================================================================================================== INFO:absl: company (InputLayer) [(None,)] 0 [] INFO:absl: INFO:absl: dropoff_census_tract (InputLay [(None,)] 0 [] INFO:absl: er) INFO:absl: INFO:absl: dropoff_community_area (InputL [(None,)] 0 [] INFO:absl: ayer) INFO:absl: INFO:absl: dropoff_latitude (InputLayer) [(None,)] 0 [] INFO:absl: INFO:absl: dropoff_longitude (InputLayer) [(None,)] 0 [] INFO:absl: INFO:absl: fare (InputLayer) [(None,)] 0 [] INFO:absl: INFO:absl: payment_type (InputLayer) [(None,)] 0 [] INFO:absl: INFO:absl: pickup_census_tract (InputLaye [(None,)] 0 [] INFO:absl: r) INFO:absl: INFO:absl: pickup_community_area (InputLa [(None,)] 0 [] INFO:absl: yer) INFO:absl: INFO:absl: pickup_latitude (InputLayer) [(None,)] 0 [] INFO:absl: INFO:absl: pickup_longitude (InputLayer) [(None,)] 0 [] INFO:absl: INFO:absl: trip_miles (InputLayer) [(None,)] 0 [] INFO:absl: INFO:absl: trip_seconds (InputLayer) [(None,)] 0 [] INFO:absl: INFO:absl: trip_start_day (InputLayer) [(None,)] 0 [] INFO:absl: INFO:absl: trip_start_hour (InputLayer) [(None,)] 0 [] INFO:absl: INFO:absl: trip_start_month (InputLayer) [(None,)] 0 [] INFO:absl: INFO:absl: dense_features (DenseFeatures) (None, 3) 0 ['company[0][0]', INFO:absl: 'dropoff_census_tract[0][0]', INFO:absl: 'dropoff_community_area[0][0]', INFO:absl: 'dropoff_latitude[0][0]', INFO:absl: 'dropoff_longitude[0][0]', INFO:absl: 'fare[0][0]', INFO:absl: 'payment_type[0][0]', INFO:absl: 'pickup_census_tract[0][0]', INFO:absl: 'pickup_community_area[0][0]', INFO:absl: 'pickup_latitude[0][0]', INFO:absl: 'pickup_longitude[0][0]', INFO:absl: 'trip_miles[0][0]', INFO:absl: 'trip_seconds[0][0]', INFO:absl: 'trip_start_day[0][0]', INFO:absl: 'trip_start_hour[0][0]', INFO:absl: 'trip_start_month[0][0]'] INFO:absl: INFO:absl: dense (Dense) (None, 100) 400 ['dense_features[0][0]'] INFO:absl: INFO:absl: dense_1 (Dense) (None, 70) 7070 ['dense[0][0]'] INFO:absl: INFO:absl: dense_2 (Dense) (None, 48) 3408 ['dense_1[0][0]'] INFO:absl: INFO:absl: dense_3 (Dense) (None, 34) 1666 ['dense_2[0][0]'] INFO:absl: INFO:absl: dense_features_1 (DenseFeature (None, 2127) 0 ['company[0][0]', INFO:absl: s) 'dropoff_census_tract[0][0]', INFO:absl: 'dropoff_community_area[0][0]', INFO:absl: 'dropoff_latitude[0][0]', INFO:absl: 'dropoff_longitude[0][0]', INFO:absl: 'fare[0][0]', INFO:absl: 'payment_type[0][0]', INFO:absl: 'pickup_census_tract[0][0]', INFO:absl: 'pickup_community_area[0][0]', INFO:absl: 'pickup_latitude[0][0]', INFO:absl: 'pickup_longitude[0][0]', INFO:absl: 'trip_miles[0][0]', INFO:absl: 'trip_seconds[0][0]', INFO:absl: 'trip_start_day[0][0]', INFO:absl: 'trip_start_hour[0][0]', INFO:absl: 'trip_start_month[0][0]'] INFO:absl: INFO:absl: concatenate (Concatenate) (None, 2161) 0 ['dense_3[0][0]', INFO:absl: 'dense_features_1[0][0]'] INFO:absl: INFO:absl: dense_4 (Dense) (None, 1) 2162 ['concatenate[0][0]'] INFO:absl: INFO:absl:================================================================================================== INFO:absl:Total params: 14,706 INFO:absl:Trainable params: 14,706 INFO:absl:Non-trainable params: 0 INFO:absl:__________________________________________________________________________________________________ 10000/10000 [==============================] - 
100s 10ms/step - loss: 0.2372 - binary_accuracy: 0.8605 - val_loss: 0.2222 - val_binary_accuracy: 0.8709 INFO:tensorflow:tensorflow_text is not available. INFO:tensorflow:tensorflow_decision_forests is not available. INFO:tensorflow:struct2tensor is not available. WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.Socket(zmq.PUSH) at 0x7f88b5e27910>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert WARNING: AutoGraph could not transform <bound method Socket.send of <zmq.Socket(zmq.PUSH) at 0x7f88b5e27910>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert INFO:absl:serve_transformed_features = {'pickup_latitude': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:9' shape=(None,) dtype=int64>, 'trip_start_hour': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:15' shape=(None,) dtype=int64>, 'fare': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:5' shape=(None,) dtype=float32>, 'trip_miles': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:12' shape=(None,) dtype=float32>, 'trip_start_day': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:14' shape=(None,) dtype=int64>, 'dropoff_latitude': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:3' shape=(None,) dtype=int64>, 'trip_start_month': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:16' shape=(None,) dtype=int64>, 'dropoff_community_area': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:2' shape=(None,) dtype=int64>, 'dropoff_longitude': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:4' shape=(None,) dtype=int64>, 'payment_type': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:6' shape=(None,) dtype=int64>, 'pickup_longitude': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:10' shape=(None,) dtype=int64>, 'pickup_community_area': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:8' shape=(None,) dtype=int64>, 'company': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:0' shape=(None,) dtype=int64>, 'pickup_census_tract': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:7' shape=(None,) dtype=int64>, 'dropoff_census_tract': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:1' shape=(None,) dtype=int64>, 'trip_seconds': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:13' shape=(None,) dtype=float32>} INFO:absl:eval_transformed_features = {'pickup_latitude': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:9' shape=(None,) dtype=int64>, 'trip_start_hour': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:15' shape=(None,) dtype=int64>, 'fare': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:5' shape=(None,) dtype=float32>, 'trip_miles': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:12' 
shape=(None,) dtype=float32>, 'trip_start_day': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:14' shape=(None,) dtype=int64>, 'dropoff_latitude': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:3' shape=(None,) dtype=int64>, 'trip_start_month': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:16' shape=(None,) dtype=int64>, 'dropoff_community_area': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:2' shape=(None,) dtype=int64>, 'dropoff_longitude': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:4' shape=(None,) dtype=int64>, 'payment_type': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:6' shape=(None,) dtype=int64>, 'pickup_longitude': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:10' shape=(None,) dtype=int64>, 'pickup_community_area': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:8' shape=(None,) dtype=int64>, 'company': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:0' shape=(None,) dtype=int64>, 'pickup_census_tract': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:7' shape=(None,) dtype=int64>, 'tips': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:11' shape=(None,) dtype=int64>, 'dropoff_census_tract': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:1' shape=(None,) dtype=int64>, 'trip_seconds': <tf.Tensor 'transform_features_layer/StatefulPartitionedCall:13' shape=(None,) dtype=float32>} INFO:tensorflow:Assets written to: /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Trainer/model/6/Format-Serving/assets INFO:absl:Training complete. Model written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Trainer/model/6/Format-Serving. ModelRun written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Trainer/model_run/6 INFO:absl:Running publisher for Trainer INFO:absl:MetadataStore with DB connection initialized
Analyze Training with TensorBoard
Take a peek at the trainer artifact. It points to a directory containing the model subdirectories.
model_artifact_dir = trainer.outputs['model'].get()[0].uri
pp.pprint(os.listdir(model_artifact_dir))
model_dir = os.path.join(model_artifact_dir, 'Format-Serving')
pp.pprint(os.listdir(model_dir))
['Format-Serving'] ['variables', 'assets', 'keras_metadata.pb', 'saved_model.pb']
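If you want to inspect the exported model's signatures at this point, TensorFlow's saved_model_cli tool can display them. This is an optional check, assuming saved_model_cli is on the PATH, which it is in most TensorFlow installations:
# Inspect the inputs and outputs of the serving signature on disk.
!saved_model_cli show --dir {model_dir} --tag_set serve --signature_def serving_default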
Optionally, we can connect TensorBoard to the Trainer to analyze our model's training curves.
model_run_artifact_dir = trainer.outputs['model_run'].get()[0].uri
%load_ext tensorboard
%tensorboard --logdir {model_run_artifact_dir}
Evaluator
The Evaluator component computes model performance metrics over the evaluation set. It uses the TensorFlow Model Analysis library. Evaluator can also optionally validate that a newly trained model is better than the previous model. This is useful in a production pipeline setting where you may automatically train and validate a model every day. In this notebook we only train one model, so the Evaluator will automatically label the model as "good".
Evaluator takes as input the data from ExampleGen, the trained model from Trainer, and slicing configuration. The slicing configuration allows you to slice your metrics on feature values (for example, how does your model perform on taxi trips that start at 8am versus 8pm?). See an example of this configuration below:
eval_config = tfma.EvalConfig(
    model_specs=[
        # This assumes a serving model with signature 'serving_default'. If
        # using estimator based EvalSavedModel, add signature_name: 'eval' and
        # remove the label_key.
        tfma.ModelSpec(
            signature_name='serving_default',
            label_key='tips',
            preprocessing_function_names=['transform_features'],
        )
    ],
    metrics_specs=[
        tfma.MetricsSpec(
            # The metrics added here are in addition to those saved with the
            # model (assuming either a keras model or EvalSavedModel is used).
            # Any metrics added into the saved model (for example using
            # model.compile(..., metrics=[...]), etc) will be computed
            # automatically.
            # To add validation thresholds for metrics saved with the model,
            # add them keyed by metric name to the thresholds map.
            metrics=[
                tfma.MetricConfig(class_name='ExampleCount'),
                tfma.MetricConfig(
                    class_name='BinaryAccuracy',
                    threshold=tfma.MetricThreshold(
                        value_threshold=tfma.GenericValueThreshold(
                            lower_bound={'value': 0.5}),
                        # Change threshold will be ignored if there is no
                        # baseline model resolved from MLMD (first run).
                        change_threshold=tfma.GenericChangeThreshold(
                            direction=tfma.MetricDirection.HIGHER_IS_BETTER,
                            absolute={'value': -1e-10})))
            ]
        )
    ],
    slicing_specs=[
        # An empty slice spec means the overall slice, i.e. the whole dataset.
        tfma.SlicingSpec(),
        # Data can be sliced along a feature column. In this case, data is
        # sliced along feature column trip_start_hour.
        tfma.SlicingSpec(feature_keys=['trip_start_hour'])
    ])
Next, we give this configuration to Evaluator and run it.
# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.

# The model resolver is only required if performing model validation in addition
# to evaluation. In this case we validate against the latest blessed model. If
# no model has been blessed before (as in this case) the evaluator will make our
# candidate the first blessed model.
model_resolver = tfx.dsl.Resolver(
    strategy_class=tfx.dsl.experimental.LatestBlessedModelStrategy,
    model=tfx.dsl.Channel(type=tfx.types.standard_artifacts.Model),
    model_blessing=tfx.dsl.Channel(
        type=tfx.types.standard_artifacts.ModelBlessing)).with_id(
            'latest_blessed_model_resolver')
context.run(model_resolver)

evaluator = tfx.components.Evaluator(
    examples=example_gen.outputs['examples'],
    model=trainer.outputs['model'],
    baseline_model=model_resolver.outputs['model'],
    eval_config=eval_config)
context.run(evaluator)
INFO:absl:Running driver for latest_blessed_model_resolver INFO:absl:MetadataStore with DB connection initialized INFO:absl:Running publisher for latest_blessed_model_resolver INFO:absl:MetadataStore with DB connection initialized INFO:absl:Running driver for Evaluator INFO:absl:MetadataStore with DB connection initialized INFO:absl:Running executor for Evaluator INFO:absl:Nonempty beam arg extra_packages already includes dependency INFO:absl:udf_utils.get_fn {'eval_config': '{\n "metrics_specs": [\n {\n "metrics": [\n {\n "class_name": "ExampleCount"\n },\n {\n "class_name": "BinaryAccuracy",\n "threshold": {\n "change_threshold": {\n "absolute": -1e-10,\n "direction": "HIGHER_IS_BETTER"\n },\n "value_threshold": {\n "lower_bound": 0.5\n }\n }\n }\n ]\n }\n ],\n "model_specs": [\n {\n "label_key": "tips",\n "preprocessing_function_names": [\n "transform_features"\n ],\n "signature_name": "serving_default"\n }\n ],\n "slicing_specs": [\n {},\n {\n "feature_keys": [\n "trip_start_hour"\n ]\n }\n ]\n}', 'feature_slicing_spec': None, 'fairness_indicator_thresholds': 'null', 'example_splits': 'null', 'module_file': None, 'module_path': None} 'custom_eval_shared_model' INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. This is likely because a baseline model was not provided: updated_config= model_specs { signature_name: "serving_default" label_key: "tips" preprocessing_function_names: "transform_features" } slicing_specs { } slicing_specs { feature_keys: "trip_start_hour" } metrics_specs { metrics { class_name: "ExampleCount" } metrics { class_name: "BinaryAccuracy" threshold { value_threshold { lower_bound { value: 0.5 } } } } } INFO:absl:Using /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Trainer/model/6/Format-Serving as model. WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<keras.saving.saved_model.load.TensorFlowTransform>TransformFeaturesLayer object at 0x7f87bc0f5e50> and <keras.engine.input_layer.InputLayer object at 0x7f87bc0f5b50>). INFO:absl:The 'example_splits' parameter is not set, using 'eval' split. INFO:absl:Evaluating model. INFO:absl:udf_utils.get_fn {'eval_config': '{\n "metrics_specs": [\n {\n "metrics": [\n {\n "class_name": "ExampleCount"\n },\n {\n "class_name": "BinaryAccuracy",\n "threshold": {\n "change_threshold": {\n "absolute": -1e-10,\n "direction": "HIGHER_IS_BETTER"\n },\n "value_threshold": {\n "lower_bound": 0.5\n }\n }\n }\n ]\n }\n ],\n "model_specs": [\n {\n "label_key": "tips",\n "preprocessing_function_names": [\n "transform_features"\n ],\n "signature_name": "serving_default"\n }\n ],\n "slicing_specs": [\n {},\n {\n "feature_keys": [\n "trip_start_hour"\n ]\n }\n ]\n}', 'feature_slicing_spec': None, 'fairness_indicator_thresholds': 'null', 'example_splits': 'null', 'module_file': None, 'module_path': None} 'custom_extractors' INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. 
This is likely because a baseline model was not provided: updated_config= model_specs { signature_name: "serving_default" label_key: "tips" preprocessing_function_names: "transform_features" } slicing_specs { } slicing_specs { feature_keys: "trip_start_hour" } metrics_specs { metrics { class_name: "ExampleCount" } metrics { class_name: "BinaryAccuracy" threshold { value_threshold { lower_bound { value: 0.5 } } } } model_names: "" } INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. This is likely because a baseline model was not provided: updated_config= model_specs { signature_name: "serving_default" label_key: "tips" preprocessing_function_names: "transform_features" } slicing_specs { } slicing_specs { feature_keys: "trip_start_hour" } metrics_specs { metrics { class_name: "ExampleCount" } metrics { class_name: "BinaryAccuracy" threshold { value_threshold { lower_bound { value: 0.5 } } } } model_names: "" } INFO:absl:Request was made to ignore the baseline ModelSpec and any change thresholds. This is likely because a baseline model was not provided: updated_config= model_specs { signature_name: "serving_default" label_key: "tips" preprocessing_function_names: "transform_features" } slicing_specs { } slicing_specs { feature_keys: "trip_start_hour" } metrics_specs { metrics { class_name: "ExampleCount" } metrics { class_name: "BinaryAccuracy" threshold { value_threshold { lower_bound { value: 0.5 } } } } model_names: "" } WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<keras.saving.saved_model.load.TensorFlowTransform>TransformFeaturesLayer object at 0x7f87b0102150> and <keras.engine.input_layer.InputLayer object at 0x7f875454e810>). WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter. WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<keras.saving.saved_model.load.TensorFlowTransform>TransformFeaturesLayer object at 0x7f87b06c9d50> and <keras.engine.input_layer.InputLayer object at 0x7f87d4041290>). WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<keras.saving.saved_model.load.TensorFlowTransform>TransformFeaturesLayer object at 0x7f874c8d6a10> and <keras.engine.input_layer.InputLayer object at 0x7f874c8ac0d0>). WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<keras.saving.saved_model.load.TensorFlowTransform>TransformFeaturesLayer object at 0x7f830dcf9fd0> and <keras.engine.input_layer.InputLayer object at 0x7f830dd87110>). 
WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<keras.saving.saved_model.load.TensorFlowTransform>TransformFeaturesLayer object at 0x7f830dc8cad0> and <keras.engine.input_layer.InputLayer object at 0x7f830cf892d0>). WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<keras.saving.saved_model.load.TensorFlowTransform>TransformFeaturesLayer object at 0x7f87b041add0> and <keras.engine.input_layer.InputLayer object at 0x7f874d6b6d50>). WARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program. Two checkpoint references resolved to different objects (<keras.saving.saved_model.load.TensorFlowTransform>TransformFeaturesLayer object at 0x7f830c42a5d0> and <keras.engine.input_layer.InputLayer object at 0x7f830c3037d0>). INFO:absl:Evaluation complete. Results written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Evaluator/evaluation/8. INFO:absl:Checking validation results. WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.7/site-packages/tensorflow_model_analysis/writers/metrics_plots_and_validations_writer.py:107: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version. Instructions for updating: Use eager execution and: `tf.data.TFRecordDataset(path)` INFO:absl:Blessing result True written to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Evaluator/blessing/8. INFO:absl:Running publisher for Evaluator INFO:absl:MetadataStore with DB connection initialized
Now let's examine the output artifacts of Evaluator.
evaluator.outputs
{'evaluation': Channel( type_name: ModelEvaluation artifacts: [Artifact(artifact: id: 15 type_id: 29 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Evaluator/evaluation/8" custom_properties { key: "name" value { string_value: "evaluation" } } custom_properties { key: "producer_component" value { string_value: "Evaluator" } } custom_properties { key: "state" value { string_value: "published" } } custom_properties { key: "tfx_version" value { string_value: "1.5.0" } } state: LIVE , artifact_type: id: 29 name: "ModelEvaluation" )] additional_properties: {} additional_custom_properties: {} ), 'blessing': Channel( type_name: ModelBlessing artifacts: [Artifact(artifact: id: 16 type_id: 30 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Evaluator/blessing/8" custom_properties { key: "blessed" value { int_value: 1 } } custom_properties { key: "current_model" value { string_value: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Trainer/model/6" } } custom_properties { key: "current_model_id" value { int_value: 13 } } custom_properties { key: "name" value { string_value: "blessing" } } custom_properties { key: "producer_component" value { string_value: "Evaluator" } } custom_properties { key: "state" value { string_value: "published" } } custom_properties { key: "tfx_version" value { string_value: "1.5.0" } } state: LIVE , artifact_type: id: 30 name: "ModelBlessing" )] additional_properties: {} additional_custom_properties: {} )}
Using the evaluation output, we can show the default visualization of global metrics on the entire evaluation set.
context.show(evaluator.outputs['evaluation'])
To see the visualization for sliced evaluation metrics, we can directly call the TensorFlow Model Analysis library.
import tensorflow_model_analysis as tfma
# Get the TFMA output result path and load the result.
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
tfma_result = tfma.load_eval_result(PATH_TO_RESULT)
# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
    tfma_result, slicing_column='trip_start_hour')
SlicingMetricsViewer(config={'weightedExamplesColumn': 'example_count'}, data=[{'slice': 'trip_start_hour:19',…
This visualization shows the same metrics, but computed at every feature value of trip_start_hour instead of on the whole evaluation set.
TensorFlow Model Analysis supports many other visualizations, such as Fairness Indicators and plotting a time series of model performance. To learn more, see the tutorial.
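For instance, the time-series view takes results from several Evaluator runs. The following is a minimal sketch of how that call looks; since this notebook has only run the pipeline once, we reuse the single PATH_TO_RESULT loaded above, so the resulting series contains just one point:
# Sketch: load evaluation results from one or more runs and render them as a
# time series. With multiple pipeline runs, pass each run's evaluation URI.
eval_results = tfma.load_eval_results(
    [PATH_TO_RESULT], mode=tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results)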
Since we added thresholds to our configuration, validation output is also available. The presence of a blessing artifact indicates that our model passed validation. Since this is the first validation being performed, the candidate is automatically blessed.
blessing_uri = evaluator.outputs['blessing'].get()[0].uri
!ls -l {blessing_uri}
total 0 -rw-rw-r-- 1 kbuilder kbuilder 0 Dec 21 10:13 BLESSED
We can also verify the success by loading the validation result record:
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
print(tfma.load_validation_result(PATH_TO_RESULT))
validation_ok: true validation_details { slicing_details { slicing_spec { } num_matching_slices: 25 } }
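Since this record is a plain proto, you can also gate later notebook steps on it programmatically. A small sketch:
# Stop here if the model unexpectedly failed validation.
validation_result = tfma.load_validation_result(PATH_TO_RESULT)
assert validation_result.validation_ok, 'Model did not pass validation!'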
Pusher
The Pusher component is usually at the end of a TFX pipeline. It checks whether a model has passed validation, and if so, exports the model to _serving_model_dir.
pusher = tfx.components.Pusher(
    model=trainer.outputs['model'],
    model_blessing=evaluator.outputs['blessing'],
    push_destination=tfx.proto.PushDestination(
        filesystem=tfx.proto.PushDestination.Filesystem(
            base_directory=_serving_model_dir)))
context.run(pusher)
INFO:absl:Running driver for Pusher INFO:absl:MetadataStore with DB connection initialized INFO:absl:Running executor for Pusher INFO:absl:Model version: 1640081600 INFO:absl:Model written to serving path /tmp/tmpkvhhk5j5/serving_model/taxi_simple/1640081600. INFO:absl:Model pushed to /tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Pusher/pushed_model/9. INFO:absl:Running publisher for Pusher INFO:absl:MetadataStore with DB connection initialized
Let's examine the output artifacts of Pusher.
pusher.outputs
{'pushed_model': Channel( type_name: PushedModel artifacts: [Artifact(artifact: id: 17 type_id: 32 uri: "/tmp/tfx-interactive-2021-12-21T10_09_51.902969-bvucg0eq/Pusher/pushed_model/9" custom_properties { key: "name" value { string_value: "pushed_model" } } custom_properties { key: "producer_component" value { string_value: "Pusher" } } custom_properties { key: "pushed" value { int_value: 1 } } custom_properties { key: "pushed_destination" value { string_value: "/tmp/tmpkvhhk5j5/serving_model/taxi_simple/1640081600" } } custom_properties { key: "pushed_version" value { string_value: "1640081600" } } custom_properties { key: "state" value { string_value: "published" } } custom_properties { key: "tfx_version" value { string_value: "1.5.0" } } state: LIVE , artifact_type: id: 32 name: "PushedModel" )] additional_properties: {} additional_custom_properties: {} )}
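Before inspecting the exported model itself, a quick sanity check (a minimal sketch using the _serving_model_dir set up earlier) is to list the push destination; Pusher writes each pushed model under a version directory:
# Each push creates a new directory named after the model version
# (a Unix timestamp), mirroring TensorFlow Serving's expected layout.
pp.pprint(os.listdir(_serving_model_dir))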
In particular, Pusher exports your model in the SavedModel format, which looks like this:
push_uri = pusher.outputs['pushed_model'].get()[0].uri
model = tf.saved_model.load(push_uri)
for item in model.signatures.items():
  pp.pprint(item)
('serving_default', <ConcreteFunction signature_wrapper(*, examples) at 0x7F82F31FDE50>) ('transform_features', <ConcreteFunction signature_wrapper(*, examples) at 0x7F82F31AC410>)
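As a final check, we can feed a raw example through the pushed model's serving signature. This is a sketch rather than part of the pipeline: it assumes ExampleGen's default output layout (a gzip-compressed 'Split-eval' directory), which is worth verifying on disk before relying on it.
import glob

# Read one serialized tf.Example from the eval split produced by ExampleGen.
examples_uri = example_gen.outputs['examples'].get()[0].uri
tfrecord_files = glob.glob(os.path.join(examples_uri, 'Split-eval', '*'))
dataset = tf.data.TFRecordDataset(tfrecord_files, compression_type='GZIP')

for serialized in dataset.take(1):
  # The serving signature expects a batch of serialized examples.
  result = model.signatures['serving_default'](
      examples=tf.reshape(serialized, [1]))
  pp.pprint(result)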
We've finished our tour of the built-in TFX components!