Graph-based Neural Structured Learning in TFX

This tutorial describes graph regularization from the Neural Structured Learning framework and demonstrates an end-to-end workflow for sentiment classification in a TFX pipeline.


Overview

This notebook classifies movie reviews as positive or negative using the text of the review. This is an example of binary classification, an important and widely applicable kind of machine learning problem.

We will demonstrate the use of graph regularization in this notebook by building a graph from the given input. The general recipe for building a graph-regularized model using the Neural Structured Learning (NSL) framework when the input does not contain an explicit graph is as follows:

  1. Create embeddings for each text sample in the input. This can be done using pre-trained models such as word2vec, Swivel, BERT etc.
  2. Build a graph based on these embeddings by using a similarity metric such as the 'L2' distance, 'cosine' distance, etc. Nodes in the graph correspond to samples and edges in the graph correspond to similarity between pairs of samples.
  3. Generate training data from the above synthesized graph and sample features. The resulting training data will contain neighbor features in addition to the original node features.
  4. Create a neural network as a base model using Estimators.
  5. Wrap the base model with the add_graph_regularization wrapper function, which is provided by the NSL framework, to create a new graph Estimator model. This new model will include a graph regularization loss as the regularization term in its training objective.
  6. Train and evaluate the graph Estimator model.
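Step 2 above can be sketched with plain NumPy: normalize the embeddings, compute pairwise cosine similarities, and keep an edge for each pair above a threshold. This is an illustrative toy version, not the NSL graph builder used later in this tutorial; the embeddings and threshold here are made-up values.

```python
import numpy as np

def build_similarity_graph(embeddings, threshold):
  """Returns (i, j, similarity) edges for sample pairs whose cosine
  similarity is at least `threshold`. Toy sketch only; the tutorial
  itself uses nsl.tools.build_graph_from_config."""
  # Normalize rows so that a dot product equals cosine similarity.
  norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
  normalized = embeddings / norms
  similarities = normalized @ normalized.T
  edges = []
  for i in range(len(embeddings)):
    for j in range(i + 1, len(embeddings)):
      if similarities[i, j] >= threshold:
        edges.append((i, j, float(similarities[i, j])))
  return edges

# Three toy 2-D "embeddings": the first two point in nearly the same
# direction, so only that pair survives the threshold.
toy = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
print(build_similarity_graph(toy, threshold=0.9))
```

Each surviving pair would then be written as two directed edges, which is why the graph built later in this tutorial is described in terms of bi-directional edges.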

In this tutorial, we integrate the above workflow in a TFX pipeline using several custom TFX components as well as a custom graph-regularized trainer component.

Below is the schematic for our TFX pipeline. Orange boxes represent off-the-shelf TFX components and pink boxes represent custom TFX components.

TFX Pipeline

Upgrade Pip

To avoid upgrading Pip in a system installation when running locally, we first check that we're running in Colab. Local systems can of course be upgraded separately.

import sys
if 'google.colab' in sys.modules:
  !pip install --upgrade pip

Install Required Packages

!pip install -q \
  tfx \
  neural-structured-learning \
  tensorflow-hub \
  tensorflow-datasets

Did you restart the runtime?

If you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages.

Dependencies and imports

import apache_beam as beam
import gzip as gzip_lib
import numpy as np
import os
import pprint
import shutil
import tempfile
import urllib
import uuid
pp = pprint.PrettyPrinter()

import tensorflow as tf
import neural_structured_learning as nsl

import tfx
from tfx.components.evaluator.component import Evaluator
from tfx.components.example_gen.import_example_gen.component import ImportExampleGen
from tfx.components.example_validator.component import ExampleValidator
from tfx.components.model_validator.component import ModelValidator
from tfx.components.pusher.component import Pusher
from tfx.components.schema_gen.component import SchemaGen
from tfx.components.statistics_gen.component import StatisticsGen
from tfx.components.trainer import executor as trainer_executor
from tfx.components.trainer.component import Trainer
from tfx.components.transform.component import Transform
from tfx.dsl.components.base import executor_spec
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import evaluator_pb2
from tfx.proto import example_gen_pb2
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2

from tfx.types import artifact
from tfx.types import artifact_utils
from tfx.types import channel
from tfx.types import standard_artifacts
from tfx.types.standard_artifacts import Examples

from tfx.dsl.component.experimental.annotations import InputArtifact
from tfx.dsl.component.experimental.annotations import OutputArtifact
from tfx.dsl.component.experimental.annotations import Parameter
from tfx.dsl.component.experimental.decorators import component

from tensorflow_metadata.proto.v0 import anomalies_pb2
from tensorflow_metadata.proto.v0 import schema_pb2
from tensorflow_metadata.proto.v0 import statistics_pb2

import tensorflow_data_validation as tfdv
import tensorflow_transform as tft
import tensorflow_model_analysis as tfma
import tensorflow_hub as hub
import tensorflow_datasets as tfds

print("TF Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print(
    "GPU is",
    "available" if tf.config.list_physical_devices("GPU") else "NOT AVAILABLE")
print("NSL Version: ", nsl.__version__)
print("TFX Version: ", tfx.__version__)
print("TFDV version: ", tfdv.__version__)
print("TFT version: ", tft.__version__)
print("TFMA version: ", tfma.__version__)
print("Hub version: ", hub.__version__)
print("Beam version: ", beam.__version__)
Using TensorFlow backend
TF Version:  2.13.1
Eager mode:  True
GPU is NOT AVAILABLE
NSL Version:  1.4.0
TFX Version:  1.14.0
TFDV version:  1.14.0
TFT version:  1.14.0
TFMA version:  0.45.0
Hub version:  0.13.0
Beam version:  2.50.0
2023-10-03 09:23:32.347312: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...

IMDB dataset

The IMDB dataset contains the text of 50,000 movie reviews from the Internet Movie Database. These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews. Moreover, there are 50,000 additional unlabeled movie reviews.

Download preprocessed IMDB dataset

The following code downloads the IMDB dataset (or uses a cached copy if it has already been downloaded) using TFDS. To speed up this notebook we will use only 10,000 labeled reviews and 10,000 unlabeled reviews for training, and 10,000 test reviews for evaluation.

train_set, eval_set = tfds.load(
    "imdb_reviews:1.0.0",
    split=["train[:10000]+unsupervised[:10000]", "test[:10000]"],
    shuffle_files=False)

Let's look at a few reviews from the training set:

for tfrecord in train_set.take(4):
  print("Review: {}".format(tfrecord["text"].numpy().decode("utf-8")[:300]))
  print("Label: {}\n".format(tfrecord["label"].numpy()))
Review: This was an absolutely terrible movie. Don't be lured in by Christopher Walken or Michael Ironside. Both are great actors, but this must simply be their worst role in history. Even their great acting could not redeem this movie's ridiculous storyline. This movie is an early nineties US propaganda pi
Label: 0

Review: I have been known to fall asleep during films, but this is usually due to a combination of things including, really tired, being warm and comfortable on the sette and having just eaten a lot. However on this occasion I fell asleep because the film was rubbish. The plot development was constant. Cons
Label: 0

Review: Mann photographs the Alberta Rocky Mountains in a superb fashion, and Jimmy Stewart and Walter Brennan give enjoyable performances as they always seem to do. <br /><br />But come on Hollywood - a Mountie telling the people of Dawson City, Yukon to elect themselves a marshal (yes a marshal!) and to e
Label: 0

Review: This is the kind of film for a snowy Sunday afternoon when the rest of the world can go ahead with its own business as you descend into a big arm-chair and mellow for a couple of hours. Wonderful performances from Cher and Nicolas Cage (as always) gently row the plot along. There are no rapids to cr
Label: 1

Next, let's serialize the train and eval sets to TFRecord files on disk so that they can later be ingested by the ImportExampleGen component:

def _dict_to_example(instance):
  """Decoded CSV to tf example."""
  feature = {}
  for key, value in instance.items():
    if value is None:
      feature[key] = tf.train.Feature()
    elif np.issubdtype(value.dtype, np.integer):
      feature[key] = tf.train.Feature(
          int64_list=tf.train.Int64List(value=value.tolist()))
    elif value.dtype == np.float32:
      feature[key] = tf.train.Feature(
          float_list=tf.train.FloatList(value=value.tolist()))
    else:
      feature[key] = tf.train.Feature(
          bytes_list=tf.train.BytesList(value=value.tolist()))
  return tf.train.Example(features=tf.train.Features(feature=feature))


examples_path = tempfile.mkdtemp(prefix="tfx-data")
train_path = os.path.join(examples_path, "train.tfrecord")
eval_path = os.path.join(examples_path, "eval.tfrecord")

for path, dataset in [(train_path, train_set), (eval_path, eval_set)]:
  with tf.io.TFRecordWriter(path) as writer:
    for example in dataset:
      writer.write(
          _dict_to_example({
              "label": np.array([example["label"].numpy()]),
              "text": np.array([example["text"].numpy()]),
          }).SerializeToString())

Run TFX Components Interactively

In the cells that follow you will construct TFX components and run each one interactively within the InteractiveContext to obtain ExecutionResult objects. This mirrors the process of an orchestrator running components in a TFX DAG based on when the dependencies for each component are met.

context = InteractiveContext()
WARNING:absl:InteractiveContext pipeline_root argument not provided: using temporary directory /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h as root for pipeline outputs.
WARNING:absl:InteractiveContext metadata_connection_config not provided: using SQLite ML Metadata database at /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/metadata.sqlite.

The ExampleGen Component

In any ML development process, the first step is to ingest the training and test datasets. The ExampleGen component brings data into the TFX pipeline.

Create an ExampleGen component and run it.

input_config = example_gen_pb2.Input(splits=[
    example_gen_pb2.Input.Split(name='train', pattern='train.tfrecord'),
    example_gen_pb2.Input.Split(name='eval', pattern='eval.tfrecord')
])

example_gen = ImportExampleGen(input_base=examples_path, input_config=input_config)

context.run(example_gen, enable_cache=True)
WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.
WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.
for artifact in example_gen.outputs['examples'].get():
  print(artifact)

print('\nexample_gen.outputs is a {}'.format(type(example_gen.outputs)))
print(example_gen.outputs)

print(example_gen.outputs['examples'].get()[0].split_names)
Artifact(artifact: id: 1
type_id: 14
uri: "/tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/ImportExampleGen/examples/1"
properties {
  key: "split_names"
  value {
    string_value: "[\"train\", \"eval\"]"
  }
}
custom_properties {
  key: "file_format"
  value {
    string_value: "tfrecords_gzip"
  }
}
custom_properties {
  key: "input_fingerprint"
  value {
    string_value: "split:train,num_files:1,total_bytes:27706811,xor_checksum:1696325021,sum_checksum:1696325021\nsplit:eval,num_files:1,total_bytes:13374744,xor_checksum:1696325024,sum_checksum:1696325024"
  }
}
custom_properties {
  key: "payload_format"
  value {
    string_value: "FORMAT_TF_EXAMPLE"
  }
}
custom_properties {
  key: "span"
  value {
    int_value: 0
  }
}
custom_properties {
  key: "tfx_version"
  value {
    string_value: "1.14.0"
  }
}
state: LIVE
, artifact_type: id: 14
name: "Examples"
properties {
  key: "span"
  value: INT
}
properties {
  key: "split_names"
  value: STRING
}
properties {
  key: "version"
  value: INT
}
base_type: DATASET
)

example_gen.outputs is a <class 'dict'>
{'examples': OutputChannel(artifact_type=Examples, producer_component_id=ImportExampleGen, output_key=examples, additional_properties={}, additional_custom_properties={}, _input_trigger=None}
["train", "eval"]

The component's outputs include 2 artifacts:

  • the training examples (10,000 labeled reviews + 10,000 unlabeled reviews)
  • the eval examples (10,000 labeled reviews)

The IdentifyExamples Custom Component

To use NSL, we will need each instance to have a unique ID. We create a custom component that adds such a unique ID to all instances across all splits, and we use Apache Beam so that it can easily scale to large datasets if needed.

def make_example_with_unique_id(example, id_feature_name):
  """Adds a unique ID to the given `tf.train.Example` proto.

  This function uses Python's 'uuid' module to generate a universally unique
  identifier for each example.

  Args:
    example: An instance of a `tf.train.Example` proto.
    id_feature_name: The name of the feature in the resulting `tf.train.Example`
      that will contain the unique identifier.

  Returns:
    A new `tf.train.Example` proto that includes a unique identifier as an
    additional feature.
  """
  result = tf.train.Example()
  result.CopyFrom(example)
  unique_id = uuid.uuid4()
  result.features.feature.get_or_create(
      id_feature_name).bytes_list.MergeFrom(
          tf.train.BytesList(value=[str(unique_id).encode('utf-8')]))
  return result


@component
def IdentifyExamples(orig_examples: InputArtifact[Examples],
                     identified_examples: OutputArtifact[Examples],
                     id_feature_name: Parameter[str],
                     component_name: Parameter[str]) -> None:

  # Get a list of the splits in input_data
  splits_list = artifact_utils.decode_split_names(
      split_names=orig_examples.split_names)
  # For completeness, encode the splits names and payload_format.
  # We could also just use input_data.split_names.
  identified_examples.split_names = artifact_utils.encode_split_names(
      splits=splits_list)
  # TODO(b/168616829): Remove populating payload_format after tfx 0.25.0.
  identified_examples.set_string_custom_property(
      "payload_format",
      orig_examples.get_string_custom_property("payload_format"))


  for split in splits_list:
    input_dir = artifact_utils.get_split_uri([orig_examples], split)
    output_dir = artifact_utils.get_split_uri([identified_examples], split)
    os.mkdir(output_dir)
    with beam.Pipeline() as pipeline:
      (pipeline
       | 'ReadExamples' >> beam.io.ReadFromTFRecord(
           os.path.join(input_dir, '*'),
           coder=beam.coders.coders.ProtoCoder(tf.train.Example))
       | 'AddUniqueId' >> beam.Map(make_example_with_unique_id, id_feature_name)
       | 'WriteIdentifiedExamples' >> beam.io.WriteToTFRecord(
           file_path_prefix=os.path.join(output_dir, 'data_tfrecord'),
           coder=beam.coders.coders.ProtoCoder(tf.train.Example),
           file_name_suffix='.gz'))

  return
identify_examples = IdentifyExamples(
    orig_examples=example_gen.outputs['examples'],
    component_name=u'IdentifyExamples',
    id_feature_name=u'id')
context.run(identify_examples, enable_cache=False)

The StatisticsGen Component

The StatisticsGen component computes descriptive statistics for your dataset. The statistics that it generates can be visualized for review, and are used for example validation and to infer a schema.

Create a StatisticsGen component and run it.

# Computes statistics over data for visualization and example validation.
statistics_gen = StatisticsGen(
    examples=identify_examples.outputs["identified_examples"])
context.run(statistics_gen, enable_cache=True)

The SchemaGen Component

The SchemaGen component generates a schema for your data based on the statistics from StatisticsGen. It tries to infer the data types of each of your features, and the ranges of legal values for categorical features.

Create a SchemaGen component and run it.

# Generates schema based on statistics files.
schema_gen = SchemaGen(
    statistics=statistics_gen.outputs['statistics'], infer_feature_shape=False)
context.run(schema_gen, enable_cache=True)

The generated artifact is just a schema.pbtxt containing a text representation of a schema_pb2.Schema protobuf:

train_uri = schema_gen.outputs['schema'].get()[0].uri
schema_filename = os.path.join(train_uri, 'schema.pbtxt')
schema = tfx.utils.io_utils.parse_pbtxt_file(
    file_name=schema_filename, message=schema_pb2.Schema())

It can be visualized using tfdv.display_schema() (we will look at this in more detail in a subsequent lab):

tfdv.display_schema(schema)

The ExampleValidator Component

The ExampleValidator performs anomaly detection, based on the statistics from StatisticsGen and the schema from SchemaGen. It looks for problems such as missing values, values of the wrong type, or categorical values outside of the domain of acceptable values.

Create an ExampleValidator component and run it.

# Performs anomaly detection based on statistics and data schema.
validate_stats = ExampleValidator(
    statistics=statistics_gen.outputs['statistics'],
    schema=schema_gen.outputs['schema'])
context.run(validate_stats, enable_cache=False)

The SynthesizeGraph Component

Graph construction involves creating embeddings for text samples and then using a similarity function to compare the embeddings.

We will use pretrained Swivel embeddings to create an embedding for each sample in the input, and store the resulting embeddings in TFRecord format along with each sample's ID. This is important because it will later allow us to match sample embeddings with the corresponding nodes in the graph.

Once we have the sample embeddings, we will use them to build a similarity graph, i.e., a graph whose nodes correspond to samples and whose edges correspond to the similarity between pairs of samples.

Neural Structured Learning provides a graph building library to build a graph based on sample embeddings. It uses cosine similarity as the similarity measure to compare embeddings and build edges between them. It also allows us to specify a similarity threshold, which can be used to discard dissimilar edges from the final graph. In the following example, using 0.99 as the similarity threshold, we end up with a graph that has 111,066 bi-directional edges.

swivel_url = 'https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1'
hub_layer = hub.KerasLayer(swivel_url, input_shape=[], dtype=tf.string)


def _bytes_feature(value):
  """Returns a bytes_list from a string / byte."""
  return tf.train.Feature(bytes_list=tf.train.BytesList(value=value))


def _float_feature(value):
  """Returns a float_list from a float / double."""
  return tf.train.Feature(float_list=tf.train.FloatList(value=value))


def create_embedding_example(example):
  """Create tf.Example containing the sample's embedding and its ID."""
  sentence_embedding = hub_layer(tf.sparse.to_dense(example['text']))

  # Flatten the sentence embedding back to 1-D.
  sentence_embedding = tf.reshape(sentence_embedding, shape=[-1])

  feature_dict = {
      'id': _bytes_feature(tf.sparse.to_dense(example['id']).numpy()),
      'embedding': _float_feature(sentence_embedding.numpy().tolist())
  }

  return tf.train.Example(features=tf.train.Features(feature=feature_dict))


def create_dataset(uri):
  tfrecord_filenames = [os.path.join(uri, name) for name in os.listdir(uri)]
  return tf.data.TFRecordDataset(tfrecord_filenames, compression_type='GZIP')


def create_embeddings(train_path, output_path):
  dataset = create_dataset(train_path)
  embeddings_path = os.path.join(output_path, 'embeddings.tfr')

  feature_map = {
      'label': tf.io.FixedLenFeature([], tf.int64),
      'id': tf.io.VarLenFeature(tf.string),
      'text': tf.io.VarLenFeature(tf.string)
  }

  with tf.io.TFRecordWriter(embeddings_path) as writer:
    for tfrecord in dataset:
      tensor_dict = tf.io.parse_single_example(tfrecord, feature_map)
      embedding_example = create_embedding_example(tensor_dict)
      writer.write(embedding_example.SerializeToString())


def build_graph(output_path, similarity_threshold):
  embeddings_path = os.path.join(output_path, 'embeddings.tfr')
  graph_path = os.path.join(output_path, 'graph.tsv')
  graph_builder_config = nsl.configs.GraphBuilderConfig(
      similarity_threshold=similarity_threshold,
      lsh_splits=32,
      lsh_rounds=15,
      random_seed=12345)
  nsl.tools.build_graph_from_config([embeddings_path], graph_path,
                                    graph_builder_config)
"""Custom Artifact type"""


class SynthesizedGraph(tfx.types.artifact.Artifact):
  """Output artifact of the SynthesizeGraph component"""
  TYPE_NAME = 'SynthesizedGraphPath'
  PROPERTIES = {
      'span': standard_artifacts.SPAN_PROPERTY,
      'split_names': standard_artifacts.SPLIT_NAMES_PROPERTY,
  }


@component
def SynthesizeGraph(identified_examples: InputArtifact[Examples],
                    synthesized_graph: OutputArtifact[SynthesizedGraph],
                    similarity_threshold: Parameter[float],
                    component_name: Parameter[str]) -> None:

  # Get a list of the splits in input_data
  splits_list = artifact_utils.decode_split_names(
      split_names=identified_examples.split_names)

  # We build a graph only based on the 'Split-train' split which includes both
  # labeled and unlabeled examples.
  train_input_examples_uri = os.path.join(identified_examples.uri,
                                          'Split-train')
  output_graph_uri = os.path.join(synthesized_graph.uri, 'Split-train')
  os.mkdir(output_graph_uri)

  print('Creating embeddings...')
  create_embeddings(train_input_examples_uri, output_graph_uri)

  print('Synthesizing graph...')
  build_graph(output_graph_uri, similarity_threshold)

  synthesized_graph.split_names = artifact_utils.encode_split_names(
      splits=['Split-train'])

  return
synthesize_graph = SynthesizeGraph(
    identified_examples=identify_examples.outputs['identified_examples'],
    component_name=u'SynthesizeGraph',
    similarity_threshold=0.99)
context.run(synthesize_graph, enable_cache=False)
Creating embeddings...
Synthesizing graph...
train_uri = synthesize_graph.outputs["synthesized_graph"].get()[0].uri
os.listdir(train_uri)
['Split-train']
graph_path = os.path.join(train_uri, "Split-train", "graph.tsv")
print("node 1\t\t\t\t\tnode 2\t\t\t\t\tsimilarity")
!head {graph_path}
print("...")
!tail {graph_path}
node 1                  node 2                  similarity
ae8231a5-eadc-4825-9478-9d13db0126ec    28ba6dd2-c01b-49a0-8905-916ca99d44f6    0.990838
28ba6dd2-c01b-49a0-8905-916ca99d44f6    ae8231a5-eadc-4825-9478-9d13db0126ec    0.990838
ae8231a5-eadc-4825-9478-9d13db0126ec    3643b674-575a-4f90-9d96-9374c4845c95    0.990184
3643b674-575a-4f90-9d96-9374c4845c95    ae8231a5-eadc-4825-9478-9d13db0126ec    0.990184
89efead7-a013-4490-8083-e652536aa0b7    e67005bb-5941-4007-9749-84c5bdbef3e2    0.990616
e67005bb-5941-4007-9749-84c5bdbef3e2    89efead7-a013-4490-8083-e652536aa0b7    0.990616
d75b6111-5eda-4b48-b9c9-1fd9320eb825    28ba6dd2-c01b-49a0-8905-916ca99d44f6    0.991234
28ba6dd2-c01b-49a0-8905-916ca99d44f6    d75b6111-5eda-4b48-b9c9-1fd9320eb825    0.991234
2e61ea88-f3f2-4ee6-8151-b40a75149a52    e67005bb-5941-4007-9749-84c5bdbef3e2    0.992586
e67005bb-5941-4007-9749-84c5bdbef3e2    2e61ea88-f3f2-4ee6-8151-b40a75149a52    0.992586
...
dcd0c93b-f6f3-4257-93f5-49c8a0e5e1ce    e1463486-38e8-455e-a3c6-94d7847d4c9e    0.990002
e1463486-38e8-455e-a3c6-94d7847d4c9e    dcd0c93b-f6f3-4257-93f5-49c8a0e5e1ce    0.990002
011037f2-63c3-4c35-b0c6-1b5682658d41    774e798e-0829-4a4a-9f25-240e65cfb4a8    0.991046
774e798e-0829-4a4a-9f25-240e65cfb4a8    011037f2-63c3-4c35-b0c6-1b5682658d41    0.991046
20cd431a-f113-4259-831a-39ce6561f44f    fc1b5a44-75fe-48da-a293-898657644e70    0.991198
fc1b5a44-75fe-48da-a293-898657644e70    20cd431a-f113-4259-831a-39ce6561f44f    0.991198
23cf9dc3-d865-4507-9ea2-d9383b310419    60ba35b5-eee6-492f-a705-973c182b99f3    0.990260
60ba35b5-eee6-492f-a705-973c182b99f3    23cf9dc3-d865-4507-9ea2-d9383b310419    0.990260
8908be08-babc-4f6c-84d6-276ef925311e    1bc94495-6292-4c09-aed0-54bd99cad329    0.991317
1bc94495-6292-4c09-aed0-54bd99cad329    8908be08-babc-4f6c-84d6-276ef925311e    0.991317
!wc -l {graph_path}
222132 /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/SynthesizeGraph/synthesized_graph/6/Split-train/graph.tsv

The Transform Component

The Transform component performs data transformations and feature engineering. The results include an input TensorFlow graph which is used during both training and serving to preprocess the data before training or inference. This graph becomes part of the SavedModel that is the result of model training. Since the same input graph is used for both training and serving, the preprocessing will always be the same, and only needs to be written once.

The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with. It requires code files to be available which define the processing needed.

Each sample will include the following three features:

  1. id: The node ID of the sample.
  2. text_xf: An int64 list containing word IDs.
  3. label_xf: A singleton int64 identifying the target class of the review: 0=negative, 1=positive.

Let's define a module containing the preprocessing_fn() function that we will pass to the Transform component:

_transform_module_file = 'imdb_transform.py'
%%writefile {_transform_module_file}

import tensorflow as tf

import tensorflow_transform as tft

SEQUENCE_LENGTH = 100
VOCAB_SIZE = 10000
OOV_SIZE = 100

def tokenize_reviews(reviews, sequence_length=SEQUENCE_LENGTH):
  reviews = tf.strings.lower(reviews)
  reviews = tf.strings.regex_replace(reviews, r" '| '|^'|'$", " ")
  reviews = tf.strings.regex_replace(reviews, "[^a-z' ]", " ")
  tokens = tf.strings.split(reviews)[:, :sequence_length]
  start_tokens = tf.fill([tf.shape(reviews)[0], 1], "<START>")
  end_tokens = tf.fill([tf.shape(reviews)[0], 1], "<END>")
  tokens = tf.concat([start_tokens, tokens, end_tokens], axis=1)
  tokens = tokens[:, :sequence_length]
  tokens = tokens.to_tensor(default_value="<PAD>")
  pad = sequence_length - tf.shape(tokens)[1]
  tokens = tf.pad(tokens, [[0, 0], [0, pad]], constant_values="<PAD>")
  return tf.reshape(tokens, [-1, sequence_length])

def preprocessing_fn(inputs):
  """tf.transform's callback function for preprocessing inputs.

  Args:
    inputs: map from feature keys to raw not-yet-transformed features.

  Returns:
    Map from string feature key to transformed feature operations.
  """
  outputs = {}
  outputs["id"] = inputs["id"]
  tokens = tokenize_reviews(_fill_in_missing(inputs["text"], ''))
  outputs["text_xf"] = tft.compute_and_apply_vocabulary(
      tokens,
      top_k=VOCAB_SIZE,
      num_oov_buckets=OOV_SIZE)
  outputs["label_xf"] = _fill_in_missing(inputs["label"], -1)
  return outputs

def _fill_in_missing(x, default_value):
  """Replace missing values in a SparseTensor.

  Fills in missing values of `x` with the default_value.

  Args:
    x: A `SparseTensor` of rank 2.  Its dense shape should have size at most 1
      in the second dimension.
    default_value: the value with which to replace the missing values.

  Returns:
    A rank 1 tensor where missing values of `x` have been filled in.
  """
  if not isinstance(x, tf.sparse.SparseTensor):
    return x
  return tf.squeeze(
      tf.sparse.to_dense(
          tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
          default_value),
      axis=1)
Writing imdb_transform.py

Create and run the Transform component, referring to the files that were created above.

# Performs transformations and feature engineering in training and serving.
transform = Transform(
    examples=identify_examples.outputs['identified_examples'],
    schema=schema_gen.outputs['schema'],
    module_file=_transform_module_file)
context.run(transform, enable_cache=True)
running bdist_wheel
running build
running build_py
creating build
creating build/lib
copying imdb_transform.py -> build/lib
installing to /tmpfs/tmp/tmp13lq3amq
running install
running install_lib
copying build/lib/imdb_transform.py -> /tmpfs/tmp/tmp13lq3amq
running install_egg_info
running egg_info
creating tfx_user_code_Transform.egg-info
writing tfx_user_code_Transform.egg-info/PKG-INFO
writing dependency_links to tfx_user_code_Transform.egg-info/dependency_links.txt
writing top-level names to tfx_user_code_Transform.egg-info/top_level.txt
writing manifest file 'tfx_user_code_Transform.egg-info/SOURCES.txt'
reading manifest file 'tfx_user_code_Transform.egg-info/SOURCES.txt'
writing manifest file 'tfx_user_code_Transform.egg-info/SOURCES.txt'
Copying tfx_user_code_Transform.egg-info to /tmpfs/tmp/tmp13lq3amq/tfx_user_code_Transform-0.0+074f608d1f54105225e2fee77ebe4b6159a009eca01b5a0791099840a2185d50-py3.9.egg-info
running install_scripts
creating /tmpfs/tmp/tmp13lq3amq/tfx_user_code_Transform-0.0+074f608d1f54105225e2fee77ebe4b6159a009eca01b5a0791099840a2185d50.dist-info/WHEEL
creating '/tmpfs/tmp/tmpp2kdnhtn/tfx_user_code_Transform-0.0+074f608d1f54105225e2fee77ebe4b6159a009eca01b5a0791099840a2185d50-py3-none-any.whl' and adding '/tmpfs/tmp/tmp13lq3amq' to it
adding 'imdb_transform.py'
adding 'tfx_user_code_Transform-0.0+074f608d1f54105225e2fee77ebe4b6159a009eca01b5a0791099840a2185d50.dist-info/METADATA'
adding 'tfx_user_code_Transform-0.0+074f608d1f54105225e2fee77ebe4b6159a009eca01b5a0791099840a2185d50.dist-info/WHEEL'
adding 'tfx_user_code_Transform-0.0+074f608d1f54105225e2fee77ebe4b6159a009eca01b5a0791099840a2185d50.dist-info/top_level.txt'
adding 'tfx_user_code_Transform-0.0+074f608d1f54105225e2fee77ebe4b6159a009eca01b5a0791099840a2185d50.dist-info/RECORD'
removing /tmpfs/tmp/tmp13lq3amq
/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
!!

        ********************************************************************************
        Please avoid running ``setup.py`` directly.
        Instead, use pypa/build, pypa/installer or other
        standards-based tools.

        See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
        ********************************************************************************

!!
  self.initialize_options()
Processing /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/_wheels/tfx_user_code_Transform-0.0+074f608d1f54105225e2fee77ebe4b6159a009eca01b5a0791099840a2185d50-py3-none-any.whl
Installing collected packages: tfx-user-code-Transform
Successfully installed tfx-user-code-Transform-0.0+074f608d1f54105225e2fee77ebe4b6159a009eca01b5a0791099840a2185d50
WARNING:absl:Tables initialized inside a tf.function  will be re-initialized on every invocation of the function. This  re-initialization can have significant impact on performance. Consider lifting  them out of the graph context using  `tf.init_scope`.: compute_and_apply_vocabulary/apply_vocab/text_file_init/InitializeTableFromTextFileV2
INFO:tensorflow:Assets written to: /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Transform/transform_graph/7/.temp_path/tftransform_tmp/b08d9a5b32234732af4d54c9bccd328f/assets
INFO:tensorflow:struct2tensor is not available.
INFO:tensorflow:tensorflow_decision_forests is not available.
INFO:tensorflow:tensorflow_text is not available.
INFO:tensorflow:Assets written to: /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Transform/transform_graph/7/.temp_path/tftransform_tmp/507898dab93747748a48d74d1a119ac2/assets

The Transform component has two types of outputs:

  • transform_graph is the graph that performs the preprocessing operations (this graph is included in the serving and evaluation models).
  • transformed_examples represents the preprocessed training and evaluation data.
transform.outputs
{'transform_graph': OutputChannel(artifact_type=TransformGraph, producer_component_id=Transform, output_key=transform_graph, additional_properties={}, additional_custom_properties={}, _input_trigger=None,
 'transformed_examples': OutputChannel(artifact_type=Examples, producer_component_id=Transform, output_key=transformed_examples, additional_properties={}, additional_custom_properties={}, _input_trigger=None,
 'updated_analyzer_cache': OutputChannel(artifact_type=TransformCache, producer_component_id=Transform, output_key=updated_analyzer_cache, additional_properties={}, additional_custom_properties={}, _input_trigger=None,
 'pre_transform_schema': OutputChannel(artifact_type=Schema, producer_component_id=Transform, output_key=pre_transform_schema, additional_properties={}, additional_custom_properties={}, _input_trigger=None,
 'pre_transform_stats': OutputChannel(artifact_type=ExampleStatistics, producer_component_id=Transform, output_key=pre_transform_stats, additional_properties={}, additional_custom_properties={}, _input_trigger=None,
 'post_transform_schema': OutputChannel(artifact_type=Schema, producer_component_id=Transform, output_key=post_transform_schema, additional_properties={}, additional_custom_properties={}, _input_trigger=None,
 'post_transform_stats': OutputChannel(artifact_type=ExampleStatistics, producer_component_id=Transform, output_key=post_transform_stats, additional_properties={}, additional_custom_properties={}, _input_trigger=None,
 'post_transform_anomalies': OutputChannel(artifact_type=ExampleAnomalies, producer_component_id=Transform, output_key=post_transform_anomalies, additional_properties={}, additional_custom_properties={}, _input_trigger=None}

Take a peek at the transform_graph artifact: it points to a directory containing 3 subdirectories:

train_uri = transform.outputs['transform_graph'].get()[0].uri
os.listdir(train_uri)
['transform_fn', 'metadata', 'transformed_metadata']

The transform_fn subdirectory contains the actual preprocessing graph. The metadata subdirectory contains the schema of the original data. The transformed_metadata subdirectory contains the schema of the preprocessed data.

Take a look at some of the transformed examples and check that they are indeed processed as intended.

def pprint_examples(artifact, n_examples=3):
  """Pretty-prints the first `n_examples` train-split records of an artifact."""
  print("artifact:", artifact)
  uri = os.path.join(artifact.uri, "Split-train")
  print("uri:", uri)
  tfrecord_filenames = [os.path.join(uri, name) for name in os.listdir(uri)]
  print("tfrecord_filenames:", tfrecord_filenames)
  dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
  for tfrecord in dataset.take(n_examples):
    serialized_example = tfrecord.numpy()
    example = tf.train.Example.FromString(serialized_example)
    pp.pprint(example)
pprint_examples(transform.outputs['transformed_examples'].get()[0])
artifact: Artifact(artifact: id: 8
type_id: 14
uri: "/tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Transform/transformed_examples/7"
properties {
  key: "split_names"
  value {
    string_value: "[\"eval\", \"train\"]"
  }
}
custom_properties {
  key: "name"
  value {
    string_value: "transformed_examples:2023-10-03T09:25:30.032573"
  }
}
custom_properties {
  key: "producer_component"
  value {
    string_value: "Transform"
  }
}
custom_properties {
  key: "tfx_version"
  value {
    string_value: "1.14.0"
  }
}
state: LIVE
name: "transformed_examples:2023-10-03T09:25:30.032573"
, artifact_type: id: 14
name: "Examples"
properties {
  key: "span"
  value: INT
}
properties {
  key: "split_names"
  value: STRING
}
properties {
  key: "version"
  value: INT
}
base_type: DATASET
)
uri: /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Transform/transformed_examples/7/Split-train
tfrecord_filenames: ['/tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Transform/transformed_examples/7/Split-train/transformed_examples-00000-of-00001.gz']
features {
  feature {
    key: "id"
    value {
      bytes_list {
        value: "08838f43-e3c7-466e-8dee-88f92f4a4cb6"
      }
    }
  }
  feature {
    key: "label_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "text_xf"
    value {
      int64_list {
        value: 13
        value: 8
        value: 14
        value: 32
        value: 338
        value: 310
        value: 15
        value: 95
        value: 27
        value: 10001
        value: 9
        value: 31
        value: 1173
        value: 3153
        value: 43
        value: 495
        value: 10060
        value: 214
        value: 26
        value: 71
        value: 142
        value: 19
        value: 8
        value: 204
        value: 339
        value: 27
        value: 74
        value: 181
        value: 238
        value: 9
        value: 440
        value: 67
        value: 74
        value: 71
        value: 94
        value: 100
        value: 22
        value: 5442
        value: 8
        value: 1573
        value: 607
        value: 530
        value: 8
        value: 15
        value: 6
        value: 32
        value: 378
        value: 6292
        value: 207
        value: 2276
        value: 388
        value: 0
        value: 84
        value: 1023
        value: 154
        value: 65
        value: 155
        value: 52
        value: 0
        value: 10080
        value: 7871
        value: 65
        value: 250
        value: 74
        value: 3202
        value: 20
        value: 10000
        value: 3720
        value: 10020
        value: 10008
        value: 1282
        value: 3862
        value: 3
        value: 53
        value: 3952
        value: 110
        value: 1879
        value: 17
        value: 3153
        value: 14
        value: 166
        value: 19
        value: 2
        value: 1023
        value: 1007
        value: 9405
        value: 9
        value: 2
        value: 15
        value: 12
        value: 14
        value: 4504
        value: 4
        value: 109
        value: 158
        value: 1202
        value: 7
        value: 174
        value: 505
        value: 12
      }
    }
  }
}

features {
  feature {
    key: "id"
    value {
      bytes_list {
        value: "3ddb63c5-a004-4d51-ae3c-cb1d83d21a4b"
      }
    }
  }
  feature {
    key: "label_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "text_xf"
    value {
      int64_list {
        value: 13
        value: 7
        value: 23
        value: 75
        value: 494
        value: 5
        value: 748
        value: 2155
        value: 307
        value: 91
        value: 19
        value: 8
        value: 6
        value: 499
        value: 763
        value: 5
        value: 2
        value: 1690
        value: 4
        value: 200
        value: 593
        value: 57
        value: 1244
        value: 120
        value: 2364
        value: 3
        value: 4407
        value: 21
        value: 0
        value: 10081
        value: 3
        value: 263
        value: 42
        value: 6947
        value: 2
        value: 169
        value: 185
        value: 21
        value: 8
        value: 5143
        value: 7
        value: 1339
        value: 2155
        value: 81
        value: 0
        value: 18
        value: 14
        value: 1468
        value: 0
        value: 86
        value: 986
        value: 14
        value: 2259
        value: 1790
        value: 562
        value: 3
        value: 284
        value: 200
        value: 401
        value: 5
        value: 668
        value: 19
        value: 17
        value: 58
        value: 1934
        value: 4
        value: 45
        value: 14
        value: 4212
        value: 113
        value: 43
        value: 135
        value: 7
        value: 753
        value: 7
        value: 224
        value: 23
        value: 1155
        value: 179
        value: 4
        value: 0
        value: 18
        value: 19
        value: 7
        value: 191
        value: 0
        value: 2047
        value: 4
        value: 10
        value: 3
        value: 283
        value: 42
        value: 401
        value: 5
        value: 668
        value: 4
        value: 90
        value: 234
        value: 10023
        value: 227
      }
    }
  }
}

features {
  feature {
    key: "id"
    value {
      bytes_list {
        value: "beb45217-b3be-4183-a095-99e72610317e"
      }
    }
  }
  feature {
    key: "label_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "text_xf"
    value {
      int64_list {
        value: 13
        value: 4577
        value: 7158
        value: 0
        value: 10047
        value: 3778
        value: 3346
        value: 9
        value: 2
        value: 758
        value: 1915
        value: 3
        value: 2280
        value: 1511
        value: 3
        value: 2003
        value: 10020
        value: 225
        value: 786
        value: 382
        value: 16
        value: 39
        value: 203
        value: 361
        value: 5
        value: 93
        value: 11
        value: 11
        value: 19
        value: 220
        value: 21
        value: 341
        value: 2
        value: 10000
        value: 966
        value: 0
        value: 77
        value: 4
        value: 6677
        value: 464
        value: 10071
        value: 5
        value: 10042
        value: 630
        value: 2
        value: 10044
        value: 404
        value: 2
        value: 10044
        value: 3
        value: 5
        value: 10008
        value: 0
        value: 1259
        value: 630
        value: 106
        value: 10042
        value: 6721
        value: 10
        value: 49
        value: 21
        value: 0
        value: 2071
        value: 20
        value: 1292
        value: 4
        value: 0
        value: 431
        value: 11
        value: 11
        value: 166
        value: 67
        value: 2342
        value: 5815
        value: 12
        value: 575
        value: 21
        value: 0
        value: 1691
        value: 537
        value: 4
        value: 0
        value: 3605
        value: 307
        value: 0
        value: 10054
        value: 1563
        value: 3115
        value: 467
        value: 4577
        value: 3
        value: 1069
        value: 1158
        value: 5
        value: 23
        value: 4279
        value: 6677
        value: 464
        value: 20
        value: 10004
      }
    }
  }
}

The GraphAugmentation Component

Since we have the sample features and the synthesized graph, we can generate the augmented training data for Neural Structured Learning. The NSL framework provides a library to combine the graph and the sample features to produce the final training data for graph regularization. The resulting training data will include original sample features as well as features of their corresponding neighbors.

In this tutorial, we consider undirected edges and use a maximum of 3 neighbors per sample to augment training data with graph neighbors.
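To make the structure of the augmented examples concrete before running the component, here is a minimal pure-Python sketch of the neighbor-feature naming scheme that `nsl.tools.pack_nbrs` produces: each of up to `max_nbrs` neighbors contributes its features under an `NL_nbr_<i>_` prefix plus an `NL_nbr_<i>_weight` edge weight, and `NL_num_nbrs` records how many neighbors were packed. The `merge_neighbors` helper and the sample data are illustrative, not part of the NSL library:

```python
# Illustrative sketch of the neighbor-feature naming scheme used by
# nsl.tools.pack_nbrs (helper and data are hypothetical, for exposition only).
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'

def merge_neighbors(sample, neighbors, max_nbrs=3):
  """Packs up to `max_nbrs` (features, weight) neighbor pairs into `sample`."""
  augmented = dict(sample)
  kept = neighbors[:max_nbrs]
  for i, (nbr_features, weight) in enumerate(kept):
    prefix = '{}{}_'.format(NBR_FEATURE_PREFIX, i)
    for name, value in nbr_features.items():
      # Neighbor features are copied under e.g. 'NL_nbr_0_text_xf'.
      augmented[prefix + name] = value
    # The edge weight becomes e.g. 'NL_nbr_0_weight'.
    augmented[prefix.rstrip('_') + NBR_WEIGHT_SUFFIX] = weight
  augmented['NL_num_nbrs'] = len(kept)
  return augmented

sample = {'id': 'a', 'text_xf': [13, 8, 14]}
neighbors = [({'id': 'b', 'text_xf': [13, 7, 23]}, 0.9)]
augmented = merge_neighbors(sample, neighbors)
```

Samples with no graph neighbors, like the ones shown in the output below, simply carry `NL_num_nbrs = 0` and no neighbor features.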

def split_train_and_unsup(input_uri):
  """Separates the labeled and unlabeled examples in the given split."""

  tmp_dir = tempfile.mkdtemp(prefix='tfx-data')
  tfrecord_filenames = [
      os.path.join(input_uri, filename) for filename in os.listdir(input_uri)
  ]
  train_path = os.path.join(tmp_dir, 'train.tfrecord')
  unsup_path = os.path.join(tmp_dir, 'unsup.tfrecord')
  with tf.io.TFRecordWriter(train_path) as train_writer, \
       tf.io.TFRecordWriter(unsup_path) as unsup_writer:
    for tfrecord in tf.data.TFRecordDataset(
        tfrecord_filenames, compression_type='GZIP'):
      example = tf.train.Example()
      example.ParseFromString(tfrecord.numpy())
      # Examples with a missing label or a label of -1 are unlabeled.
      if ('label_xf' not in example.features.feature or
          example.features.feature['label_xf'].int64_list.value[0] == -1):
        writer = unsup_writer
      else:
        writer = train_writer
      writer.write(tfrecord.numpy())
  return train_path, unsup_path


def gzip(filepath):
  """Gzips the given file in place and removes the uncompressed original."""
  with open(filepath, 'rb') as f_in:
    with gzip_lib.open(filepath + '.gz', 'wb') as f_out:
      shutil.copyfileobj(f_in, f_out)
  os.remove(filepath)


def copy_tfrecords(input_uri, output_uri):
  """Copies all TFRecord files from input_uri to output_uri."""
  for filename in os.listdir(input_uri):
    input_filename = os.path.join(input_uri, filename)
    output_filename = os.path.join(output_uri, filename)
    shutil.copyfile(input_filename, output_filename)


@component
def GraphAugmentation(identified_examples: InputArtifact[Examples],
                      synthesized_graph: InputArtifact[SynthesizedGraph],
                      augmented_examples: OutputArtifact[Examples],
                      num_neighbors: Parameter[int],
                      component_name: Parameter[str]) -> None:

  # Get a list of the splits in input_data
  splits_list = artifact_utils.decode_split_names(
      split_names=identified_examples.split_names)

  train_input_uri = os.path.join(identified_examples.uri, 'Split-train')
  eval_input_uri = os.path.join(identified_examples.uri, 'Split-eval')
  train_graph_uri = os.path.join(synthesized_graph.uri, 'Split-train')
  train_output_uri = os.path.join(augmented_examples.uri, 'Split-train')
  eval_output_uri = os.path.join(augmented_examples.uri, 'Split-eval')

  os.mkdir(train_output_uri)
  os.mkdir(eval_output_uri)

  # Separate the labeled and unlabeled examples from the 'Split-train' split.
  train_path, unsup_path = split_train_and_unsup(train_input_uri)

  output_path = os.path.join(train_output_uri, 'nsl_train_data.tfr')
  pack_nbrs_args = dict(
      labeled_examples_path=train_path,
      unlabeled_examples_path=unsup_path,
      graph_path=os.path.join(train_graph_uri, 'graph.tsv'),
      output_training_data_path=output_path,
      add_undirected_edges=True,
      max_nbrs=num_neighbors)
  print('nsl.tools.pack_nbrs arguments:', pack_nbrs_args)
  nsl.tools.pack_nbrs(**pack_nbrs_args)

  # Downstream components expect gzip'ed TFRecords.
  gzip(output_path)

  # The test examples are left untouched and are simply copied over.
  copy_tfrecords(eval_input_uri, eval_output_uri)

  augmented_examples.split_names = identified_examples.split_names
# Augments training data with graph neighbors.
graph_augmentation = GraphAugmentation(
    identified_examples=transform.outputs['transformed_examples'],
    synthesized_graph=synthesize_graph.outputs['synthesized_graph'],
    component_name='GraphAugmentation',
    num_neighbors=3)
context.run(graph_augmentation, enable_cache=False)
nsl.tools.pack_nbrs arguments: {'labeled_examples_path': '/tmpfs/tmp/tfx-datao31m1a90/train.tfrecord', 'unlabeled_examples_path': '/tmpfs/tmp/tfx-datao31m1a90/unsup.tfrecord', 'graph_path': '/tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/SynthesizeGraph/synthesized_graph/6/Split-train/graph.tsv', 'output_training_data_path': '/tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/GraphAugmentation/augmented_examples/8/Split-train/nsl_train_data.tfr', 'add_undirected_edges': True, 'max_nbrs': 3}
pprint_examples(graph_augmentation.outputs['augmented_examples'].get()[0], 6)
artifact: Artifact(artifact: id: 15
type_id: 14
uri: "/tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/GraphAugmentation/augmented_examples/8"
properties {
  key: "split_names"
  value {
    string_value: "[\"eval\", \"train\"]"
  }
}
custom_properties {
  key: "name"
  value {
    string_value: "augmented_examples:2023-10-03T09:26:02.739138"
  }
}
custom_properties {
  key: "producer_component"
  value {
    string_value: "GraphAugmentation"
  }
}
name: "augmented_examples:2023-10-03T09:26:02.739138"
, artifact_type: id: 14
name: "Examples"
properties {
  key: "span"
  value: INT
}
properties {
  key: "split_names"
  value: STRING
}
properties {
  key: "version"
  value: INT
}
base_type: DATASET
)
uri: /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/GraphAugmentation/augmented_examples/8/Split-train
tfrecord_filenames: ['/tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/GraphAugmentation/augmented_examples/8/Split-train/nsl_train_data.tfr.gz']
features {
  feature {
    key: "NL_num_nbrs"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "id"
    value {
      bytes_list {
        value: "08838f43-e3c7-466e-8dee-88f92f4a4cb6"
      }
    }
  }
  feature {
    key: "label_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "text_xf"
    value {
      int64_list {
        value: 13
        value: 8
        value: 14
        value: 32
        value: 338
        value: 310
        value: 15
        value: 95
        value: 27
        value: 10001
        value: 9
        value: 31
        value: 1173
        value: 3153
        value: 43
        value: 495
        value: 10060
        value: 214
        value: 26
        value: 71
        value: 142
        value: 19
        value: 8
        value: 204
        value: 339
        value: 27
        value: 74
        value: 181
        value: 238
        value: 9
        value: 440
        value: 67
        value: 74
        value: 71
        value: 94
        value: 100
        value: 22
        value: 5442
        value: 8
        value: 1573
        value: 607
        value: 530
        value: 8
        value: 15
        value: 6
        value: 32
        value: 378
        value: 6292
        value: 207
        value: 2276
        value: 388
        value: 0
        value: 84
        value: 1023
        value: 154
        value: 65
        value: 155
        value: 52
        value: 0
        value: 10080
        value: 7871
        value: 65
        value: 250
        value: 74
        value: 3202
        value: 20
        value: 10000
        value: 3720
        value: 10020
        value: 10008
        value: 1282
        value: 3862
        value: 3
        value: 53
        value: 3952
        value: 110
        value: 1879
        value: 17
        value: 3153
        value: 14
        value: 166
        value: 19
        value: 2
        value: 1023
        value: 1007
        value: 9405
        value: 9
        value: 2
        value: 15
        value: 12
        value: 14
        value: 4504
        value: 4
        value: 109
        value: 158
        value: 1202
        value: 7
        value: 174
        value: 505
        value: 12
      }
    }
  }
}

features {
  feature {
    key: "NL_num_nbrs"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "id"
    value {
      bytes_list {
        value: "3ddb63c5-a004-4d51-ae3c-cb1d83d21a4b"
      }
    }
  }
  feature {
    key: "label_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "text_xf"
    value {
      int64_list {
        value: 13
        value: 7
        value: 23
        value: 75
        value: 494
        value: 5
        value: 748
        value: 2155
        value: 307
        value: 91
        value: 19
        value: 8
        value: 6
        value: 499
        value: 763
        value: 5
        value: 2
        value: 1690
        value: 4
        value: 200
        value: 593
        value: 57
        value: 1244
        value: 120
        value: 2364
        value: 3
        value: 4407
        value: 21
        value: 0
        value: 10081
        value: 3
        value: 263
        value: 42
        value: 6947
        value: 2
        value: 169
        value: 185
        value: 21
        value: 8
        value: 5143
        value: 7
        value: 1339
        value: 2155
        value: 81
        value: 0
        value: 18
        value: 14
        value: 1468
        value: 0
        value: 86
        value: 986
        value: 14
        value: 2259
        value: 1790
        value: 562
        value: 3
        value: 284
        value: 200
        value: 401
        value: 5
        value: 668
        value: 19
        value: 17
        value: 58
        value: 1934
        value: 4
        value: 45
        value: 14
        value: 4212
        value: 113
        value: 43
        value: 135
        value: 7
        value: 753
        value: 7
        value: 224
        value: 23
        value: 1155
        value: 179
        value: 4
        value: 0
        value: 18
        value: 19
        value: 7
        value: 191
        value: 0
        value: 2047
        value: 4
        value: 10
        value: 3
        value: 283
        value: 42
        value: 401
        value: 5
        value: 668
        value: 4
        value: 90
        value: 234
        value: 10023
        value: 227
      }
    }
  }
}

features {
  feature {
    key: "NL_num_nbrs"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "id"
    value {
      bytes_list {
        value: "beb45217-b3be-4183-a095-99e72610317e"
      }
    }
  }
  feature {
    key: "label_xf"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "text_xf"
    value {
      int64_list {
        value: 13
        value: 4577
        value: 7158
        value: 0
        value: 10047
        value: 3778
        value: 3346
        value: 9
        value: 2
        value: 758
        value: 1915
        value: 3
        value: 2280
        value: 1511
        value: 3
        value: 2003
        value: 10020
        value: 225
        value: 786
        value: 382
        value: 16
        value: 39
        value: 203
        value: 361
        value: 5
        value: 93
        value: 11
        value: 11
        value: 19
        value: 220
        value: 21
        value: 341
        value: 2
        value: 10000
        value: 966
        value: 0
        value: 77
        value: 4
        value: 6677
        value: 464
        value: 10071
        value: 5
        value: 10042
        value: 630
        value: 2
        value: 10044
        value: 404
        value: 2
        value: 10044
        value: 3
        value: 5
        value: 10008
        value: 0
        value: 1259
        value: 630
        value: 106
        value: 10042
        value: 6721
        value: 10
        value: 49
        value: 21
        value: 0
        value: 2071
        value: 20
        value: 1292
        value: 4
        value: 0
        value: 431
        value: 11
        value: 11
        value: 166
        value: 67
        value: 2342
        value: 5815
        value: 12
        value: 575
        value: 21
        value: 0
        value: 1691
        value: 537
        value: 4
        value: 0
        value: 3605
        value: 307
        value: 0
        value: 10054
        value: 1563
        value: 3115
        value: 467
        value: 4577
        value: 3
        value: 1069
        value: 1158
        value: 5
        value: 23
        value: 4279
        value: 6677
        value: 464
        value: 20
        value: 10004
      }
    }
  }
}

features {
  feature {
    key: "NL_num_nbrs"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "id"
    value {
      bytes_list {
        value: "a2ef1885-c46b-4b3a-9b2d-9d12d5f60345"
      }
    }
  }
  feature {
    key: "label_xf"
    value {
      int64_list {
        value: 1
      }
    }
  }
  feature {
    key: "text_xf"
    value {
      int64_list {
        value: 13
        value: 8
        value: 6
        value: 0
        value: 251
        value: 4
        value: 18
        value: 20
        value: 2
        value: 6783
        value: 2295
        value: 2338
        value: 52
        value: 0
        value: 468
        value: 4
        value: 0
        value: 189
        value: 73
        value: 153
        value: 1294
        value: 17
        value: 90
        value: 234
        value: 935
        value: 16
        value: 25
        value: 10024
        value: 92
        value: 2
        value: 192
        value: 4218
        value: 3317
        value: 3
        value: 10098
        value: 20
        value: 2
        value: 356
        value: 4
        value: 565
        value: 334
        value: 382
        value: 36
        value: 6989
        value: 3
        value: 6065
        value: 2510
        value: 16
        value: 203
        value: 7264
        value: 2849
        value: 0
        value: 86
        value: 346
        value: 50
        value: 26
        value: 58
        value: 10020
        value: 5
        value: 1464
        value: 58
        value: 2081
        value: 2969
        value: 42
        value: 2
        value: 2364
        value: 3
        value: 1402
        value: 10062
        value: 138
        value: 147
        value: 614
        value: 115
        value: 29
        value: 90
        value: 105
        value: 2
        value: 223
        value: 18
        value: 9
        value: 160
        value: 324
        value: 3
        value: 24
        value: 12
        value: 1252
        value: 0
        value: 2142
        value: 10
        value: 1832
        value: 111
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
      }
    }
  }
}

features {
  feature {
    key: "NL_num_nbrs"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "id"
    value {
      bytes_list {
        value: "47b26402-2483-49b2-bcd6-242bb2445cf2"
      }
    }
  }
  feature {
    key: "label_xf"
    value {
      int64_list {
        value: 1
      }
    }
  }
  feature {
    key: "text_xf"
    value {
      int64_list {
        value: 13
        value: 16
        value: 423
        value: 23
        value: 1367
        value: 30
        value: 0
        value: 363
        value: 12
        value: 153
        value: 3174
        value: 9
        value: 8
        value: 18
        value: 26
        value: 667
        value: 338
        value: 1372
        value: 0
        value: 86
        value: 46
        value: 9200
        value: 282
        value: 0
        value: 10091
        value: 4
        value: 0
        value: 694
        value: 10028
        value: 52
        value: 362
        value: 26
        value: 202
        value: 39
        value: 216
        value: 5
        value: 27
        value: 5822
        value: 19
        value: 52
        value: 58
        value: 362
        value: 26
        value: 202
        value: 39
        value: 474
        value: 0
        value: 10029
        value: 4
        value: 2
        value: 243
        value: 143
        value: 386
        value: 3
        value: 0
        value: 386
        value: 579
        value: 2
        value: 132
        value: 57
        value: 725
        value: 88
        value: 140
        value: 30
        value: 27
        value: 33
        value: 1359
        value: 29
        value: 8
        value: 567
        value: 35
        value: 106
        value: 230
        value: 60
        value: 0
        value: 3041
        value: 5
        value: 7879
        value: 28
        value: 281
        value: 110
        value: 111
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
        value: 1
      }
    }
  }
}

features {
  feature {
    key: "NL_num_nbrs"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "id"
    value {
      bytes_list {
        value: "2bb5ba04-e450-4eb7-9a65-6ba0c1981c98"
      }
    }
  }
  feature {
    key: "label_xf"
    value {
      int64_list {
        value: 1
      }
    }
  }
  feature {
    key: "text_xf"
    value {
      int64_list {
        value: 13
        value: 8
        value: 6
        value: 2
        value: 18
        value: 69
        value: 140
        value: 27
        value: 83
        value: 31
        value: 1877
        value: 905
        value: 9
        value: 10057
        value: 31
        value: 43
        value: 2115
        value: 36
        value: 32
        value: 2057
        value: 6133
        value: 10
        value: 6
        value: 32
        value: 2474
        value: 1614
        value: 3
        value: 2707
        value: 990
        value: 4
        value: 10067
        value: 9
        value: 2
        value: 1532
        value: 242
        value: 90
        value: 3757
        value: 3
        value: 90
        value: 10026
        value: 0
        value: 242
        value: 6
        value: 260
        value: 31
        value: 24
        value: 4
        value: 0
        value: 84
        value: 497
        value: 177
        value: 1151
        value: 777
        value: 9
        value: 397
        value: 552
        value: 7726
        value: 10051
        value: 34
        value: 14
        value: 379
        value: 33
        value: 1829
        value: 9
        value: 123
        value: 0
        value: 916
        value: 10028
        value: 7
        value: 64
        value: 571
        value: 12
        value: 8
        value: 18
        value: 27
        value: 687
        value: 9
        value: 30
        value: 5609
        value: 16
        value: 25
        value: 99
        value: 117
        value: 66
        value: 2
        value: 130
        value: 21
        value: 8
        value: 842
        value: 7726
        value: 10051
        value: 6
        value: 338
        value: 1107
        value: 3
        value: 24
        value: 10020
        value: 29
        value: 53
        value: 1476
      }
    }
  }
}

The Trainer Component

The Trainer component trains models using TensorFlow.

Create a Python module containing a trainer_fn function, which must return an estimator. If you prefer creating a Keras model, you can do so and then convert it to an estimator using keras.model_to_estimator().
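Before diving into the full module, here is a plain-Python sketch of the contract trainer_fn must satisfy: it returns a dict with four specific keys. The string values below are placeholders; in the real module they are a tf.estimator.Estimator, a TrainSpec, an EvalSpec, and an eval input receiver callable.

```python
# Schematic of the dict that trainer_fn must return to TFX. The string
# arguments here are stand-ins for the real Estimator/spec objects.
def make_trainer_fn_result(estimator, train_spec, eval_spec,
                           eval_input_receiver_fn):
    return {
        'estimator': estimator,
        'train_spec': train_spec,
        'eval_spec': eval_spec,
        'eval_input_receiver_fn': eval_input_receiver_fn,
    }

result = make_trainer_fn_result('<estimator>', '<train_spec>', '<eval_spec>',
                                '<receiver_fn>')
assert set(result) == {
    'estimator', 'train_spec', 'eval_spec', 'eval_input_receiver_fn'}
```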

# Setup paths.
_trainer_module_file = 'imdb_trainer.py'
%%writefile {_trainer_module_file}

import neural_structured_learning as nsl

import tensorflow as tf

import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils


NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'
LABEL_KEY = 'label'
ID_FEATURE_KEY = 'id'

def _transformed_name(key):
  return key + '_xf'


def _transformed_names(keys):
  return [_transformed_name(key) for key in keys]


# Hyperparameters:
#
# We will use an instance of `HParams` to include various hyperparameters and
# constants used for training and evaluation. We briefly describe each of them
# below:
#
# -   max_seq_length: This is the maximum number of words considered from each
#                     movie review in this example.
# -   vocab_size: This is the size of the vocabulary considered for this
#                 example.
# -   oov_size: This is the out-of-vocabulary size considered for this example.
# -   distance_type: This is the distance metric used to regularize the sample
#                    with its neighbors.
# -   graph_regularization_multiplier: This controls the relative weight of the
#                                      graph regularization term in the overall
#                                      loss function.
# -   num_neighbors: The number of neighbors used for graph regularization. This
#                    value has to be less than or equal to the `num_neighbors`
#                    argument used above in the GraphAugmentation component when
#                    invoking `nsl.tools.pack_nbrs`.
# -   num_fc_units: The number of units in the fully connected layer of the
#                   neural network.
class HParams(object):
  """Hyperparameters used for training."""
  def __init__(self):
    ### dataset parameters
    # The following 3 values should match those defined in the Transform
    # Component.
    self.max_seq_length = 100
    self.vocab_size = 10000
    self.oov_size = 100
    ### Neural Graph Learning parameters
    self.distance_type = nsl.configs.DistanceType.L2
    self.graph_regularization_multiplier = 0.1
    # The following value has to be at most the value of 'num_neighbors' used
    # in the GraphAugmentation component.
    self.num_neighbors = 1
    ### Model Architecture
    self.num_embedding_dims = 16
    self.num_fc_units = 64

HPARAMS = HParams()


def optimizer_fn():
  """Returns an instance of `tf.Optimizer`."""
  return tf.compat.v1.train.RMSPropOptimizer(
    learning_rate=0.0001, decay=1e-6)


def build_train_op(loss, global_step):
  """Builds a train op to optimize the given loss using gradient descent."""
  with tf.name_scope('train'):
    optimizer = optimizer_fn()
    train_op = optimizer.minimize(loss=loss, global_step=global_step)
  return train_op


# Building the model:
#
# A neural network is created by stacking layers—this requires two main
# architectural decisions:
# * How many layers to use in the model?
# * How many *hidden units* to use for each layer?
#
# In this example, the input data consists of an array of word-indices. The
# labels to predict are either 0 or 1. We will use a feed-forward neural network
# as our base model in this tutorial.
def feed_forward_model(features, is_training, reuse=tf.compat.v1.AUTO_REUSE):
  """Builds a simple 2 layer feed forward neural network.

  The layers are effectively stacked sequentially to build the classifier. The
  first layer is an Embedding layer, which takes the integer-encoded vocabulary
  and looks up the embedding vector for each word-index. These vectors are
  learned as the model trains. The vectors add a dimension to the output array.
  The resulting dimensions are: (batch, sequence, embedding). Next is a global
  average pooling 1D layer, which reduces the dimensionality of its inputs from
  3D to 2D. This fixed-length output vector is piped through a fully-connected
  (Dense) layer with HPARAMS.num_fc_units hidden units. The last layer is
  densely connected with a single output node. Using the sigmoid activation
  function, this value is a float between 0 and 1, representing a probability,
  or confidence level.

  Args:
    features: A dictionary containing batch features returned from the
      `input_fn`, that include sample features, corresponding neighbor features,
      and neighbor weights.
    is_training: a Python Boolean value or a Boolean scalar Tensor, indicating
      whether to apply dropout.
    reuse: a Python Boolean value for reusing variable scope.

  Returns:
    logits: Tensor of shape [batch_size, 1].
    representations: Tensor of shape [batch_size, _] for graph regularization.
      This is the representation of each example at the graph regularization
      layer.
  """

  with tf.compat.v1.variable_scope('ff', reuse=reuse):
    inputs = features[_transformed_name('text')]
    embeddings = tf.compat.v1.get_variable(
        'embeddings',
        shape=[
            HPARAMS.vocab_size + HPARAMS.oov_size, HPARAMS.num_embedding_dims
        ])
    embedding_layer = tf.nn.embedding_lookup(embeddings, inputs)

    pooling_layer = tf.compat.v1.layers.AveragePooling1D(
        pool_size=HPARAMS.max_seq_length, strides=HPARAMS.max_seq_length)(
            embedding_layer)
    # Shape of pooling_layer is now [batch_size, 1, HPARAMS.num_embedding_dims]
    pooling_layer = tf.reshape(pooling_layer, [-1, HPARAMS.num_embedding_dims])

    dense_layer = tf.compat.v1.layers.Dense(
        HPARAMS.num_fc_units, activation='relu')(
            pooling_layer)

    output_layer = tf.compat.v1.layers.Dense(
        1, activation='sigmoid')(
            dense_layer)

    # Graph regularization will be done on the penultimate (dense) layer
    # because the output layer is a single floating point number.
    return output_layer, dense_layer


# A note on hidden units:
#
# The above model has two intermediate or "hidden" layers, between the input and
# output, and excluding the Embedding layer. The number of outputs (units,
# nodes, or neurons) is the dimension of the representational space for the
# layer. In other words, the amount of freedom the network is allowed when
# learning an internal representation. If a model has more hidden units
# (a higher-dimensional representation space), and/or more layers, then the
# network can learn more complex representations. However, it makes the network
# more computationally expensive and may lead to learning unwanted
# patterns—patterns that improve performance on training data but not on the
# test data. This is called overfitting.


# This function will be used to generate the embeddings for samples and their
# corresponding neighbors, which will then be used for graph regularization.
def embedding_fn(features, mode, **params):
  """Returns the embedding corresponding to the given features.

  Args:
    features: A dictionary containing batch features returned from the
      `input_fn`, that include sample features, corresponding neighbor features,
      and neighbor weights.
    mode: Specifies if this is training, evaluation, or prediction. See
      tf.estimator.ModeKeys.

  Returns:
    The embedding that will be used for graph regularization.
  """
  is_training = (mode == tf.estimator.ModeKeys.TRAIN)
  _, embedding = feed_forward_model(features, is_training)
  return embedding


def feed_forward_model_fn(features, labels, mode, params, config):
  """Implementation of the model_fn for the base feed-forward model.

  Args:
    features: This is the first item returned from the `input_fn` passed to
      `train`, `evaluate`, and `predict`. This should be a single `Tensor` or
      `dict` of same.
    labels: This is the second item returned from the `input_fn` passed to
      `train`, `evaluate`, and `predict`. This should be a single `Tensor` or
      `dict` of same (for multi-head models). If mode is `ModeKeys.PREDICT`,
      `labels=None` will be passed. If the `model_fn`'s signature does not
      accept `mode`, the `model_fn` must still be able to handle `labels=None`.
    mode: Optional. Specifies if this is training, evaluation, or prediction. See
      `ModeKeys`.
    params: An HParams instance as returned by get_hyper_parameters().
    config: Optional configuration object. Will receive what is passed to
      Estimator in `config` parameter, or the default `config`. Allows updating
      things in your model_fn based on configuration such as `num_ps_replicas`,
      or `model_dir`. Unused currently.

  Returns:
     A `tf.estimator.EstimatorSpec` for the base feed-forward model. This does
     not include graph-based regularization.
  """

  is_training = mode == tf.estimator.ModeKeys.TRAIN

  # Build the computation graph.
  probabilities, _ = feed_forward_model(features, is_training)
  predictions = tf.round(probabilities)

  if mode == tf.estimator.ModeKeys.PREDICT:
    # labels will be None, and there is no loss to compute.
    cross_entropy_loss = None
    eval_metric_ops = None
  else:
    # Loss is required in train and eval modes.
    # Flatten 'probabilities' to 1-D.
    probabilities = tf.reshape(probabilities, shape=[-1])
    cross_entropy_loss = tf.compat.v1.keras.losses.binary_crossentropy(
        labels, probabilities)
    eval_metric_ops = {
        'accuracy': tf.compat.v1.metrics.accuracy(labels, predictions)
    }

  if is_training:
    global_step = tf.compat.v1.train.get_or_create_global_step()
    train_op = build_train_op(cross_entropy_loss, global_step)
  else:
    train_op = None

  return tf.estimator.EstimatorSpec(
      mode=mode,
      predictions={
          'probabilities': probabilities,
          'predictions': predictions
      },
      loss=cross_entropy_loss,
      train_op=train_op,
      eval_metric_ops=eval_metric_ops)


# tf.Transform considers these features as "raw".
def _get_raw_feature_spec(schema):
  return schema_utils.schema_as_feature_spec(schema).feature_spec


def _gzip_reader_fn(filenames):
  """Small utility returning a record reader that can read gzip'ed files."""
  return tf.data.TFRecordDataset(
      filenames,
      compression_type='GZIP')


def _example_serving_receiver_fn(tf_transform_output, schema):
  """Build the serving in inputs.

  Args:
    tf_transform_output: A TFTransformOutput.
    schema: the schema of the input data.

  Returns:
    Tensorflow graph which parses examples, applying tf-transform to them.
  """
  raw_feature_spec = _get_raw_feature_spec(schema)
  raw_feature_spec.pop(LABEL_KEY)

  # We don't need the ID feature for serving.
  raw_feature_spec.pop(ID_FEATURE_KEY)

  raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
      raw_feature_spec, default_batch_size=None)
  serving_input_receiver = raw_input_fn()

  transformed_features = tf_transform_output.transform_raw_features(
      serving_input_receiver.features)

  # Even though LABEL_KEY was removed from 'raw_feature_spec', the transform
  # operation would have injected the transformed LABEL_KEY feature with a
  # default value.
  transformed_features.pop(_transformed_name(LABEL_KEY))
  return tf.estimator.export.ServingInputReceiver(
      transformed_features, serving_input_receiver.receiver_tensors)


def _eval_input_receiver_fn(tf_transform_output, schema):
  """Build everything needed for the tf-model-analysis to run the model.

  Args:
    tf_transform_output: A TFTransformOutput.
    schema: the schema of the input data.

  Returns:
    EvalInputReceiver function, which contains:
      - Tensorflow graph which parses raw untransformed features, applies the
        tf-transform preprocessing operators.
      - Set of raw, untransformed features.
      - Label against which predictions will be compared.
  """
  # Notice that the inputs are raw features, not transformed features here.
  raw_feature_spec = _get_raw_feature_spec(schema)

  # We don't need the ID feature for TFMA.
  raw_feature_spec.pop(ID_FEATURE_KEY)

  raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
      raw_feature_spec, default_batch_size=None)
  serving_input_receiver = raw_input_fn()

  transformed_features = tf_transform_output.transform_raw_features(
      serving_input_receiver.features)

  labels = transformed_features.pop(_transformed_name(LABEL_KEY))
  return tfma.export.EvalInputReceiver(
      features=transformed_features,
      receiver_tensors=serving_input_receiver.receiver_tensors,
      labels=labels)


def _augment_feature_spec(feature_spec, num_neighbors):
  """Augments `feature_spec` to include neighbor features.
    Args:
      feature_spec: Dictionary of feature keys mapping to TF feature types.
      num_neighbors: Number of neighbors to use for feature key augmentation.
    Returns:
      An augmented `feature_spec` that includes neighbor feature keys.
  """
  for i in range(num_neighbors):
    feature_spec['{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'id')] = \
        tf.io.VarLenFeature(dtype=tf.string)
    # We don't care about the neighbor features corresponding to
    # _transformed_name(LABEL_KEY) because the LABEL_KEY feature will be
    # removed from the feature spec during training/evaluation.
    feature_spec['{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'text_xf')] = \
        tf.io.FixedLenFeature(shape=[HPARAMS.max_seq_length], dtype=tf.int64,
                              default_value=tf.constant(0, dtype=tf.int64,
                                                        shape=[HPARAMS.max_seq_length]))
    # The 'NL_num_nbrs' feature is currently not used.

  # Set the neighbor weight feature keys.
  for i in range(num_neighbors):
    feature_spec['{}{}{}'.format(NBR_FEATURE_PREFIX, i, NBR_WEIGHT_SUFFIX)] = \
        tf.io.FixedLenFeature(shape=[1], dtype=tf.float32, default_value=[0.0])

  return feature_spec


def _input_fn(filenames, tf_transform_output, is_training, batch_size=200):
  """Generates features and labels for training or evaluation.

  Args:
    filenames: [str] list of gzipped TFRecord files to read data from.
    tf_transform_output: A TFTransformOutput.
    is_training: Boolean indicating if we are in training mode.
    batch_size: int First dimension size of the Tensors returned by input_fn

  Returns:
    A (features, indices) tuple where features is a dictionary of
      Tensors, and indices is a single Tensor of label indices.
  """
  transformed_feature_spec = (
      tf_transform_output.transformed_feature_spec().copy())

  # During training, NSL uses augmented training data (which includes features
  # from graph neighbors). So, update the feature spec accordingly. This needs
  # to be done because we are using different schemas for NSL training and eval,
  # but the Trainer Component only accepts a single schema.
  if is_training:
    transformed_feature_spec = _augment_feature_spec(transformed_feature_spec,
                                                     HPARAMS.num_neighbors)

  dataset = tf.data.experimental.make_batched_features_dataset(
      filenames, batch_size, transformed_feature_spec, reader=_gzip_reader_fn)

  transformed_features = tf.compat.v1.data.make_one_shot_iterator(
      dataset).get_next()
  # We pop the label because we do not want to use it as a feature while we're
  # training.
  return transformed_features, transformed_features.pop(
      _transformed_name(LABEL_KEY))


# TFX will call this function
def trainer_fn(hparams, schema):
  """Build the estimator using the high level API.
  Args:
    hparams: Holds hyperparameters used to train the model as name/value pairs.
    schema: Holds the schema of the training examples.
  Returns:
    A dict of the following:
      - estimator: The estimator that will be used for training and eval.
      - train_spec: Spec for training.
      - eval_spec: Spec for eval.
      - eval_input_receiver_fn: Input function for eval.
  """
  train_batch_size = 40
  eval_batch_size = 40

  tf_transform_output = tft.TFTransformOutput(hparams.transform_output)

  train_input_fn = lambda: _input_fn(
      hparams.train_files,
      tf_transform_output,
      is_training=True,
      batch_size=train_batch_size)

  eval_input_fn = lambda: _input_fn(
      hparams.eval_files,
      tf_transform_output,
      is_training=False,
      batch_size=eval_batch_size)

  train_spec = tf.estimator.TrainSpec(
      train_input_fn,
      max_steps=hparams.train_steps)

  serving_receiver_fn = lambda: _example_serving_receiver_fn(
      tf_transform_output, schema)

  exporter = tf.estimator.FinalExporter('imdb', serving_receiver_fn)
  eval_spec = tf.estimator.EvalSpec(
      eval_input_fn,
      steps=hparams.eval_steps,
      exporters=[exporter],
      name='imdb-eval')

  run_config = tf.estimator.RunConfig(
      save_checkpoints_steps=999, keep_checkpoint_max=1)

  run_config = run_config.replace(model_dir=hparams.serving_model_dir)

  estimator = tf.estimator.Estimator(
      model_fn=feed_forward_model_fn, config=run_config, params=HPARAMS)

  # Create a graph regularization config.
  graph_reg_config = nsl.configs.make_graph_reg_config(
      max_neighbors=HPARAMS.num_neighbors,
      multiplier=HPARAMS.graph_regularization_multiplier,
      distance_type=HPARAMS.distance_type,
      sum_over_axis=-1)

  # Invoke the Graph Regularization Estimator wrapper to incorporate
  # graph-based regularization for training.
  graph_nsl_estimator = nsl.estimator.add_graph_regularization(
      estimator,
      embedding_fn,
      optimizer_fn=optimizer_fn,
      graph_reg_config=graph_reg_config)

  # Create an input receiver for TFMA processing
  receiver_fn = lambda: _eval_input_receiver_fn(
      tf_transform_output, schema)

  return {
      'estimator': graph_nsl_estimator,
      'train_spec': train_spec,
      'eval_spec': eval_spec,
      'eval_input_receiver_fn': receiver_fn
  }
Writing imdb_trainer.py
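The _augment_feature_spec function in the module above relies on NSL's neighbor feature naming convention: NL_nbr_<i>_<feature> for per-neighbor features and NL_nbr_<i>_weight for edge weights. A standalone illustration of the key names it generates:

```python
# Standalone illustration of the neighbor feature keys that
# _augment_feature_spec adds, following NSL's naming convention.
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'

def neighbor_feature_keys(num_neighbors):
    keys = []
    for i in range(num_neighbors):
        # Per-neighbor id and transformed-text features.
        keys.append('{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'id'))
        keys.append('{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'text_xf'))
        # Per-neighbor edge weight.
        keys.append('{}{}{}'.format(NBR_FEATURE_PREFIX, i, NBR_WEIGHT_SUFFIX))
    return keys

print(neighbor_feature_keys(1))
# ['NL_nbr_0_id', 'NL_nbr_0_text_xf', 'NL_nbr_0_weight']
```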

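Conceptually, the objective that add_graph_regularization builds adds a weighted neighbor-distance penalty to the supervised loss, scaled by graph_regularization_multiplier. The sketch below is a minimal numeric illustration with squared L2 distance, not the NSL implementation; all embedding values and the supervised loss are made-up numbers.

```python
def squared_l2(u, v):
    # Squared L2 distance between two embedding vectors.
    return sum((a - b) ** 2 for a, b in zip(u, v))

def graph_regularized_loss(supervised_loss, sample_emb, nbr_embs, nbr_weights,
                           multiplier=0.1):
    # Weighted neighbor-distance penalty, scaled by the
    # graph_regularization_multiplier hyperparameter.
    reg = sum(w * squared_l2(sample_emb, e)
              for e, w in zip(nbr_embs, nbr_weights))
    return supervised_loss + multiplier * reg

loss = graph_regularized_loss(
    supervised_loss=0.5,
    sample_emb=[1.0, 0.0],
    nbr_embs=[[0.0, 0.0]],
    nbr_weights=[1.0])
# 0.5 + 0.1 * 1.0 = 0.6
```

Samples with no neighbors (like the test examples above with NL_num_nbrs of 0) contribute no regularization term and fall back to the plain supervised loss.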
Create and run the Trainer component, passing it the file that we created above.

# Uses user-provided Python function that implements a model using TensorFlow's
# Estimators API.
trainer = Trainer(
    module_file=_trainer_module_file,
    custom_executor_spec=executor_spec.ExecutorClassSpec(
        trainer_executor.Executor),
    transformed_examples=graph_augmentation.outputs['augmented_examples'],
    schema=schema_gen.outputs['schema'],
    transform_graph=transform.outputs['transform_graph'],
    train_args=trainer_pb2.TrainArgs(num_steps=10000),
    eval_args=trainer_pb2.EvalArgs(num_steps=5000))
context.run(trainer)
WARNING:absl:`custom_executor_spec` is deprecated. Please customize component directly.
WARNING:absl:`transformed_examples` is deprecated. Please use `examples` instead.
/tmpfs/src/tf_docs_env/lib/python3.9/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
!!

        ********************************************************************************
        Please avoid running ``setup.py`` directly.
        Instead, use pypa/build, pypa/installer or other
        standards-based tools.

        See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
        ********************************************************************************

!!
  self.initialize_options()
WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE
WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE
WARNING:absl:Examples artifact does not have payload_format custom property. Falling back to FORMAT_TF_EXAMPLE
running bdist_wheel
running build
running build_py
creating build
creating build/lib
copying imdb_transform.py -> build/lib
copying imdb_trainer.py -> build/lib
installing to /tmpfs/tmp/tmpyffreimu
running install
running install_lib
copying build/lib/imdb_transform.py -> /tmpfs/tmp/tmpyffreimu
copying build/lib/imdb_trainer.py -> /tmpfs/tmp/tmpyffreimu
running install_egg_info
running egg_info
creating tfx_user_code_Trainer.egg-info
writing tfx_user_code_Trainer.egg-info/PKG-INFO
writing dependency_links to tfx_user_code_Trainer.egg-info/dependency_links.txt
writing top-level names to tfx_user_code_Trainer.egg-info/top_level.txt
writing manifest file 'tfx_user_code_Trainer.egg-info/SOURCES.txt'
reading manifest file 'tfx_user_code_Trainer.egg-info/SOURCES.txt'
writing manifest file 'tfx_user_code_Trainer.egg-info/SOURCES.txt'
Copying tfx_user_code_Trainer.egg-info to /tmpfs/tmp/tmpyffreimu/tfx_user_code_Trainer-0.0+02c7b97b194ad02bcc21de6184519fc73d0572a63c6c5af25fcac6f33b1320f9-py3.9.egg-info
running install_scripts
creating /tmpfs/tmp/tmpyffreimu/tfx_user_code_Trainer-0.0+02c7b97b194ad02bcc21de6184519fc73d0572a63c6c5af25fcac6f33b1320f9.dist-info/WHEEL
creating '/tmpfs/tmp/tmpg85qjhd7/tfx_user_code_Trainer-0.0+02c7b97b194ad02bcc21de6184519fc73d0572a63c6c5af25fcac6f33b1320f9-py3-none-any.whl' and adding '/tmpfs/tmp/tmpyffreimu' to it
adding 'imdb_trainer.py'
adding 'imdb_transform.py'
adding 'tfx_user_code_Trainer-0.0+02c7b97b194ad02bcc21de6184519fc73d0572a63c6c5af25fcac6f33b1320f9.dist-info/METADATA'
adding 'tfx_user_code_Trainer-0.0+02c7b97b194ad02bcc21de6184519fc73d0572a63c6c5af25fcac6f33b1320f9.dist-info/WHEEL'
adding 'tfx_user_code_Trainer-0.0+02c7b97b194ad02bcc21de6184519fc73d0572a63c6c5af25fcac6f33b1320f9.dist-info/top_level.txt'
adding 'tfx_user_code_Trainer-0.0+02c7b97b194ad02bcc21de6184519fc73d0572a63c6c5af25fcac6f33b1320f9.dist-info/RECORD'
removing /tmpfs/tmp/tmpyffreimu
Processing /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/_wheels/tfx_user_code_Trainer-0.0+02c7b97b194ad02bcc21de6184519fc73d0572a63c6c5af25fcac6f33b1320f9-py3-none-any.whl
Installing collected packages: tfx-user-code-Trainer
Successfully installed tfx-user-code-Trainer-0.0+02c7b97b194ad02bcc21de6184519fc73d0572a63c6c5af25fcac6f33b1320f9
WARNING:tensorflow:From /tmpfs/src/temp/docs/tutorials/tfx/imdb_trainer.py:415: TrainSpec.__new__ (from tensorflow_estimator.python.estimator.training) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/temp/docs/tutorials/tfx/imdb_trainer.py:422: FinalExporter.__init__ (from tensorflow_estimator.python.estimator.exporter) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/temp/docs/tutorials/tfx/imdb_trainer.py:423: EvalSpec.__new__ (from tensorflow_estimator.python.estimator.training) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/temp/docs/tutorials/tfx/imdb_trainer.py:429: RunConfig.__init__ (from tensorflow_estimator.python.estimator.run_config) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/temp/docs/tutorials/tfx/imdb_trainer.py:434: Estimator.__init__ (from tensorflow_estimator.python.estimator.estimator) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
INFO:tensorflow:Using config: {'_model_dir': '/tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 999, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 1, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tfx/components/trainer/executor.py:270: train_and_evaluate (from tensorflow_estimator.python.estimator.training) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 999 or save_checkpoints_secs None.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/estimator.py:385: StopAtStepHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/data/experimental/ops/readers.py:1086: parse_example_dataset (from tensorflow.python.data.experimental.ops.parsing_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map(tf.io.parse_example(...))` instead.
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/rmsprop.py:188: calling Ones.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
WARNING:tensorflow:From /tmpfs/src/temp/docs/tutorials/tfx/imdb_trainer.py:234: EstimatorSpec.__new__ (from tensorflow_estimator.python.estimator.model_fn) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
INFO:tensorflow:Done calling model_fn.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/estimator.py:1416: NanTensorHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/estimator.py:1419: LoggingTensorHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/estimator.py:1419: LoggingTensorHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/basic_session_run_hooks.py:232: SecondOrStepTimer.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/basic_session_run_hooks.py:232: SecondOrStepTimer.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/estimator.py:1456: CheckpointSaverHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/estimator.py:1456: CheckpointSaverHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Create CheckpointSaverHook.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/monitored_session.py:579: StepCounterHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/monitored_session.py:579: StepCounterHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/monitored_session.py:586: SummarySaverHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/monitored_session.py:586: SummarySaverHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
2023-10-03 09:26:15.155046: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 0...
INFO:tensorflow:Saving checkpoints for 0 into /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt.
INFO:tensorflow:Saving checkpoints for 0 into /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 0...
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/monitored_session.py:1455: SessionRunArgs.__new__ (from tensorflow.python.training.session_run_hook) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/monitored_session.py:1455: SessionRunArgs.__new__ (from tensorflow.python.training.session_run_hook) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/monitored_session.py:1454: SessionRunContext.__init__ (from tensorflow.python.training.session_run_hook) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/monitored_session.py:1454: SessionRunContext.__init__ (from tensorflow.python.training.session_run_hook) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/monitored_session.py:1474: SessionRunValues.__new__ (from tensorflow.python.training.session_run_hook) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/monitored_session.py:1474: SessionRunValues.__new__ (from tensorflow.python.training.session_run_hook) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
INFO:tensorflow:loss = 0.6929594, step = 0
INFO:tensorflow:global_step/sec: 370.467
INFO:tensorflow:loss = 0.69297975, step = 100 (0.272 sec)
INFO:tensorflow:global_step/sec: 543.053
INFO:tensorflow:loss = 0.6928556, step = 200 (0.184 sec)
INFO:tensorflow:global_step/sec: 533.653
INFO:tensorflow:loss = 0.69149274, step = 300 (0.187 sec)
INFO:tensorflow:global_step/sec: 531.715
INFO:tensorflow:loss = 0.69160056, step = 400 (0.188 sec)
INFO:tensorflow:global_step/sec: 537.568
INFO:tensorflow:loss = 0.6899355, step = 500 (0.186 sec)
INFO:tensorflow:global_step/sec: 533.688
INFO:tensorflow:loss = 0.68892, step = 600 (0.187 sec)
INFO:tensorflow:global_step/sec: 537.297
INFO:tensorflow:loss = 0.68980354, step = 700 (0.186 sec)
INFO:tensorflow:global_step/sec: 540.112
INFO:tensorflow:loss = 0.68673426, step = 800 (0.185 sec)
INFO:tensorflow:global_step/sec: 531.991
INFO:tensorflow:loss = 0.68761533, step = 900 (0.188 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 999...
INFO:tensorflow:Saving checkpoints for 999 into /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/saver.py:1067: remove_checkpoint (from tensorflow.python.checkpoint.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to delete files with this prefix.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 999...
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2023-10-03T09:26:18
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/training/evaluation.py:260: FinalOpsHook.__init__ (from tensorflow.python.training.basic_session_run_hooks) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt-999
2023-10-03 09:26:18.459897: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [500/5000]
INFO:tensorflow:Evaluation [1000/5000]
INFO:tensorflow:Evaluation [1500/5000]
INFO:tensorflow:Evaluation [2000/5000]
INFO:tensorflow:Evaluation [2500/5000]
INFO:tensorflow:Evaluation [3000/5000]
INFO:tensorflow:Evaluation [3500/5000]
INFO:tensorflow:Evaluation [4000/5000]
INFO:tensorflow:Evaluation [4500/5000]
INFO:tensorflow:Evaluation [5000/5000]
INFO:tensorflow:Inference Time : 3.59425s
INFO:tensorflow:Finished evaluation at 2023-10-03-09:26:21
INFO:tensorflow:Saving dict for global step 999: accuracy = 0.6945, global_step = 999, loss = 0.6857754
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 999: /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt-999
INFO:tensorflow:global_step/sec: 22.2246
INFO:tensorflow:loss = 0.68417, step = 1000 (4.499 sec)
INFO:tensorflow:global_step/sec: 538.486
INFO:tensorflow:loss = 0.679789, step = 1100 (0.186 sec)
INFO:tensorflow:global_step/sec: 536.872
INFO:tensorflow:loss = 0.682135, step = 1200 (0.186 sec)
INFO:tensorflow:global_step/sec: 536.734
INFO:tensorflow:loss = 0.6827911, step = 1300 (0.186 sec)
INFO:tensorflow:global_step/sec: 540.148
INFO:tensorflow:loss = 0.67769843, step = 1400 (0.185 sec)
INFO:tensorflow:global_step/sec: 523.65
INFO:tensorflow:loss = 0.68047726, step = 1500 (0.191 sec)
INFO:tensorflow:global_step/sec: 514.706
INFO:tensorflow:loss = 0.6761591, step = 1600 (0.194 sec)
INFO:tensorflow:global_step/sec: 517.518
INFO:tensorflow:loss = 0.66884255, step = 1700 (0.193 sec)
INFO:tensorflow:global_step/sec: 516.205
INFO:tensorflow:loss = 0.6756726, step = 1800 (0.194 sec)
INFO:tensorflow:global_step/sec: 511.831
INFO:tensorflow:loss = 0.67606705, step = 1900 (0.195 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 1998...
INFO:tensorflow:Saving checkpoints for 1998 into /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 1998...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 369.952
INFO:tensorflow:loss = 0.6597465, step = 2000 (0.270 sec)
INFO:tensorflow:global_step/sec: 521.671
INFO:tensorflow:loss = 0.6411843, step = 2100 (0.192 sec)
INFO:tensorflow:global_step/sec: 516.496
INFO:tensorflow:loss = 0.655653, step = 2200 (0.194 sec)
INFO:tensorflow:global_step/sec: 517.785
INFO:tensorflow:loss = 0.6621307, step = 2300 (0.193 sec)
INFO:tensorflow:global_step/sec: 516.297
INFO:tensorflow:loss = 0.6749119, step = 2400 (0.194 sec)
INFO:tensorflow:global_step/sec: 511.959
INFO:tensorflow:loss = 0.6331712, step = 2500 (0.195 sec)
INFO:tensorflow:global_step/sec: 519.783
INFO:tensorflow:loss = 0.6319319, step = 2600 (0.193 sec)
INFO:tensorflow:global_step/sec: 517.062
INFO:tensorflow:loss = 0.6549273, step = 2700 (0.193 sec)
INFO:tensorflow:global_step/sec: 508.55
INFO:tensorflow:loss = 0.65253586, step = 2800 (0.197 sec)
INFO:tensorflow:global_step/sec: 512.278
INFO:tensorflow:loss = 0.63001615, step = 2900 (0.195 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 2997...
INFO:tensorflow:Saving checkpoints for 2997 into /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 2997...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 371.22
INFO:tensorflow:loss = 0.63928545, step = 3000 (0.269 sec)
INFO:tensorflow:global_step/sec: 516.706
INFO:tensorflow:loss = 0.6127278, step = 3100 (0.194 sec)
INFO:tensorflow:global_step/sec: 513.946
INFO:tensorflow:loss = 0.63194007, step = 3200 (0.195 sec)
INFO:tensorflow:global_step/sec: 514.666
INFO:tensorflow:loss = 0.6160414, step = 3300 (0.194 sec)
INFO:tensorflow:global_step/sec: 513.369
INFO:tensorflow:loss = 0.5860639, step = 3400 (0.195 sec)
INFO:tensorflow:global_step/sec: 513.528
INFO:tensorflow:loss = 0.5981359, step = 3500 (0.195 sec)
INFO:tensorflow:global_step/sec: 516.849
INFO:tensorflow:loss = 0.581832, step = 3600 (0.193 sec)
INFO:tensorflow:global_step/sec: 509.7
INFO:tensorflow:loss = 0.64935255, step = 3700 (0.196 sec)
INFO:tensorflow:global_step/sec: 515.131
INFO:tensorflow:loss = 0.5611487, step = 3800 (0.194 sec)
INFO:tensorflow:global_step/sec: 512.688
INFO:tensorflow:loss = 0.5940226, step = 3900 (0.195 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 3996...
INFO:tensorflow:Saving checkpoints for 3996 into /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 3996...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 370.593
INFO:tensorflow:loss = 0.5423057, step = 4000 (0.270 sec)
INFO:tensorflow:global_step/sec: 518.983
INFO:tensorflow:loss = 0.5866491, step = 4100 (0.193 sec)
INFO:tensorflow:global_step/sec: 517.79
INFO:tensorflow:loss = 0.56725276, step = 4200 (0.193 sec)
INFO:tensorflow:global_step/sec: 512.509
INFO:tensorflow:loss = 0.5773095, step = 4300 (0.195 sec)
INFO:tensorflow:global_step/sec: 513.759
INFO:tensorflow:loss = 0.5879232, step = 4400 (0.195 sec)
INFO:tensorflow:global_step/sec: 521.342
INFO:tensorflow:loss = 0.5910369, step = 4500 (0.192 sec)
INFO:tensorflow:global_step/sec: 516.112
INFO:tensorflow:loss = 0.5953424, step = 4600 (0.194 sec)
INFO:tensorflow:global_step/sec: 507.621
INFO:tensorflow:loss = 0.59464574, step = 4700 (0.197 sec)
INFO:tensorflow:global_step/sec: 515.965
INFO:tensorflow:loss = 0.4649554, step = 4800 (0.194 sec)
INFO:tensorflow:global_step/sec: 508.783
INFO:tensorflow:loss = 0.5143791, step = 4900 (0.196 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 4995...
INFO:tensorflow:Saving checkpoints for 4995 into /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 4995...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 365.612
INFO:tensorflow:loss = 0.54242855, step = 5000 (0.273 sec)
INFO:tensorflow:global_step/sec: 513.658
INFO:tensorflow:loss = 0.4834629, step = 5100 (0.195 sec)
INFO:tensorflow:global_step/sec: 513.881
INFO:tensorflow:loss = 0.54823285, step = 5200 (0.195 sec)
INFO:tensorflow:global_step/sec: 515.229
INFO:tensorflow:loss = 0.46824694, step = 5300 (0.194 sec)
INFO:tensorflow:global_step/sec: 515.501
INFO:tensorflow:loss = 0.46970585, step = 5400 (0.194 sec)
INFO:tensorflow:global_step/sec: 516.658
INFO:tensorflow:loss = 0.43227288, step = 5500 (0.193 sec)
INFO:tensorflow:global_step/sec: 516.442
INFO:tensorflow:loss = 0.5547464, step = 5600 (0.194 sec)
INFO:tensorflow:global_step/sec: 509.793
INFO:tensorflow:loss = 0.49726588, step = 5700 (0.196 sec)
INFO:tensorflow:global_step/sec: 513.164
INFO:tensorflow:loss = 0.51757884, step = 5800 (0.195 sec)
INFO:tensorflow:global_step/sec: 514.376
INFO:tensorflow:loss = 0.52039796, step = 5900 (0.194 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 5994...
INFO:tensorflow:Saving checkpoints for 5994 into /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 5994...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 365.081
INFO:tensorflow:loss = 0.5557078, step = 6000 (0.274 sec)
INFO:tensorflow:global_step/sec: 502.385
INFO:tensorflow:loss = 0.5062954, step = 6100 (0.199 sec)
INFO:tensorflow:global_step/sec: 512.63
INFO:tensorflow:loss = 0.43108582, step = 6200 (0.195 sec)
INFO:tensorflow:global_step/sec: 512.216
INFO:tensorflow:loss = 0.4391087, step = 6300 (0.195 sec)
INFO:tensorflow:global_step/sec: 506.191
INFO:tensorflow:loss = 0.555188, step = 6400 (0.198 sec)
INFO:tensorflow:global_step/sec: 509.642
INFO:tensorflow:loss = 0.49000007, step = 6500 (0.196 sec)
INFO:tensorflow:global_step/sec: 515.116
INFO:tensorflow:loss = 0.5439405, step = 6600 (0.194 sec)
INFO:tensorflow:global_step/sec: 511.379
INFO:tensorflow:loss = 0.40515867, step = 6700 (0.195 sec)
INFO:tensorflow:global_step/sec: 516.46
INFO:tensorflow:loss = 0.44222274, step = 6800 (0.194 sec)
INFO:tensorflow:global_step/sec: 507.775
INFO:tensorflow:loss = 0.53019446, step = 6900 (0.197 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 6993...
INFO:tensorflow:Saving checkpoints for 6993 into /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 6993...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 371.679
INFO:tensorflow:loss = 0.4414572, step = 7000 (0.269 sec)
INFO:tensorflow:global_step/sec: 517.225
INFO:tensorflow:loss = 0.42255548, step = 7100 (0.194 sec)
INFO:tensorflow:global_step/sec: 514.806
INFO:tensorflow:loss = 0.493708, step = 7200 (0.194 sec)
INFO:tensorflow:global_step/sec: 515.054
INFO:tensorflow:loss = 0.45102578, step = 7300 (0.194 sec)
INFO:tensorflow:global_step/sec: 520.556
INFO:tensorflow:loss = 0.4407784, step = 7400 (0.192 sec)
INFO:tensorflow:global_step/sec: 516.393
INFO:tensorflow:loss = 0.5125228, step = 7500 (0.194 sec)
INFO:tensorflow:global_step/sec: 516.193
INFO:tensorflow:loss = 0.50703216, step = 7600 (0.194 sec)
INFO:tensorflow:global_step/sec: 513.608
INFO:tensorflow:loss = 0.4891678, step = 7700 (0.195 sec)
INFO:tensorflow:global_step/sec: 511.544
INFO:tensorflow:loss = 0.38325772, step = 7800 (0.195 sec)
INFO:tensorflow:global_step/sec: 514.977
INFO:tensorflow:loss = 0.550131, step = 7900 (0.194 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 7992...
INFO:tensorflow:Saving checkpoints for 7992 into /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 7992...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 369.68
INFO:tensorflow:global_step/sec: 369.68
INFO:tensorflow:loss = 0.45169306, step = 8000 (0.270 sec)
INFO:tensorflow:loss = 0.45169306, step = 8000 (0.270 sec)
INFO:tensorflow:global_step/sec: 508.216
INFO:tensorflow:global_step/sec: 508.216
INFO:tensorflow:loss = 0.5083507, step = 8100 (0.197 sec)
INFO:tensorflow:loss = 0.5083507, step = 8100 (0.197 sec)
INFO:tensorflow:global_step/sec: 515.288
INFO:tensorflow:global_step/sec: 515.288
INFO:tensorflow:loss = 0.4475615, step = 8200 (0.194 sec)
INFO:tensorflow:loss = 0.4475615, step = 8200 (0.194 sec)
INFO:tensorflow:global_step/sec: 511.032
INFO:tensorflow:global_step/sec: 511.032
INFO:tensorflow:loss = 0.35863724, step = 8300 (0.196 sec)
INFO:tensorflow:loss = 0.35863724, step = 8300 (0.196 sec)
INFO:tensorflow:global_step/sec: 513.443
INFO:tensorflow:global_step/sec: 513.443
INFO:tensorflow:loss = 0.39959174, step = 8400 (0.195 sec)
INFO:tensorflow:loss = 0.39959174, step = 8400 (0.195 sec)
INFO:tensorflow:global_step/sec: 516.825
INFO:tensorflow:global_step/sec: 516.825
INFO:tensorflow:loss = 0.37477982, step = 8500 (0.193 sec)
INFO:tensorflow:loss = 0.37477982, step = 8500 (0.193 sec)
INFO:tensorflow:global_step/sec: 520.985
INFO:tensorflow:global_step/sec: 520.985
INFO:tensorflow:loss = 0.41009447, step = 8600 (0.192 sec)
INFO:tensorflow:loss = 0.41009447, step = 8600 (0.192 sec)
INFO:tensorflow:global_step/sec: 508.69
INFO:tensorflow:global_step/sec: 508.69
INFO:tensorflow:loss = 0.43180743, step = 8700 (0.197 sec)
INFO:tensorflow:loss = 0.43180743, step = 8700 (0.197 sec)
INFO:tensorflow:global_step/sec: 511.298
INFO:tensorflow:global_step/sec: 511.298
INFO:tensorflow:loss = 0.39931604, step = 8800 (0.196 sec)
INFO:tensorflow:loss = 0.39931604, step = 8800 (0.196 sec)
INFO:tensorflow:global_step/sec: 517.493
INFO:tensorflow:global_step/sec: 517.493
INFO:tensorflow:loss = 0.4759156, step = 8900 (0.193 sec)
INFO:tensorflow:loss = 0.4759156, step = 8900 (0.193 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 8991...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 8991...
INFO:tensorflow:Saving checkpoints for 8991 into /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt.
INFO:tensorflow:Saving checkpoints for 8991 into /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 8991...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 8991...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:global_step/sec: 375.339
INFO:tensorflow:global_step/sec: 375.339
INFO:tensorflow:loss = 0.3964095, step = 9000 (0.266 sec)
INFO:tensorflow:loss = 0.3964095, step = 9000 (0.266 sec)
INFO:tensorflow:global_step/sec: 515.899
INFO:tensorflow:global_step/sec: 515.899
INFO:tensorflow:loss = 0.50794685, step = 9100 (0.194 sec)
INFO:tensorflow:loss = 0.50794685, step = 9100 (0.194 sec)
INFO:tensorflow:global_step/sec: 521.18
INFO:tensorflow:global_step/sec: 521.18
INFO:tensorflow:loss = 0.4681458, step = 9200 (0.192 sec)
INFO:tensorflow:loss = 0.4681458, step = 9200 (0.192 sec)
INFO:tensorflow:global_step/sec: 514.326
INFO:tensorflow:global_step/sec: 514.326
INFO:tensorflow:loss = 0.4690592, step = 9300 (0.194 sec)
INFO:tensorflow:loss = 0.4690592, step = 9300 (0.194 sec)
INFO:tensorflow:global_step/sec: 514.21
INFO:tensorflow:global_step/sec: 514.21
INFO:tensorflow:loss = 0.46335006, step = 9400 (0.195 sec)
INFO:tensorflow:loss = 0.46335006, step = 9400 (0.195 sec)
INFO:tensorflow:global_step/sec: 512.7
INFO:tensorflow:global_step/sec: 512.7
INFO:tensorflow:loss = 0.46347713, step = 9500 (0.195 sec)
INFO:tensorflow:loss = 0.46347713, step = 9500 (0.195 sec)
INFO:tensorflow:global_step/sec: 514.065
INFO:tensorflow:global_step/sec: 514.065
INFO:tensorflow:loss = 0.46960115, step = 9600 (0.194 sec)
INFO:tensorflow:loss = 0.46960115, step = 9600 (0.194 sec)
INFO:tensorflow:global_step/sec: 517.442
INFO:tensorflow:global_step/sec: 517.442
INFO:tensorflow:loss = 0.38658124, step = 9700 (0.193 sec)
INFO:tensorflow:loss = 0.38658124, step = 9700 (0.193 sec)
INFO:tensorflow:global_step/sec: 518.309
INFO:tensorflow:global_step/sec: 518.309
INFO:tensorflow:loss = 0.38096815, step = 9800 (0.193 sec)
INFO:tensorflow:loss = 0.38096815, step = 9800 (0.193 sec)
INFO:tensorflow:global_step/sec: 517.587
INFO:tensorflow:global_step/sec: 517.587
INFO:tensorflow:loss = 0.42407984, step = 9900 (0.193 sec)
INFO:tensorflow:loss = 0.42407984, step = 9900 (0.193 sec)
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 9990...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 9990...
INFO:tensorflow:Saving checkpoints for 9990 into /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt.
INFO:tensorflow:Saving checkpoints for 9990 into /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 9990...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 9990...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 10000...
INFO:tensorflow:Calling checkpoint listeners before saving checkpoint 10000...
INFO:tensorflow:Saving checkpoints for 10000 into /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt.
INFO:tensorflow:Saving checkpoints for 10000 into /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt.
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 10000...
INFO:tensorflow:Calling checkpoint listeners after saving checkpoint 10000...
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Skip the current checkpoint eval due to throttle secs (600 secs).
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2023-10-03T09:26:40
INFO:tensorflow:Starting evaluation at 2023-10-03T09:26:40
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt-10000
2023-10-03 09:26:40.463675: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
INFO:tensorflow:Restoring parameters from /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt-10000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Evaluation [500/5000]
INFO:tensorflow:Evaluation [500/5000]
INFO:tensorflow:Evaluation [1000/5000]
INFO:tensorflow:Evaluation [1000/5000]
INFO:tensorflow:Evaluation [1500/5000]
INFO:tensorflow:Evaluation [1500/5000]
INFO:tensorflow:Evaluation [2000/5000]
INFO:tensorflow:Evaluation [2000/5000]
INFO:tensorflow:Evaluation [2500/5000]
INFO:tensorflow:Evaluation [2500/5000]
INFO:tensorflow:Evaluation [3000/5000]
INFO:tensorflow:Evaluation [3000/5000]
INFO:tensorflow:Evaluation [3500/5000]
INFO:tensorflow:Evaluation [3500/5000]
INFO:tensorflow:Evaluation [4000/5000]
INFO:tensorflow:Evaluation [4000/5000]
INFO:tensorflow:Evaluation [4500/5000]
INFO:tensorflow:Evaluation [4500/5000]
INFO:tensorflow:Evaluation [5000/5000]
INFO:tensorflow:Evaluation [5000/5000]
INFO:tensorflow:Inference Time : 3.66434s
INFO:tensorflow:Inference Time : 3.66434s
INFO:tensorflow:Finished evaluation at 2023-10-03-09:26:44
INFO:tensorflow:Finished evaluation at 2023-10-03-09:26:44
INFO:tensorflow:Saving dict for global step 10000: accuracy = 0.8006, global_step = 10000, loss = 0.4411501
INFO:tensorflow:Saving dict for global step 10000: accuracy = 0.8006, global_step = 10000, loss = 0.4411501
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 10000: /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt-10000
INFO:tensorflow:Saving 'checkpoint_path' summary for global step 10000: /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt-10000
INFO:tensorflow:Performing the final export in the end of training.
INFO:tensorflow:Performing the final export in the end of training.
WARNING:tensorflow:From /tmpfs/src/temp/docs/tutorials/tfx/imdb_trainer.py:273: build_parsing_serving_input_receiver_fn (from tensorflow_estimator.python.estimator.export.export) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/temp/docs/tutorials/tfx/imdb_trainer.py:273: build_parsing_serving_input_receiver_fn (from tensorflow_estimator.python.estimator.export.export) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/export/export.py:312: ServingInputReceiver.__new__ (from tensorflow_estimator.python.estimator.export.export) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/export/export.py:312: ServingInputReceiver.__new__ (from tensorflow_estimator.python.estimator.export.export) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
INFO:tensorflow:struct2tensor is not available.
INFO:tensorflow:struct2tensor is not available.
INFO:tensorflow:tensorflow_decision_forests is not available.
INFO:tensorflow:tensorflow_decision_forests is not available.
INFO:tensorflow:tensorflow_text is not available.
INFO:tensorflow:tensorflow_text is not available.
WARNING:tensorflow:Loading a TF2 SavedModel but eager mode seems disabled.
WARNING:tensorflow:Loading a TF2 SavedModel but eager mode seems disabled.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/saved_model/model_utils/export_utils.py:366: PredictOutput.__init__ (from tensorflow.python.saved_model.model_utils.export_output) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/saved_model/model_utils/export_utils.py:366: PredictOutput.__init__ (from tensorflow.python.saved_model.model_utils.export_output) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Done calling model_fn.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:203: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This API was designed for TensorFlow v1. See https://www.tensorflow.org/guide/migrate for instructions on how to migrate your code to TensorFlow v2.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/saved_model/signature_def_utils_impl.py:203: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This API was designed for TensorFlow v1. See https://www.tensorflow.org/guide/migrate for instructions on how to migrate your code to TensorFlow v2.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/saved_model/model_utils/export_utils.py:84: get_tensor_from_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This API was designed for TensorFlow v1. See https://www.tensorflow.org/guide/migrate for instructions on how to migrate your code to TensorFlow v2.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/saved_model/model_utils/export_utils.py:84: get_tensor_from_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This API was designed for TensorFlow v1. See https://www.tensorflow.org/guide/migrate for instructions on how to migrate your code to TensorFlow v2.
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['serving_default']
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['serving_default']
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: None
INFO:tensorflow:Restoring parameters from /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt-10000
2023-10-03 09:26:45.232359: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
INFO:tensorflow:Restoring parameters from /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt-10000
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets written to: /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/export/imdb/temp-1696325204/assets
INFO:tensorflow:Assets written to: /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/export/imdb/temp-1696325204/assets
INFO:tensorflow:SavedModel written to: /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/export/imdb/temp-1696325204/saved_model.pb
INFO:tensorflow:SavedModel written to: /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/export/imdb/temp-1696325204/saved_model.pb
INFO:tensorflow:Loss for final step: 0.4337769.
INFO:tensorflow:Loss for final step: 0.4337769.
WARNING:tensorflow:Loading a TF2 SavedModel but eager mode seems disabled.
WARNING:tensorflow:Loading a TF2 SavedModel but eager mode seems disabled.
INFO:tensorflow:struct2tensor is not available.
INFO:tensorflow:struct2tensor is not available.
INFO:tensorflow:tensorflow_decision_forests is not available.
INFO:tensorflow:tensorflow_decision_forests is not available.
INFO:tensorflow:tensorflow_text is not available.
INFO:tensorflow:tensorflow_text is not available.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Done calling model_fn.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/saved_model/model_utils/export_utils.py:346: _SupervisedOutput.__init__ (from tensorflow.python.saved_model.model_utils.export_output) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/saved_model/model_utils/export_utils.py:346: _SupervisedOutput.__init__ (from tensorflow.python.saved_model.model_utils.export_output) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras instead.
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: None
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures INCLUDED in export for Eval: ['eval']
INFO:tensorflow:Signatures INCLUDED in export for Eval: ['eval']
WARNING:tensorflow:Export includes no default signature!
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt-10000
2023-10-03 09:26:45.605025: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
INFO:tensorflow:Restoring parameters from /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-Serving/model.ckpt-10000
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:Assets written to: /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-TFMA/temp-1696325205/assets
INFO:tensorflow:Assets written to: /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-TFMA/temp-1696325205/assets
INFO:tensorflow:SavedModel written to: /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-TFMA/temp-1696325205/saved_model.pb
INFO:tensorflow:SavedModel written to: /tmpfs/tmp/tfx-interactive-2023-10-03T09_23_44.639456-dt72o23h/Trainer/model_run/9/Format-TFMA/temp-1696325205/saved_model.pb
WARNING:absl:Support for estimator-based executor and model export will be deprecated soon. Please use export structure <ModelExportPath>/serving_model_dir/saved_model.pb"
WARNING:absl:Support for estimator-based executor and model export will be deprecated soon. Please use export structure <ModelExportPath>/eval_model_dir/saved_model.pb"

Take a peek at the trained model that was exported from Trainer.

train_uri = trainer.outputs['model'].get()[0].uri
serving_model_path = os.path.join(train_uri, 'Format-Serving')
exported_model = tf.saved_model.load(serving_model_path)
exported_model.graph.get_operations()[:10] + ["..."]
[<tf.Operation 'global_step/Initializer/zeros' type=Const>,
 <tf.Operation 'global_step' type=VarHandleOp>,
 <tf.Operation 'global_step/IsInitialized/VarIsInitializedOp' type=VarIsInitializedOp>,
 <tf.Operation 'global_step/Assign' type=AssignVariableOp>,
 <tf.Operation 'global_step/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'input_example_tensor' type=Placeholder>,
 <tf.Operation 'ParseExample/ParseExampleV2/names' type=Const>,
 <tf.Operation 'ParseExample/ParseExampleV2/sparse_keys' type=Const>,
 <tf.Operation 'ParseExample/ParseExampleV2/dense_keys' type=Const>,
 <tf.Operation 'ParseExample/ParseExampleV2/ragged_keys' type=Const>,
 '...']

Let's visualize the model's metrics using TensorBoard.

#docs_infra: no_execute

# Get the URI of the output artifact representing the training logs,
# which is a directory
model_run_dir = trainer.outputs['model_run'].get()[0].uri

%load_ext tensorboard
%tensorboard --logdir {model_run_dir}

Model Serving

Graph regularization affects only the training workflow: it adds a regularization term to the loss function, so the model evaluation and serving workflows remain unchanged. For the same reason, we've omitted the downstream TFX components that typically follow the Trainer component, such as the Evaluator and Pusher.
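To make this concrete, the graph-regularized training objective has the following schematic form (following the NSL formulation; the symbol names here are illustrative):

$$\mathcal{L} \;=\; \mathcal{L}_{\text{supervised}} \;+\; \alpha \sum_{(i,j)\in\mathcal{E}} w_{ij}\, d\big(h(x_i),\, h(x_j)\big)$$

where $\mathcal{E}$ is the edge set of the synthesized graph, $w_{ij}$ is the weight of the edge between samples $i$ and $j$, $h(\cdot)$ is the model's learned representation of a sample, $d$ is a distance such as L2, and $\alpha$ is the graph regularization multiplier. Because the extra term depends only on training-time neighbors, the exported serving graph is identical to that of the base model.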

Conclusion

We have demonstrated the use of graph regularization with the Neural Structured Learning (NSL) framework in a TFX pipeline even when the input does not contain an explicit graph. We considered the task of sentiment classification of IMDB movie reviews, for which we synthesized a similarity graph based on review embeddings. We encourage users to experiment further by using different embeddings for graph construction, varying hyperparameters, changing the amount of supervision, and defining different model architectures.
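As a starting point for such experiments, the graph-building step of the recipe can be sketched in plain Python. This is a simplified illustration, not the NSL implementation: `build_similarity_graph` and its `threshold` parameter are hypothetical names, and in practice you would use NSL's own graph-building tools over the full embedding set.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def build_similarity_graph(embeddings, threshold=0.8):
    """Connect every pair of samples whose embedding similarity meets the threshold.

    Returns a list of (i, j, weight) edges; nodes are sample indices.
    """
    edges = []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            w = cosine_similarity(embeddings[i], embeddings[j])
            if w >= threshold:
                edges.append((i, j, w))
    return edges

# Toy embeddings: samples 0 and 1 point in similar directions; sample 2 does not.
embs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(build_similarity_graph(embs, threshold=0.8))  # single edge between 0 and 1
```

Raising the threshold produces a sparser graph (fewer, more reliable neighbors per sample), which in turn changes how strongly the graph regularization term constrains training.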