A TFX component to ingest examples from a file system.
Inherits From: BaseComponent, BaseNode
tfx.components.FileBasedExampleGen(
    input: Optional[tfx.types.Channel] = None,
    input_base: Optional[Text] = None,
    input_config: Optional[Union[example_gen_pb2.Input, Dict[Text, Any]]] = None,
    output_config: Optional[Union[example_gen_pb2.Output, Dict[Text, Any]]] = None,
    custom_config: Optional[Union[example_gen_pb2.CustomConfig, Dict[Text, Any]]] = None,
    range_config: Optional[Union[range_config_pb2.RangeConfig, Dict[Text, Any]]] = None,
    output_data_format: Optional[int] = example_gen_pb2.FORMAT_TF_EXAMPLE,
    example_artifacts: Optional[tfx.types.Channel] = None,
    custom_executor_spec: Optional[tfx.dsl.components.base.executor_spec.ExecutorSpec] = None,
    instance_name: Optional[Text] = None
)
The FileBasedExampleGen component is an API for getting file-based records into TFX pipelines. It consumes external files to generate examples that are used by other internal components such as StatisticsGen or Trainer. The component also converts the input data into TFRecord format and generates 'train' and 'eval' example splits for downstream components.
Example
import os

from tfx.components import FileBasedExampleGen

_taxi_root = os.path.join(os.environ['HOME'], 'taxi')
_data_root = os.path.join(_taxi_root, 'data', 'simple')
# Brings data into the pipeline or otherwise joins/converts training data.
example_gen = FileBasedExampleGen(input_base=_data_root)
Args | |
---|---|
input | A Channel of type standard_artifacts.ExternalArtifact, which includes one artifact whose uri is an external directory containing the data files. (Deprecated by input_base.) |
input_base | An external directory containing the data files. |
input_config | An example_gen_pb2.Input instance, providing input configuration. If unset, input files will be treated as a single split. |
output_config | An example_gen_pb2.Output instance, providing the output configuration. If unset, the default splits will be 'train' and 'eval' with a 2:1 size ratio. |
custom_config | An optional example_gen_pb2.CustomConfig instance, providing custom configuration for the executor. |
range_config | An optional range_config_pb2.RangeConfig instance, specifying the range of span values to consider. If unset, the driver will default to searching for the latest span with no restrictions. |
output_data_format | Payload format of the generated data in the output artifact, one of the example_gen_pb2.PayloadFormat enum values. |
example_artifacts | Channel of 'ExamplesPath' for the output train and eval examples. |
custom_executor_spec | Optional custom executor spec overriding the default executor spec specified in the component attribute. |
instance_name | Optional unique instance name. Required only if multiple ExampleGen components are declared in the same pipeline. |
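For instance, the input_config and output_config described above can be passed as protos from tfx.proto.example_gen_pb2; a minimal sketch follows (the file patterns and the 2:1 hash-bucket split are illustrative choices, not requirements):
from tfx.components import FileBasedExampleGen
from tfx.proto import example_gen_pb2

# Input already split on disk: each split is matched by a file pattern
# (the patterns here are illustrative).
input_config = example_gen_pb2.Input(splits=[
    example_gen_pb2.Input.Split(name='train', pattern='train/*'),
    example_gen_pb2.Input.Split(name='eval', pattern='eval/*'),
])

# Alternatively, split a single input 2:1 into 'train' and 'eval' by
# hash buckets, mirroring the documented default.
output_config = example_gen_pb2.Output(
    split_config=example_gen_pb2.SplitConfig(splits=[
        example_gen_pb2.SplitConfig.Split(name='train', hash_buckets=2),
        example_gen_pb2.SplitConfig.Split(name='eval', hash_buckets=1),
    ]))

example_gen = FileBasedExampleGen(
    input_base=_data_root, input_config=input_config)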
Attributes | |
---|---|
component_id | |
component_type | |
downstream_nodes | |
exec_properties | |
id | Node id, unique across all TFX nodes in a pipeline. |
inputs | |
outputs | |
type | |
upstream_nodes | |
Child Classes
Methods
add_downstream_node
add_downstream_node(
    downstream_node
)
Experimental: Add another component that must run after this one.
This method enables task-based dependencies by enforcing execution order for synchronous pipelines on supported platforms. Currently, the supported platforms are Airflow, Beam, and Kubeflow Pipelines.
Note that this API call should be considered experimental, and may not work with asynchronous pipelines, sub-pipelines and pipelines with conditional nodes. We also recommend relying on data for capturing dependencies where possible to ensure data lineage is fully captured within MLMD.
It is symmetric with add_upstream_node.
Args | |
---|---|
downstream_node | a component that must run after this node. |
add_upstream_node
add_upstream_node(
    upstream_node
)
Experimental: Add another component that must run before this one.
This method enables task-based dependencies by enforcing execution order for synchronous pipelines on supported platforms. Currently, the supported platforms are Airflow, Beam, and Kubeflow Pipelines.
Note that this API call should be considered experimental, and may not work with asynchronous pipelines, sub-pipelines and pipelines with conditional nodes. We also recommend relying on data for capturing dependencies where possible to ensure data lineage is fully captured within MLMD.
It is symmetric with add_downstream_node.
Args | |
---|---|
upstream_node | a component that must run before this node. |
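As a sketch of how these methods might be used (the component instances and data roots below are hypothetical, not from this page), ordering two otherwise independent components could look like:
# A minimal sketch; both components and their data roots are hypothetical
# instances in the same pipeline, with no data dependency between them.
example_gen_a = FileBasedExampleGen(input_base=_data_root_a).with_id('example_gen_a')
example_gen_b = FileBasedExampleGen(input_base=_data_root_b).with_id('example_gen_b')

# Task-based dependency: force example_gen_b to run after example_gen_a.
example_gen_a.add_downstream_node(example_gen_b)
# Symmetric form, equivalent to the line above:
# example_gen_b.add_upstream_node(example_gen_a)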
from_json_dict
@classmethod
from_json_dict(
    dict_data: Dict[Text, Any]
) -> Any
Convert from dictionary data to an object.
get_class_type
@classmethod
get_class_type() -> Text
get_id
@classmethod
get_id(
    instance_name: Optional[Text] = None
)
Gets the id of a node.
This can be used during pipeline authoring time. For example:
from tfx.components import Trainer
resolver = ResolverNode(
    ...,
    model=Channel(type=Model, producer_component_id=Trainer.get_id('my_trainer')))
Args | |
---|---|
instance_name | (Optional) instance name of a node. If given, the instance name will be taken into consideration when generating the id. |
Returns | |
---|---|
an id for the node. |
to_json_dict
to_json_dict() -> Dict[Text, Any]
Convert from an object to a JSON serializable dictionary.
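These two methods form a serialization pair; a hedged sketch of a round trip (the exact dictionary contents depend on the node):
# Round-trip a node through its JSON-serializable dict form; a sketch,
# assuming example_gen was constructed as in the example above.
data = example_gen.to_json_dict()
restored = FileBasedExampleGen.from_json_dict(data)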
with_id
with_id(
    id: Text
) -> "BaseNode"
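with_id returns the node itself, so it can be chained at construction time; a small sketch (the id string is illustrative):
# Renames the node and returns it, so the call can be chained;
# the id string here is illustrative.
example_gen = FileBasedExampleGen(input_base=_data_root).with_id('my_example_gen')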
with_platform_config
with_platform_config(
    config: message.Message
) -> "BaseComponent"
Attaches a proto-form platform config to a component.
The config will be a per-node platform-specific config.
Args | |
---|---|
config | platform config to attach to the component. |
Returns | |
---|---|
the same component itself. |
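As a hedged sketch only: the config may be any platform-specific proto message, and my_platform_pb2 below is a hypothetical stand-in, not a real TFX or orchestrator module:
# Hypothetical platform-specific proto; my_platform_pb2.NodeConfig is a
# stand-in for whatever config proto your orchestrator defines.
config = my_platform_pb2.NodeConfig(cpu_limit='2', memory_limit='4G')

# Attaches the per-node platform config; returns the same component,
# so the call can be chained.
example_gen = FileBasedExampleGen(
    input_base=_data_root).with_platform_config(config)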
Class Variables | |
---|---|
EXECUTOR_SPEC | Instance of tfx.dsl.components.base.executor_spec.ExecutorClassSpec |