# tfl.conditional_cdf.cdf_fn

[View source on GitHub](https://github.com/tensorflow/lattice/blob/v2.1.1/tensorflow_lattice/python/conditional_cdf.py#L118-L275)
Maps `inputs` through a CDF function specified by keypoint parameters.
```python
@tf.function
tfl.conditional_cdf.cdf_fn(
    inputs: tf.Tensor,
    location_parameters: tf.Tensor,
    scaling_parameters: Optional[tf.Tensor] = None,
    units: int = 1,
    activation: str = 'relu6',
    reduction: str = 'mean',
    sparsity_factor: int = 1,
    scaling_exp_transform_multiplier: Optional[float] = None,
    return_derived_parameters: bool = False
) -> Union[tf.Tensor, Tuple[tf.Tensor, tf.Tensor, tf.Tensor]]
```
`cdf_fn` is similar to `tfl.layers.CDF`, which is an additive / multiplicative average of a few shifted and scaled `sigmoid` or `relu6` basis functions, with the difference that the functions are parametrized by the provided parameters instead of learnable weights belonging to a `tfl.layers.CDF` layer.
These parameters can be one of:
- constants,
- trainable variables,
- outputs from other TF modules.
For inputs of shape `(batch_size, input_dim)`, two sets of free-form parameters are used to configure the CDF function:

- `location_parameters` for where to place the sigmoid / relu6 transformation basis,
- `scaling_parameters` (optional) for the horizontal scaling before applying the transformation basis.
The transformation per dimension is `x -> activation(scale * (x - location))`, where:

- `scale` (specified via `scaling_parameters`) is the input scaling for each dimension and needs to be strictly positive for the CDF function to be monotonic. If needed, you can set `scaling_exp_transform_multiplier` to get `scale = exp(scaling_parameter * scaling_exp_transform_multiplier)`, which guarantees strict positivity.
- `location` (specified via `location_parameters`) is the input shift. Note that for `relu6` this is where the transformation starts to be nonzero, whereas for `sigmoid` this is where the transformation hits 0.5.
- `activation` is either `sigmoid` or `relu6` (for `relu6 / 6`).
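The per-dimension transform can be sketched in plain Python. The helper names below (`cdf_transform_1d`, `positive_scale`) are illustrative stand-ins, not part of the library; the real op works on batched tensors:

```python
import math


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def relu6(z):
    # relu6(z) = min(max(z, 0), 6); divided by 6 below so each basis lies in [0, 1].
    return min(max(z, 0.0), 6.0)


def positive_scale(scaling_parameter, multiplier):
    # The optional exp transform: a strictly positive scale for any real input.
    return math.exp(scaling_parameter * multiplier)


def cdf_transform_1d(x, locations, scales, activation="sigmoid"):
    """Average of shifted and scaled basis functions for one input dimension."""
    basis = sigmoid if activation == "sigmoid" else (lambda z: relu6(z) / 6.0)
    values = [basis(s * (x - loc)) for loc, s in zip(locations, scales)]
    return sum(values) / len(values)
```

With a single sigmoid basis at location 0 and scale 1, `cdf_transform_1d(0.0, [0.0], [1.0])` evaluates to 0.5, matching the location semantics described above; with a single `relu6` basis, the output first becomes nonzero at the location.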
An optional `reduction` operation computes the additive / multiplicative average of the input dims after their individual CDF transformations. `mean` and `geometric_mean` are supported; pass `'none'` to skip the reduction.
`sparsity_factor` decides the level of sparsity during reduction. For instance, the default of `sparsity = 1` calculates the average of *all* input dims, whereas `sparsity = 2` calculates the average of *every other* input dim, and so on.
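As a rough plain-Python illustration of that selection pattern (the actual reduction operates on tensor dimensions; `sparse_mean` is a hypothetical helper for exposition only):

```python
def sparse_mean(values, sparsity_factor=1, offset=0):
    """Average every `sparsity_factor`-th element, starting at `offset`.

    Sketches how a reduction with sparsity_factor > 1 averages a strided
    subset of the input dims instead of all of them.
    """
    selected = values[offset::sparsity_factor]
    return sum(selected) / len(selected)
```

With `sparsity_factor=1`, `sparse_mean([1.0, 2.0, 3.0, 4.0])` averages all four dims; with `sparsity_factor=2` it averages only every other dim.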
#### Input shape

We denote `num_functions` as the number of `sigmoid` or `relu6 / 6` basis functions used for each CDF transformation.

`inputs` should be:

- `(batch_size, input_dim)`.

`location_parameters` should be:

- `(batch_size, input_dim, num_functions, units // sparsity_factor)`.

`scaling_parameters`, when provided, should be broadcast friendly with `location_parameters`, e.g. one of:

- `(batch_size, input_dim, 1, 1)`,
- `(batch_size, input_dim, num_functions, 1)`,
- `(batch_size, input_dim, 1, units // sparsity_factor)`,
- `(batch_size, input_dim, num_functions, units // sparsity_factor)`.
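A quick sanity check of this shape contract, using a minimal hand-rolled broadcast test (a standalone sketch; NumPy or TF broadcasting rules would do the same job):

```python
def broadcast_friendly(shape, target):
    """True if `shape` broadcasts against `target`: same rank, each dim equal or 1."""
    return len(shape) == len(target) and all(
        a == b or a == 1 for a, b in zip(shape, target)
    )


# Example dimensions; input_dim and units are divisible by sparsity_factor.
batch_size, input_dim, num_functions = 8, 4, 5
units, sparsity_factor = 2, 2
location_shape = (batch_size, input_dim, num_functions, units // sparsity_factor)

valid_scaling_shapes = [
    (batch_size, input_dim, 1, 1),
    (batch_size, input_dim, num_functions, 1),
    (batch_size, input_dim, 1, units // sparsity_factor),
    location_shape,
]
assert all(broadcast_friendly(s, location_shape) for s in valid_scaling_shapes)
```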
#### Args

| Argument | Description |
|---|---|
| `inputs` | Inputs to the CDF function. |
| `location_parameters` | Parameters for deciding the locations of the transformations. |
| `scaling_parameters` | Parameters for deciding the horizontal scaling of the transformations. |
| `units` | Output dimension. |
| `activation` | Either `sigmoid` or `relu6` for selecting the transformation. |
| `reduction` | Either `mean`, `geometric_mean`, or `none` to specify whether to perform averaging and which average to perform. |
| `sparsity_factor` | Decides the level of sparsity during reduction. `input_dim` and `units` should both be divisible by `sparsity_factor`. |
| `scaling_exp_transform_multiplier` | If provided, will be used inside an exponential transformation for `scaling_parameters`. This can be useful if `scaling_parameters` is free-form. |
| `return_derived_parameters` | Whether `location_parameters` and `scaling_parameters` should be output along with the model output (e.g. for loss function computation purposes). |
#### Returns

If `return_derived_parameters = False`:

- The CDF transformed outputs as a tensor with shape either `(batch_size, units)` if `reduction = 'mean' / 'geometric_mean'`, or `(batch_size, input_dim // sparsity_factor, units)` if `reduction = 'none'`.

If `return_derived_parameters = True`:

- A tuple of three elements:
  1. The CDF transformed outputs.
  2. `location_parameters`.
  3. `scaling_parameters`, with the `exp` transformation applied if specified.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-08-02 UTC.