tfp.experimental.bayesopt.acquisition.WeightedPowerScalarization
Weighted power scalarization acquisition function.
Inherits From: AcquisitionFunction
tfp.experimental.bayesopt.acquisition.WeightedPowerScalarization(
predictive_distribution,
observations,
seed=None,
acquisition_function_classes=None,
acquisition_kwargs_list=None,
power=None,
weights=None
)
Given a multi-task distribution over `T` tasks, the weighted power scalarization acquisition function computes

`(sum_t w_t |a_t(x)|^p)^(1/p)`

where:

- `a_t` are the per-task acquisition functions built from `acquisition_function_classes`.
- `w_t` are the `weights`.
- `p` is `power`.

By default `p` is `None`, which corresponds to a value of `inf`. This gives Chebyshev scalarization, `max_t w_t |a_t(x)|`; a finite `p` gives weighted power scalarization.
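As a minimal illustration of the formula above, here is a standalone NumPy sketch. The helper `weighted_power_scalarization` is hypothetical (not part of the TFP API) and takes already-computed per-task acquisition values rather than acquisition function classes:

```python
import numpy as np

def weighted_power_scalarization(acq_values, weights, power=None):
    """Scalarize per-task acquisition values a_t(x) (hypothetical helper).

    acq_values: array with trailing dimension T of per-task values a_t(x).
    weights: array of shape [T].
    power: float p; None means p = inf, i.e. Chebyshev scalarization.
    """
    if power is None:
        # Chebyshev scalarization: max_t w_t |a_t(x)|.
        return np.max(weights * np.abs(acq_values), axis=-1)
    # Weighted power scalarization: (sum_t w_t |a_t(x)|^p)^(1/p).
    return np.sum(weights * np.abs(acq_values) ** power, axis=-1) ** (1.0 / power)

a = np.array([2.0, 1.0])  # a_t(x) for T = 2 tasks
w = np.array([0.5, 1.0])  # task weights
chebyshev = weighted_power_scalarization(a, w)      # max(1.0, 1.0) = 1.0
p2 = weighted_power_scalarization(a, w, power=2.0)  # sqrt(0.5*4 + 1*1) = sqrt(3)
```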
Examples
Build and evaluate a Weighted Power Scalarization acquisition function.
import numpy as np
import tensorflow_probability as tfp
tfpk = tfp.math.psd_kernels
tfpke = tfp.experimental.psd_kernels
tfde = tfp.experimental.distributions
tfp_acq = tfp.experimental.bayesopt.acquisition
kernel = tfpk.ExponentiatedQuadratic()
mt_kernel = tfpke.Independent(base_kernel=kernel, num_tasks=4)
# Sample 10 20-dimensional index points and associated 4-dimensional
# observations.
index_points = np.random.uniform(size=[10, 20])
observations = np.random.uniform(size=[10, 4])
# Build a multitask GP.
dist = tfde.MultiTaskGaussianProcessRegressionModel(
    kernel=mt_kernel,
    observation_index_points=index_points,
    observations=observations,
    observation_noise_variance=1e-4)
# Choose weights and acquisition functions for each task.
weights = np.array([0.8, 1., 1.1, 0.5])
acquisition_function_classes = [
tfp_acq.GaussianProcessExpectedImprovement,
tfp_acq.GaussianProcessUpperConfidenceBound,
tfp_acq.GaussianProcessExpectedImprovement,
tfp_acq.GaussianProcessUpperConfidenceBound]
# Build the acquisition function.
cheb_scalar_fn = tfp_acq.WeightedPowerScalarization(
predictive_distribution=dist,
acquisition_function_classes=acquisition_function_classes,
observations=observations,
weights=weights)
# Evaluate the acquisition function on 6 predictive index points. Note that
# `index_points` must be passed as a keyword arg.
pred_index_points = np.random.uniform(size=[6, 20])
acq_fn_vals = cheb_scalar_fn(index_points=pred_index_points)
| Args | |
|---|---|
| `predictive_distribution` | `tfd.Distribution`-like, the distribution over observations at a set of index points (expected to be a `tfd.MultiTaskGaussianProcess` or `tfd.MultiTaskGaussianProcessRegressionModel`). |
| `observations` | Float `Tensor` of observed function values. Shape has the form `[b1, ..., bB, N, T]`, where `N` is the number of index points, `T` is the number of tasks (such that the event shape of `predictive_distribution` is `[N, T]`), and `[b1, ..., bB]` is broadcastable with the batch shape of `predictive_distribution`. |
| `seed` | PRNG seed; see `tfp.random.sanitize_seed` for details. |
| `acquisition_function_classes` | List of callable acquisition function classes, one per task. |
| `acquisition_kwargs_list` | List of kwargs to pass in to the per-task acquisition functions. |
| `power` | Numpy `float`. When this is set to `None`, this corresponds to Chebyshev scalarization. |
| `weights` | `Tensor` of shape `[T]`, where `T` is the number of tasks. |
| Attributes | |
|---|---|
| `acquisition_function_classes` | |
| `acquisition_kwargs_list` | |
| `is_parallel` | Python `bool` indicating whether the acquisition function is parallel. Parallel (batched) acquisition functions evaluate batches of points rather than single points. |
| `observations` | Float `Tensor` of observations. |
| `power` | |
| `predictive_distribution` | The distribution over observations at a set of index points. |
| `seed` | PRNG seed. |
| `weights` | |
Methods
__call__
__call__(
**kwargs
)
Computes the weighted power scalarization.
| Args | |
|---|---|
| `**kwargs` | Keyword args passed on to the `mean` and `stddev` methods of `predictive_distribution`. |

Returns: Weighted power scalarization at index points implied by `predictive_distribution` (or overridden in `**kwargs`).
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2023-11-21 UTC.