tfp.experimental.bayesopt.acquisition.MCMCReducer
Acquisition function for reducing over batch dimensions.
Inherits From: AcquisitionFunction
tfp.experimental.bayesopt.acquisition.MCMCReducer(
    predictive_distribution,
    observations,
    seed=None,
    acquisition_class=None,
    reduce_dims=None,
    **acquisition_kwargs
)
MCMCReducer evaluates a base acquisition function and takes the mean of the function values over the dimensions indicated by reduce_dims. MCMCReducer is useful, for example, for marginalizing over an MCMC sample of GP kernel hyperparameters.
Examples
Build and evaluate an acquisition function that computes Gaussian Process
Expected Improvement and then marginalizes over the leftmost batch dimension.
import numpy as np
import tensorflow_probability as tfp

tfd = tfp.distributions
tfpk = tfp.math.psd_kernels
tfp_acq = tfp.experimental.bayesopt.acquisition

# Sample 10 20-dimensional index points and associated observations.
index_points = np.random.uniform(size=[10, 20])
observations = np.random.uniform(size=[10])

# The kernel and GP have batch shape [32], representing a sample of
# hyperparameters that we want to marginalize over.
kernel_amplitudes = np.random.uniform(size=[32])

# Build a (batched) Gaussian Process regression model.
dist = tfd.GaussianProcessRegressionModel(
    kernel=tfpk.MaternFiveHalves(amplitude=kernel_amplitudes),
    observation_index_points=index_points,
    observations=observations)

# Define an `MCMCReducer` with GP Expected Improvement.
mcmc_ei = tfp_acq.MCMCReducer(
    predictive_distribution=dist,
    observations=observations,
    acquisition_class=tfp_acq.GaussianProcessExpectedImprovement,
    reduce_dims=0)

# Evaluate the acquisition function at a new set of index points,
# marginalizing over the hyperparameter batch.
pred_index_points = np.random.uniform(size=[6, 20])
acq_fn_vals = mcmc_ei(pred_index_points)  # Has shape [6].
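For intuition, the reduction can also be written out by hand: evaluate the base acquisition function, whose values carry the [32] hyperparameter batch, then average over that batch axis. The snippet below is a rough equivalence sketch, not the library's implementation, and assumes the batched Expected Improvement values have shape [32, 6].
import tensorflow as tf

# Base acquisition function over the same batched predictive distribution.
ei = tfp_acq.GaussianProcessExpectedImprovement(
    predictive_distribution=dist,
    observations=observations)

# Assumed shape [32, 6]: one EI value per hyperparameter sample per point.
batched_vals = ei(pred_index_points)

# Averaging over the hyperparameter axis mirrors `reduce_dims=0`.
manual_vals = tf.reduce_mean(batched_vals, axis=0)  # Has shape [6].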
Args
predictive_distribution: tfd.Distribution-like, the distribution over observations at a set of index points.
observations: Float Tensor of observations.
seed: PRNG seed; see tfp.random.sanitize_seed for details.
acquisition_class: AcquisitionFunction-like callable.
reduce_dims: Axis of the acquisition function value array over which to marginalize.
**acquisition_kwargs: Kwargs passed to acquisition_class.
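Extra keyword arguments are forwarded to acquisition_class when it is constructed. As a sketch, assuming the wrapped GaussianProcessExpectedImprovement class accepts an exploration argument, it could be tuned through the reducer like this:
# Forward an `exploration` keyword (assumed to exist on the wrapped class)
# through **acquisition_kwargs.
mcmc_ei_explore = tfp_acq.MCMCReducer(
    predictive_distribution=dist,
    observations=observations,
    acquisition_class=tfp_acq.GaussianProcessExpectedImprovement,
    reduce_dims=0,
    exploration=0.05)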
Attributes
acquisition: The base acquisition function instance constructed from acquisition_class.
is_parallel: Python bool indicating whether the acquisition function is parallel. Parallel (batched) acquisition functions evaluate batches of points rather than single points.
observations: Float Tensor of observations.
predictive_distribution: The distribution over observations at a set of index points.
reduce_dims: Axis of the acquisition function value array over which to marginalize.
seed: PRNG seed.
Methods
__call__
__call__(
    **kwargs
)
Call self as a function.