# Module: tf_agents.bandits.policies.linear_bandit_policy
Linear Bandit Policy.
LinUCB and Linear Thompson Sampling policies derive from this class.
This linear policy handles two main forms of feature input:

1. A single global feature vector is received per time step. In this case, the policy maintains an independent linear reward model for each arm.
2. In addition to the global feature vector as in case 1, an arm-feature vector is received for each arm at every time step. In this case, the policy maintains only a single model, and the reward estimate for each arm is computed from that arm's own features.

These two cases are selected by setting the boolean parameter `accepts_per_arm_features` appropriately; a minimal sketch contrasting the two modes follows.
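To make the distinction concrete, here is a small NumPy sketch (illustrative only, not the tf_agents API; the shapes and the names `per_arm_models`, `shared_model`, etc. are made up for this example) showing how reward estimates are formed in the two modes:

```python
import numpy as np

num_arms, global_dim, arm_dim = 3, 4, 2
rng = np.random.default_rng(0)

# Case 1: global features only -- one independent linear model per arm.
global_features = rng.normal(size=global_dim)             # shape: (4,)
per_arm_models = rng.normal(size=(num_arms, global_dim))  # one weight vector per arm
case1_estimates = per_arm_models @ global_features        # shape: (num_arms,)

# Case 2: global + per-arm features -- a single shared model scores every
# arm on the concatenation of the global features and that arm's features.
arm_features = rng.normal(size=(num_arms, arm_dim))       # shape: (3, 2)
shared_model = rng.normal(size=global_dim + arm_dim)      # one weight vector total
inputs = np.concatenate(
    [np.tile(global_features, (num_arms, 1)), arm_features], axis=1)
case2_estimates = inputs @ shared_model                   # shape: (num_arms,)
```

In case 1 the number of learned parameters grows with the number of arms, while in case 2 a single weight vector scores any arm by its own features.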
A detailed explanation of the two cases above can be found in the paper
"Thompson Sampling for Contextual Bandits with Linear Payoffs" by
Shipra Agrawal and Navin Goyal, ICML 2013
(http://proceedings.mlr.press/v28/agrawal13.pdf), and its supplementary material
(http://proceedings.mlr.press/v28/agrawal13-supp.pdf).
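As background, the scoring rules these policies build on take the following standard forms in the literature (a sketch of the textbook formulations, with x the current feature vector; the exploration parameter α, ridge regularizer λ, and posterior scale v are generic symbols, not names taken from this module):

```latex
% Per-arm sufficient statistics after observing feature vectors x_t and
% rewards r_t for arm a:
A_a = \lambda I + \sum_t x_t x_t^\top, \qquad
\hat{\theta}_a = A_a^{-1} \sum_t r_t x_t

% LinUCB: act on an optimistic upper-confidence estimate of the reward.
\mathrm{score}_{\mathrm{UCB}}(a) = \hat{\theta}_a^\top x
    + \alpha \sqrt{x^\top A_a^{-1} x}

% Linear Thompson Sampling (Agrawal & Goyal, 2013): sample parameters
% from the posterior, then score greedily under the sample.
\tilde{\theta}_a \sim \mathcal{N}\big(\hat{\theta}_a,\; v^2 A_a^{-1}\big),
\qquad \mathrm{score}_{\mathrm{TS}}(a) = \tilde{\theta}_a^\top x
```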
## Classes

`class ExplorationStrategy`: Possible exploration strategies.

`class LinearBanditPolicy`: Linear Bandit Policy to be used by LinUCB, LinTS and possibly others.
## Other Members

| Member | Description |
|-------------------|-----------------------------------|
| `absolute_import` | Instance of `__future__._Feature` |
| `division` | Instance of `__future__._Feature` |
| `print_function` | Instance of `__future__._Feature` |