Upper Confidence Bound (UCB Algorithm)

From Wikipedia, the free encyclopedia
Upper Confidence Bound (UCB)
Class: Multi-armed bandit; reinforcement learning
Data structure: Sequential reward observations
Worst-case performance: O(K) per round (K = number of arms)
Average performance: O(K) per round
Worst-case space complexity: O(K)

Upper Confidence Bound (UCB) is a family of algorithms in machine learning and statistics for solving the multi-armed bandit problem and addressing the exploration–exploitation trade-off. UCB methods select actions by computing an upper confidence estimate of each action’s potential reward, thus balancing exploration of uncertain options with exploitation of those known to perform well. Introduced by Auer, Cesa-Bianchi & Fischer in 2002, UCB and its variants have become standard techniques in reinforcement learning, online advertising, recommender systems, clinical trials, and Monte Carlo tree search.[1][2][3]

Background

The multi-armed bandit problem models a scenario where an agent chooses repeatedly among K options ("arms"), each yielding stochastic rewards, with the goal of maximizing the sum of collected rewards over time. The main challenge is the exploration–exploitation trade-off: the agent must explore lesser-tried arms to learn their rewards, yet exploit the best-known arm to maximize payoff.[3] Traditional ε-greedy or softmax strategies use randomness to force exploration; UCB algorithms instead use statistical confidence bounds to guide exploration more efficiently.[2]

The UCB1 algorithm

UCB1, the original UCB method, maintains for each arm i:

  • the empirical mean reward _μ̂i_,
  • the count _ni_ of times arm i has been played.

At round _t_, it selects the arm maximizing the index

  _μ̂i_ + √(2 ln t / _ni_).

Arms with _ni_ = 0 are initially played once. The bonus term √(2 ln t / _ni_) shrinks as _ni_ grows, ensuring exploration of less-tried arms and exploitation of high-mean arms.[1]

Pseudocode

for each arm i do
    n[i] ← 0; Q[i] ← 0
for t from 1 to T do
    if some arm a has n[a] = 0 then
        select that arm a                          // play every arm once first
    else
        for each arm i do
            index[i] ← Q[i] + sqrt((2 * ln t) / n[i])
        select the arm a with the highest index[a]
    observe reward r
    n[a] ← n[a] + 1
    Q[a] ← Q[a] + (r - Q[a]) / n[a]                // incremental mean update
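
The following is a minimal Python sketch of UCB1 as described above; the Bernoulli arm simulator and the helper name ucb1 are illustrative choices, not part of the original algorithm description.

import math
import random

def ucb1(pull, K, T):
    # pull(i) is assumed to return a stochastic reward in [0, 1] for arm i.
    n = [0] * K          # number of times each arm has been played
    Q = [0.0] * K        # empirical mean reward of each arm
    for t in range(1, T + 1):
        untried = [i for i in range(K) if n[i] == 0]
        if untried:
            a = untried[0]                         # play every arm once first
        else:
            a = max(range(K),
                    key=lambda i: Q[i] + math.sqrt(2 * math.log(t) / n[i]))
        r = pull(a)
        n[a] += 1
        Q[a] += (r - Q[a]) / n[a]                  # incremental mean update
    return Q, n

# Example: three hypothetical Bernoulli arms with unknown success probabilities.
probs = [0.2, 0.5, 0.7]
Q, n = ucb1(lambda i: 1.0 if random.random() < probs[i] else 0.0, K=3, T=10000)
print("plays per arm:", n)   # most plays should go to the best arm (index 2)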

Theoretical properties

Auer et al. proved that UCB1 achieves **logarithmic regret**: after _n_ rounds, the expected regret _R(n)_ satisfies

  E[_R(n)_] ≤ 8 Σ_{i: Δi > 0} (ln n) / Δi + (1 + π²/3) Σ_j Δj,

where _Δi_ is the gap between the optimal arm’s mean and arm _i_’s mean. Thus the average regret per round tends to 0 as _n_ → ∞, and UCB1 is near-optimal with respect to the Lai–Robbins lower bound.[1][4]
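
As an illustration (not part of the original analysis), the following sketch evaluates the finite-time bound above for a hypothetical problem instance; the function name and gap values are assumptions.

import math

def ucb1_regret_bound(gaps, n):
    # Upper bound on UCB1's expected regret after n rounds (Auer et al. 2002);
    # gaps lists the suboptimality gaps Δi of the suboptimal arms only (Δi > 0).
    log_term = 8 * sum(math.log(n) / d for d in gaps)
    const_term = (1 + math.pi ** 2 / 3) * sum(gaps)
    return log_term + const_term

# Hypothetical instance: two suboptimal arms with gaps 0.1 and 0.3.
print(ucb1_regret_bound([0.1, 0.3], n=10000))   # grows only logarithmically in n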

Variants

Several extensions improve or adapt UCB to different settings:

UCB2

Introduced in the same paper, UCB2 divides plays into epochs controlled by a parameter α, reducing the constant in the regret bound at the cost of more complex scheduling.[1]
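
A small sketch of the epoch schedule, assuming the form τ(r) = ⌈(1+α)^r⌉ described by Auer et al., under which an arm entering epoch r is played τ(r+1) − τ(r) consecutive times; the function name is hypothetical.

import math

def ucb2_epoch_length(r, alpha=0.1):
    # Assumed schedule: tau(r) = ceil((1 + alpha) ** r); the arm selected for
    # its epoch r is then played tau(r + 1) - tau(r) times in a row.
    def tau(k):
        return math.ceil((1 + alpha) ** k)
    return tau(r + 1) - tau(r)

print([ucb2_epoch_length(r) for r in range(10)])   # epoch lengths grow geometrically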

UCB1-Tuned

Incorporates the empirical variance of each arm to tighten the bonus, replacing the exploration term with √((ln t / _ni_) · min{1/4, _Vi_}), where _Vi_ is an upper confidence bound on arm _i_’s reward variance. This often outperforms UCB1 in practice but lacks a simple regret proof.[1]
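
A sketch of the per-arm index under this variant, assuming the variance bound _Vi_ = (mean of squared rewards) − _μ̂i_² + √(2 ln t / _ni_) together with the min{1/4, _Vi_} cap described above; the function and argument names are illustrative.

import math

def ucb1_tuned_index(mean, sq_mean, n_i, t):
    # Variance upper confidence bound V_i for the arm (an assumption based on
    # the description above); rewards in [0, 1] have variance at most 1/4.
    v = sq_mean - mean ** 2 + math.sqrt(2 * math.log(t) / n_i)
    return mean + math.sqrt((math.log(t) / n_i) * min(0.25, v))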

KL-UCB

Replaces Hoeffding’s bound with a Kullback–Leibler divergence condition, yielding asymptotically optimal regret (constant = 1) for Bernoulli rewards.[5][6]
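
A minimal sketch of the Bernoulli KL-UCB index, computed by bisection; the exploration threshold ln t / _ni_ is the basic form (some presentations add a ln ln t term), and the function names are illustrative.

import math

def bernoulli_kl(p, q, eps=1e-12):
    # Kullback–Leibler divergence between Bernoulli(p) and Bernoulli(q).
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, n_i, t, precision=1e-6):
    # Largest q >= mean such that n_i * KL(mean, q) <= ln t, found by bisection.
    threshold = math.log(t) / n_i
    lo, hi = mean, 1.0
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if bernoulli_kl(mean, mid) <= threshold:
            lo = mid
        else:
            hi = mid
    return lo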

Bayesian UCB (Bayes-UCB)

Computes the (1−δ)-quantile of a Bayesian posterior (e.g. Beta for Bernoulli) as the index. Proven asymptotically optimal under certain priors.[7]
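
A sketch for Bernoulli rewards with a uniform Beta(1, 1) prior, using SciPy’s Beta quantile function; the simple quantile schedule 1 − 1/t is an assumption (Kaufmann et al. use a slightly refined schedule).

from scipy.stats import beta

def bayes_ucb_index(successes, failures, t):
    # (1 - 1/t)-quantile of the Beta posterior over the arm's success probability.
    return beta.ppf(1 - 1.0 / t, 1 + successes, 1 + failures)

print(bayes_ucb_index(successes=3, failures=1, t=100))   # optimistic estimate for a 3-out-of-4 arm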

Contextual UCB (e.g., LinUCB)

Extends UCB to contextual bandits by estimating a linear reward model and confidence ellipsoids in parameter space. Widely used in news recommendation.[8]
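
A compact sketch of the per-arm state in disjoint LinUCB, following the ridge-regression form of the index in Li et al. (2010); the class name and the exploration parameter α = 1.0 are illustrative.

import numpy as np

class LinUCBArm:
    def __init__(self, d, alpha=1.0):
        self.alpha = alpha
        self.A = np.eye(d)        # ridge-regularised design matrix
        self.b = np.zeros(d)      # accumulated reward-weighted features

    def index(self, x):
        # Upper confidence index for context vector x: point estimate plus
        # the width of the confidence ellipsoid in direction x.
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, r):
        # Incorporate the observed reward r for context x.
        self.A += np.outer(x, x)
        self.b += r * x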

Applications

UCB algorithms’ simplicity and strong guarantees make them popular in:

  • Online advertising & A/B testing: adaptively allocate traffic to maximize conversion rates without fixed split ratios.[3]
  • Monte Carlo Tree Search: UCT uses UCB1 at each tree node to guide exploration in games like Go (see the sketch after this list).[9][10]
  • Adaptive clinical trials: assign patients to treatments with highest upper confidence on success, improving outcomes over randomization.[11]
  • Recommender systems: personalized content selection under uncertainty.
  • Robotics & control: efficient exploration of unknown dynamics.
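
As an illustration of the Monte Carlo Tree Search item above, the following hypothetical helper applies the UCB1 index to child selection in UCT; the node attributes (visits, total_value) and the exploration constant c are assumptions.

import math

def uct_select(children, c=1.414):
    # Pick the child maximising mean value plus a UCB1-style exploration bonus.
    # Each child is assumed to expose `visits` and `total_value` attributes.
    parent_visits = sum(ch.visits for ch in children)
    def score(ch):
        if ch.visits == 0:
            return float("inf")                    # try unvisited children first
        return (ch.total_value / ch.visits
                + c * math.sqrt(math.log(parent_visits) / ch.visits))
    return max(children, key=score)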

References

  1. ^ a b c d e Auer, Peter; Cesa-Bianchi, Nicolò; Fischer, Paul (2002). “Finite-time Analysis of the Multiarmed Bandit Problem”. Machine Learning. 47 (2–3): 235–256.
  2. ^ a b Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement Learning: An Introduction. 2nd ed., MIT Press. ISBN 978-0-262-03924-6.
  3. ^ a b c Bubeck, Sébastien; Cesa-Bianchi, Nicolò (2012). “Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems”. Foundations and Trends in Machine Learning. 5 (1): 1–122. doi:10.1561/2200000024.
  4. ^ Lai, Tze Leung; Robbins, Herbert (1985). “Asymptotically Efficient Adaptive Allocation Rules”. Advances in Applied Mathematics. 6 (1): 4–22. doi:10.1016/0196-8858(85)90002-4.
  5. ^ Garivier, Aurélien; Cappé, Olivier (2011). “The KL-UCB Algorithm for Bounded Stochastic Bandits and Beyond”. Proceedings of the 24th Annual Conference on Learning Theory. 19: 359–376.
  6. ^ Maillard, Olivier-Alain; Munos, Rémi; Stoltz, Gilles (2011). “A Finite-Time Analysis of Multi-armed Bandits Problems with Kullback-Leibler Divergence”. Proceedings of the 24th Annual Conference on Learning Theory. 19: 592–617.
  7. ^ Kaufmann, Emilie; Cappé, Olivier; Garivier, Aurélien (2012). “Bayesian Upper Confidence Bounds for Bandit Problems”. Proceedings of the 25th Annual Conference on Neural Information Processing Systems. 1: 2177–2185.
  8. ^ Li, Lihong; Chu, Wei; Langford, John; Schapire, Robert E. (2010). “A contextual-bandit approach to personalized news article recommendation”. Proceedings of the 19th International Conference on World Wide Web. pp. 661–670. doi:10.1145/1772690.1772758.
  9. ^ Kocsis, László; Szepesvári, Csaba (2006). “Bandit based Monte-Carlo planning”. Proceedings of the 17th European Conference on Machine Learning. pp. 282–293. doi:10.1007/11871842_29.
  10. ^ Silver, David; Huang, Aja; Maddison, Chris J.; et al. (2016). “Mastering the game of Go with deep neural networks and tree search”. Nature. 529 (7587): 484–489. doi:10.1038/nature16961.
  11. ^ Safikhani, Maryam; Jo, Yuxin; et al. (2022). “Retrospective analysis indicates bandit approaches improve patient outcomes in clinical trials”. BMC Medical Research Methodology. 22: 117. doi:10.1186/s12874-022-01636-7. PMCID: PMC9410371.