Delivery included to the United States

Bandit Algorithms

Hardback (16 Jul 2020)

Save $1.86

  • RRP $61.07
  • $59.21

10+ copies available online - Usually dispatched within 2-3 weeks

Copies available at Blackwell's Oxford Broad Street

Publisher's Synopsis

Decision-making in the face of uncertainty is a significant challenge in machine learning, and the multi-armed bandit model is a commonly used framework to address it. This comprehensive and rigorous introduction to the multi-armed bandit problem examines all the major settings, including stochastic, adversarial, and Bayesian frameworks. A focus on both mathematical intuition and carefully worked proofs makes this an excellent reference for established researchers and a helpful resource for graduate students in computer science, engineering, statistics, applied mathematics and economics. Linear bandits receive special attention as one of the most useful models in applications, while other chapters are dedicated to combinatorial bandits, ranking, non-stationary problems, Thompson sampling and pure exploration. The book ends with a peek into the world beyond bandits with an introduction to partial monitoring and learning in Markov decision processes.
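The stochastic setting described above can be illustrated with a minimal sketch of the classic UCB1 index policy, one of the standard algorithms for the multi-armed bandit problem. This is an illustrative toy example, not code from the book: the Bernoulli arms, function name, and regret bookkeeping are all assumptions made for the demonstration.

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Run UCB1 on a Bernoulli bandit and return the realised regret.

    means: true success probabilities of each arm (unknown to the learner).
    The regret is the expected reward of always playing the best arm
    minus the reward actually collected.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k          # number of times each arm has been played
    totals = [0.0] * k        # sum of observed rewards per arm
    collected = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1       # play each arm once to initialise estimates
        else:
            # UCB index: empirical mean plus an exploration bonus that
            # shrinks as an arm accumulates plays
            arm = max(range(k), key=lambda i: totals[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
        collected += reward
    return max(means) * horizon - collected
```

On a two-armed instance such as `ucb1([0.2, 0.8], 5000)`, the regret grows only logarithmically in the horizon, which is the behaviour the book's stochastic-bandit chapters analyse in detail.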

About the Publisher

Cambridge University Press

Cambridge University Press dates from 1534 and is part of the University of Cambridge. We further the University's mission by disseminating knowledge in the pursuit of education, learning and research at the highest international levels of excellence.

Book information

ISBN: 9781108486828
Publisher: Cambridge University Press
Imprint: Cambridge University Press
Pub date: 16 Jul 2020
DEWEY: 519.6
DEWEY edition: 23
Language: English
Number of pages: 536
Weight: 1172g
Height: 253mm
Width: 181mm
Spine width: 35mm