Poster

Warm-starting Contextual Bandits: Robustly Combining Supervised and Bandit Feedback

Chicheng Zhang · Alekh Agarwal · Hal Daumé III · John Langford · Sahand Negahban

Pacific Ballroom #52

Keywords: [ Online Learning ] [ Bandits ]


Abstract:

We investigate the feasibility of learning from both fully-labeled supervised data and contextual bandit data. We specifically consider settings in which the underlying learning signal may differ between these two data sources. Theoretically, we present algorithms and prove no-regret guarantees for learning that is robust to divergences between the two sources. Empirically, we evaluate some of these algorithms on a large selection of datasets, showing that our approaches are feasible and helpful in practice.
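To make the setting concrete, the sketch below illustrates the general idea of warm-starting from supervised data and then continuing with bandit feedback. It is not the paper's algorithm: the linear cost regressors, the epsilon-greedy exploration, and the inverse-propensity weighting are stand-in choices for illustration, and the synthetic cost model is hypothetical.

```python
# Minimal sketch (assumptions noted above, not the paper's method):
# Phase 1 warm-starts a linear per-action cost model on fully labeled data,
# where every action's cost is observed.
# Phase 2 continues learning from bandit feedback, where only the chosen
# action's cost is observed, using importance-weighted updates.
import numpy as np

rng = np.random.default_rng(0)
d, K = 10, 4                      # feature dimension, number of actions
W = np.zeros((K, d))              # one linear cost scorer per action
T = rng.normal(size=(K, d))       # hidden "true" cost model (synthetic data only)

def true_costs(x):
    """Synthetic per-action costs in [0, 1] for context x."""
    return 1.0 / (1.0 + np.exp(-(T @ x)))

def act(x, epsilon=0.1):
    """Epsilon-greedy action selection; returns the action and its propensity."""
    greedy = int(np.argmin(W @ x))             # lower predicted cost is better
    probs = np.full(K, epsilon / K)
    probs[greedy] += 1.0 - epsilon
    a = int(rng.choice(K, p=probs))
    return a, probs[a]

def update(x, a, cost, weight=1.0, lr=0.05):
    """Importance-weighted squared-loss step on action a's scorer."""
    pred = W[a] @ x
    W[a] -= lr * weight * (pred - cost) * x

# Phase 1: fully supervised warm start (all K costs observed per example).
for _ in range(500):
    x = rng.normal(size=d)
    costs = true_costs(x)
    for a in range(K):
        update(x, a, costs[a])

# Phase 2: bandit feedback (only the chosen action's cost is observed);
# weighting by 1/propensity keeps the update unbiased for the full-feedback one.
for _ in range(2000):
    x = rng.normal(size=d)
    a, p = act(x)
    update(x, a, true_costs(x)[a], weight=1.0 / p)
```

In this sketch the two phases share one model, so if the supervised and bandit signals disagree, the warm start can hurt rather than help; handling that robustly is exactly the divergence issue the paper addresses.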
