
Learning Orthogonal Projections in Linear Bandits

lib:b83a1129967101d1 (v1.0.0)

Authors: Qiyu Kang, Wee Peng Tay
arXiv: 1906.10981
Abstract URL: https://arxiv.org/abs/1906.10981v3


In a linear stochastic bandit model, each arm is a vector in a Euclidean space and the observed return at each time step is an unknown linear function of the chosen arm at that time step. In this paper, we investigate the problem of learning the best arm in a linear stochastic bandit model, where each arm's expected reward is an unknown linear function of the projection of the arm onto a subspace. We call this the projection reward. Unlike the classical linear bandit problem, in which the observed return corresponds to the reward, the projection reward at each time step is unobservable. Such a model is useful in recommendation applications where the observed return includes corruption by each individual's biases, which we wish to exclude from the learned model. In the case where there are finitely many arms, we develop a strategy to achieve $O(|\mathbb{D}|\log n)$ regret, where $n$ is the number of time steps and $|\mathbb{D}|$ is the number of arms. In the case where each arm is chosen from an infinite compact set, our strategy achieves $O(n^{2/3}(\log{n})^{1/2})$ regret. Experiments verify the efficiency of our strategy.
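To make the setup concrete, below is a minimal, hypothetical Python sketch of one plausible reading of the model in the abstract: the observed return is linear in the full arm (and so absorbs the "bias" component outside the subspace), while the projection reward of interest is linear only in the arm's projection onto an unknown subspace. All names, dimensions, and the noise model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy simulation of a projection-reward linear bandit (illustrative only).
rng = np.random.default_rng(0)

d, k, num_arms = 5, 2, 20                  # ambient dim, subspace dim, |D|
arms = rng.normal(size=(num_arms, d))      # finite arm set D

# Orthonormal basis U of the subspace; P = U U^T is the orthogonal projection.
U, _ = np.linalg.qr(rng.normal(size=(d, k)))
P = U @ U.T

theta = rng.normal(size=d)                 # unknown linear parameter

def projection_reward(x):
    """Unobservable expected reward: linear in the projection P x of the arm."""
    return theta @ (P @ x)

def observed_return(x, noise_std=0.1):
    """Observed return: linear in the full arm plus noise, so it also carries
    the component of x outside the subspace (the 'bias' to be excluded)."""
    return theta @ x + rng.normal(scale=noise_std)

best_arm = max(range(num_arms), key=lambda i: projection_reward(arms[i]))
print("best arm by projection reward:", best_arm)
print("one noisy observed return for it:", observed_return(arms[best_arm]))
```

The gap between the two functions above is the core difficulty the abstract describes: a learner only sees `observed_return`, yet its regret is measured against the arm maximizing `projection_reward`.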

