
At A Glance

Non-Myopic Active Learning: A Reinforcement Learning Approach

Google Tech Talk, March 16, 2009
Presented by Pascal Poupart, University of Waterloo

ABSTRACT

Active learning considers the problem of actively choosing the training data. This is particularly useful in settings where training data is limited or comes at a price, so the learner needs to be "economical" in its data usage. Active learning is particularly challenging when the cost of the data varies, the learner has only partial control over the data it receives, and the value of each data point depends on the information already captured by the training data received so far. In such situations, non-myopic strategies that take into account the long-term effects of each data selection are desirable. In this talk, I will describe how non-myopic active learning can be naturally formulated as a reinforcement learning problem. This formulation is particularly useful for dealing with the exploration-exploitation dilemma that arises when the learner hesitates between selecting data that minimizes the immediate cost (exploitation) and selecting data that maximizes the long-term information gain (exploration). I will describe a Bayesian approach to optimally trade off exploitation and exploration. I will also show how to derive an analytic solution for discrete problems and an algorithm called BEETLE.
Length: 01:11:46
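The abstract's contrast between myopic and non-myopic data selection can be illustrated with a toy Bayesian model. The sketch below is not BEETLE and is not taken from the talk; it is a hypothetical, minimal example in which the learner holds Beta-Bernoulli beliefs over two data sources, a myopic strategy greedily picks the query with the largest one-step expected entropy reduction, and a non-myopic strategy plans over belief states with a finite-horizon expectimax, in the spirit of treating data selection as a reinforcement learning problem.

```python
import math

def entropy(p):
    """Binary entropy (bits) of a Bernoulli(p) prediction."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def mean(a, b):
    """Posterior mean of a Beta(a, b) belief over a Bernoulli parameter."""
    return a / (a + b)

def one_step_gain(a, b):
    """Expected reduction in predictive entropy from one more label."""
    p = mean(a, b)
    # With probability p we observe a positive label, else a negative one.
    after = p * entropy(mean(a + 1, b)) + (1 - p) * entropy(mean(a, b + 1))
    return entropy(p) - after

def plan(beliefs, horizon):
    """Non-myopic planning by expectimax over belief states.

    Returns (best total expected entropy reduction, index of the best
    first query). Exponential in the horizon, so toy sizes only.
    """
    if horizon == 0:
        return 0.0, None
    best_val, best_i = -1.0, None
    for i, (a, b) in enumerate(beliefs):
        p = mean(a, b)
        # Branch on the two possible labels and recurse on the updated belief.
        pos = beliefs[:i] + [(a + 1, b)] + beliefs[i + 1:]
        neg = beliefs[:i] + [(a, b + 1)] + beliefs[i + 1:]
        val = (one_step_gain(a, b)
               + p * plan(pos, horizon - 1)[0]
               + (1 - p) * plan(neg, horizon - 1)[0])
        if val > best_val:
            best_val, best_i = val, i
    return best_val, best_i

if __name__ == "__main__":
    # Two hypothetical data sources: one barely observed, one well observed.
    beliefs = [(1, 1), (10, 10)]
    myopic = max(range(len(beliefs)), key=lambda i: one_step_gain(*beliefs[i]))
    value, first = plan(beliefs, horizon=3)
    print(myopic, first, round(value, 4))
```

In this tiny example the greedy and planned choices happen to agree; the point of the non-myopic formulation in the talk is that in general they need not, and that BEETLE computes the Bayes-optimal trade-off analytically for discrete problems rather than by brute-force lookahead as above.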
