The Asymptotic Performance of AdaBoost

Google Tech Talks, May 24, 2007
Speaker: Peter Bartlett

ABSTRACT
Many popular classification algorithms, including AdaBoost and the support vector machine, minimize a cost function that can be viewed as a convex surrogate of the 0-1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. In this talk, we consider the universal consistency of such methods: does the risk, or expectation of the 0-1 loss, approach its optimal value, no matter what i.i.d. process generates the data?
Length: 45:37
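The contrast the abstract draws between the 0-1 loss and its convex surrogates can be sketched as follows. This is an illustrative snippet, not from the talk: the exponential loss is the surrogate AdaBoost minimizes, and the hinge loss is the one the support vector machine minimizes; function names here are our own.

```python
import math

def zero_one_loss(margin):
    # 0-1 loss on the margin m = y*f(x): 1 if misclassified (m <= 0), else 0.
    return 1.0 if margin <= 0 else 0.0

def exponential_loss(margin):
    # Convex surrogate minimized by AdaBoost: exp(-m).
    return math.exp(-margin)

def hinge_loss(margin):
    # Convex surrogate minimized by the SVM: max(0, 1 - m).
    return max(0.0, 1.0 - margin)

# Both surrogates upper-bound the 0-1 loss at every margin, which is
# one reason minimizing the surrogate risk can control the 0-1 risk.
for m in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    assert exponential_loss(m) >= zero_one_loss(m)
    assert hinge_loss(m) >= zero_one_loss(m)
```

Unlike the discontinuous 0-1 loss, both surrogates are convex in the margin, which is what makes the resulting optimization problems tractable.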
