Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

May 5, 2015

Achieving All with No Parameters: Adaptive NormalHedge

Filed under: Machine Learning — Patrick Durusau @ 3:50 pm

Achieving All with No Parameters: Adaptive NormalHedge by Haipeng Luo and Robert E. Schapire.

Abstract:

We study the classic online learning problem of predicting with expert advice, and propose a truly parameter-free and adaptive algorithm that achieves several objectives simultaneously without using any prior information. The main component of this work is an improved version of the NormalHedge.DT algorithm (Luo and Schapire, 2014), called AdaNormalHedge. On one hand, this new algorithm ensures small regret when the competitor has small loss and almost constant regret when the losses are stochastic. On the other hand, the algorithm is able to compete with any convex combination of the experts simultaneously, with a regret in terms of the relative entropy of the prior and the competitor. This resolves an open problem proposed by Chaudhuri et al. (2009) and Chernov and Vovk (2010). Moreover, we extend the results to the sleeping expert setting and provide two applications to illustrate the power of AdaNormalHedge: 1) competing with time-varying unknown competitors and 2) predicting almost as well as the best pruning tree. Our results on these applications significantly improve previous work from different aspects, and a special case of the first application resolves another open problem proposed by Warmuth and Koolen (2014) on whether one can simultaneously achieve optimal shifting regret for both adversarial and stochastic losses.

The terminology, “sleeping expert,” is particularly amusing.

Probably it would be more correct to say “unpaid expert,” because unpaid experts, at least the cleverer ones, don’t offer advice.

I first saw this in a tweet by Nikete.
