Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

December 30, 2015

That Rascally Vowpal Wabbit (2015)

Filed under: Machine Learning,Vowpal Wabbit — Patrick Durusau @ 1:12 pm

The Rascally Vowpal Wabbit (2015) by Kai-Wei Chang et al. (PDF of slides)

MLWave tweeted:

Latest Vowpal Wabbit Tutorial from NIPS 2015 (Learning to search + active learning + C# library + decision service)

Not the best-organized slide deck, but let me give you some performance numbers on Vowpal Wabbit (page 26 in the PDF):

vw: 6 lines of code, 10 seconds to train
CRFsgd: 1068 lines, 6 minutes
CRF++: 777 lines, hours

Named entity recognition (200 thousand words):

vw: 30 lines of code, 5 seconds to train
CRFsgd: 1 minute (suboptimal accuracy)
CRF++: 10 minutes (suboptimal accuracy)
SVMstr: 876 lines, 30 minutes (suboptimal accuracy)
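
For context, the vw rows above use VW's learning-to-search mode for sequence labeling. A typical invocation looks roughly like the following sketch (not taken from the slides; file names and the label count are placeholders):

  vw -d ner.train --search_task sequence --search 9 --passes 5 --cache_file ner.cache -f ner.model
  vw -d ner.test -t -i ner.model -p ner.predictions

Here ner.train would hold roughly one token per line as "label | features", with a blank line between sentences.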

Interested now?

Enjoy!

September 30, 2012

Vowpal Wabbit, version 7.0

Filed under: Machine Learning,Vowpal Wabbit — Patrick Durusau @ 4:59 pm

Vowpal Wabbit, version 7.0

From the post:

A new version of VW is out. The primary changes are:

  1. Learning Reductions: I’ve wanted to get learning reductions working and we’ve finally done it. Not everything is implemented yet, but VW now supports direct:
    1. Multiclass Classification --oaa or --ect.
    2. Cost Sensitive Multiclass Classification --csoaa or --wap.
    3. Contextual Bandit Classification --cb.
    4. Sequential Structured Prediction --searn or --dagger.

    In addition, it is now easy to build your own custom learning reductions for various plausible uses: feature diddling, custom structured prediction problems, or alternate learning reductions. This effort is far from done, but it is now in a generally useful state. Note that all learning reductions inherit the ability to do cluster parallel learning.

  2. Library interface: VW now has a basic library interface. The library provides most of the functionality of VW, with the limitation that it is monolithic and nonreentrant. These will be improved over time.
  3. Windows port: The priority of a Windows port jumped way up once we moved to Microsoft. The only feature which we know doesn’t work at present is automatic backgrounding when in daemon mode.
  4. New update rule: Stephane visited us this summer, and we fixed the default online update rule so that it is unit invariant.

There are also many other small updates including some contributed utilities that aid the process of applying and using VW.

Plans for the near future involve improving the quality of various items above, and of course better documentation: several of the reductions are not yet well documented.
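
Roughly what the reductions in item 1 look like from the command line; this is my sketch rather than anything from the post, with placeholder data files and illustrative class/action counts:

  vw -d multiclass.train --oaa 10 -f oaa.model     # one-against-all over 10 classes
  vw -d costs.train --csoaa 10 -f csoaa.model      # cost-sensitive one-against-all
  vw -d bandit.train --cb 10 -f cb.model           # contextual bandit with 10 actions

For --oaa the labels are integers 1 through 10; --csoaa expects per-class costs (e.g. "1:0.5 2:2.0 | features"); --cb expects action:cost:probability triples. Exact argument handling can differ between versions.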

A good test for your understanding of a subject is your ability to explain it.

Writing good documentation for a project like Vowpal Wabbit would benefit the project and demonstrate your chops with the software. Something to consider.

September 7, 2011

Vowpal Wabbit 6.0

Filed under: Machine Learning,Vowpal Wabbit — Patrick Durusau @ 6:47 pm

Vowpal Wabbit 6.0

From the post:

I just released Vowpal Wabbit 6.0. Since the last version:

  1. VW is now 2-3 orders of magnitude faster at linear learning, primarily thanks to Alekh. Given the baseline, this is loads of fun, allowing us to easily deal with terafeature datasets, and dwarfing the scale of any other open source project. The core improvement here comes from effective parallelization over kilonode clusters (either Hadoop or not). This code is highly scalable, so it even helps with clusters of size 2 (and doesn’t hurt for clusters of size 1). The core allreduce technique appears widely and easily reusable; we’ve already used it to parallelize Conjugate Gradient, LBFGS, and two variants of online learning. We’ll be documenting how to do this more thoroughly, but for now “README_cluster” and associated scripts should provide a good starting point.
  2. The new LBFGS code from Miro seems to commonly dominate the existing conjugate gradient code in time/quality tradeoffs.
  3. The new matrix factorization code from Jake adds a core algorithm.
  4. We finally have basic persistent daemon support, again with Jake’s help.
  5. Adaptive gradient calculations can now be made dimensionally correct, following up on Paul’s post, yielding a better algorithm. And Nikos sped it up further with SSE native inverse square root.
  6. The LDA core is perhaps twice as fast after Paul educated us about SSE and representational gymnastics.

All of the above was done without adding significant new dependencies, so the code should compile easily.

The VW mailing list has been slowly growing, and is a good place to ask questions.
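
For readers who want to poke at the new features, the corresponding switches look roughly like this; a sketch with placeholder file names, based on VW documentation of this era rather than the post itself:

  vw -d rcv1.train --bfgs --passes 20 --cache_file rcv1.cache -f bfgs.model   # batch LBFGS (item 2)
  vw -d ratings.train --rank 10 -q ui -f mf.model                             # matrix factorization over user/item namespaces (item 3)
  vw --daemon --port 26542 -i bfgs.model                                      # persistent daemon serving predictions (item 4)
  vw -d docs.train --lda 100 --passes 2 --cache_file docs.cache               # online LDA with 100 topics (item 6)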
