An Architecture for Parallel Topic Models, by Alexander Smola and Shravan Narayanamurthy.
Abstract:
This paper describes a high performance sampling architecture for inference of latent topic models on a cluster of workstations. Our system is faster than previous work by over an order of magnitude and it is capable of dealing with hundreds of millions of documents and thousands of topics.
The algorithm relies on a novel communication structure, namely the use of a distributed (key, value) storage for synchronizing the sampler state between computers. Our architecture entirely obviates the need for separate computation and synchronization phases. Instead, disk, CPU, and network are used simultaneously to achieve high performance. We show that this architecture is entirely general and that it can be extended easily to more sophisticated latent variable models such as n-grams and hierarchies.
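To get a feel for what "synchronizing the sampler state" through a (key, value) store means, here's a minimal single-process sketch. This is my reading of the abstract, not the authors' code: a plain Python dict stands in for the distributed store, and each sampler pushes count deltas and pulls back merged values one key at a time, so there is never a global synchronization barrier. The names (`Sampler`, `sync`) are mine, not from the paper.

```python
from collections import defaultdict

# A plain dict stands in for the paper's distributed (key, value) store;
# in the real system this would be a networked store shared by all machines.
global_store = defaultdict(int)  # (word, topic) -> global count


class Sampler:
    """One worker: samples against a local snapshot, reconciles deltas."""

    def __init__(self, num_topics):
        self.num_topics = num_topics
        self.local = defaultdict(int)   # cached view of the global counts
        self.delta = defaultdict(int)   # local changes not yet pushed

    def update(self, word, old_topic, new_topic):
        # Reassign one token; only local state is touched, no lock is taken.
        for topic, change in ((old_topic, -1), (new_topic, +1)):
            self.local[(word, topic)] += change
            self.delta[(word, topic)] += change

    def sync(self, word):
        # Reconcile one word with the store: push the accumulated delta,
        # then pull back the merged value other workers contributed to.
        for topic in range(self.num_topics):
            key = (word, topic)
            global_store[key] += self.delta.pop(key, 0)
            self.local[key] = global_store[key]


# Two workers reassign the same word; the store merges both deltas.
s1, s2 = Sampler(num_topics=4), Sampler(num_topics=4)
s1.update("model", old_topic=0, new_topic=2)
s2.update("model", old_topic=1, new_topic=2)
s1.sync("model")
s2.sync("model")
print(global_store[("model", 2)])  # 2 -- both increments survived the merge
```

Presumably each worker runs this kind of reconciliation continuously in background threads over its vocabulary, which is how disk, CPU, and network all stay busy at the same time instead of alternating between computation and synchronization phases.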
Interesting how this (key, value) stuff keeps coming up these days.
The authors plan to make the codebase available for public use.
Updated 30 June 2011 to include the URL supplied by Sam Hunting. (Thanks Sam!)
Here’s the link to the PDF:
http://www.comp.nus.edu.sg/~vldb2010/proceedings/files/papers/R63.pdf
Comment by shunting — June 30, 2011 @ 12:41 pm