GraphLab workshop: Why should you care?
Danny Bickson has announced the first GraphLab workshop.
The “…Why should you care?” post reads in part as follows:
Designing and implementing efficient and provably correct parallel machine learning (ML) algorithms can be very challenging. Existing high-level parallel abstractions like MapReduce are often insufficiently expressive while low-level tools like MPI and Pthreads leave ML experts repeatedly solving the same design challenges. By targeting common patterns in ML, we developed GraphLab, which improves upon abstractions like MapReduce by compactly expressing asynchronous iterative algorithms with sparse computational dependencies while ensuring data consistency and achieving a high degree of parallel performance.
In short, GraphLab is a way to run iterative algorithms on sparse graphs, with parallel processing built in. With cheap cloud computing now widely available, and with the need for post-processing in sparse recovery and for advanced matrix factorizations such as dictionary learning and robust PCA, it might be worth investigating the framework and even presenting something at this workshop….
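To make the abstraction concrete: GraphLab's core idea is a vertex-centric update function that recomputes a vertex's value from its neighbors and dynamically reschedules only the vertices whose inputs changed. The following is a toy Python sketch of that pattern (it is not GraphLab's actual API, and the PageRank update rule and graph here are illustrative choices of mine):

```python
from collections import deque

# Toy sketch of the vertex-program pattern: each vertex holds a value,
# an update function recomputes it from its in-neighbors, and a vertex
# that changes reschedules its out-neighbors. The update rule here is
# PageRank on a tiny sparse directed graph (a 3-cycle).

DAMPING = 0.85
TOL = 1e-6

def pagerank(out_edges):
    n = len(out_edges)

    # Build the reverse adjacency (sparse computational dependencies).
    in_edges = {v: [] for v in out_edges}
    for u, nbrs in out_edges.items():
        for v in nbrs:
            in_edges[v].append(u)

    rank = {v: 1.0 / n for v in out_edges}
    queue = deque(out_edges)            # schedule every vertex once
    scheduled = set(out_edges)

    while queue:
        v = queue.popleft()
        scheduled.discard(v)
        # Vertex update: recompute rank[v] from its in-neighbors only.
        new = (1 - DAMPING) / n + DAMPING * sum(
            rank[u] / len(out_edges[u]) for u in in_edges[v]
        )
        if abs(new - rank[v]) > TOL:    # propagate only real changes
            rank[v] = new
            for w in out_edges[v]:      # reschedule dependent vertices
                if w not in scheduled:
                    scheduled.add(w)
                    queue.append(w)
    return rank

ranks = pagerank({"a": ["b"], "b": ["c"], "c": ["a"]})
```

A real GraphLab program would run many such updates in parallel while the runtime enforces data consistency between neighboring updates; the single work queue above is the serial stand-in for that scheduler.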
Read the rest of “…Why should you care?” for links to resources and examples. You will care. Promise.
And, if that doesn’t completely convince you, try:
A small Q&A with Danny Bickson on GraphLab.
Me? I am just hoping for a small video camera somewhere in the audience, and for the slides and resources to be posted afterward.