DARPA* project contributes graphical models toolkit to GraphLab by Danny Bickson.
From the post:
We are proud to announce that following many months of hard work, Scott Richardson from Vision Systems Inc. has contributed a graphical models toolkit to GraphLab. Here is some information about their project:
Last year Vision Systems, Inc. (VSI) partnered with Systems & Technology Research (STR) and started working on a DARPA* project to develop intelligent, automatic, and robust computer vision technologies based on realistic conditions. Our goal is to develop a software system that lets users ask queries of photo content, such as “Does this person look familiar?” or “Where is this building located?” If successful, our technology would alert people to scenes that warrant their attention.
We had an immediate need for a solid, scalable graph-parallel computation engine to replace our internal belief propagation implementation. We quickly gravitated to GraphLab. Using this framework, we designed the Factor Graph toolkit based on Joseph Gonzalez's initial implementation. A factor graph, a type of graphical model, is a bipartite graph composed of two types of vertices: variable nodes and factor nodes. The Factor Graph toolkit translates a factor graph into a GraphLab distributed graph and performs inference using a vertex program that implements belief propagation, the well-known message-passing algorithm. Both belief propagation and factor graphs are general tools with applications in a variety of domains.
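To make the idea concrete, here is a minimal sketch of sum-product belief propagation on a tiny factor graph. This is purely illustrative NumPy code, not the toolkit's or GraphLab's API: the variables, factor tables, and message names are all invented for the example.

```python
import numpy as np

# A toy factor graph over two binary variables x1, x2 (hypothetical example,
# not GraphLab code): unary factors f1, f2 and one pairwise factor g.
f1 = np.array([0.6, 0.4])         # f1(x1)
f2 = np.array([0.3, 0.7])         # f2(x2)
g = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # g(x1, x2)

# Sum-product messages on this tree-shaped graph:
# a variable sends the product of its other incoming factor messages;
# a factor sends its table marginalized against incoming variable messages.
msg_x2_to_g = f2                  # x2's only other neighbor is f2
msg_g_to_x1 = g @ msg_x2_to_g     # sum over x2 of g(x1, x2) * message(x2)
belief_x1 = f1 * msg_g_to_x1
belief_x1 /= belief_x1.sum()      # normalize to a proper marginal

# Sanity check against brute force: p(x1) ∝ sum_x2 f1(x1) f2(x2) g(x1, x2)
joint = f1[:, None] * f2[None, :] * g
brute = joint.sum(axis=1)
brute /= brute.sum()
assert np.allclose(belief_x1, brute)
```

On a tree like this, one sweep of messages gives exact marginals; on graphs with cycles, the same updates are iterated ("loopy" belief propagation), which is where a vertex-program formulation like GraphLab's pays off.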
We are very excited to get to work on key problems in the Machine Learning/Machine Vision field and to be a part of the powerful communities, like GraphLab, that make it possible.
I admit to not always being fond of DARPA projects, but every now and again they fund something worthwhile.
If machine vision becomes robust enough, you could start a deduped porn service. 😉 I am sure other use cases will come to mind.
If you haven’t looked at GraphLab recently, you should.