Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HolE) to learn compositional vector space representations of entire knowledge graphs. The proposed method is related to holographic models of associative memory in that it employs circular correlation to create compositional representations. By using correlation as the compositional operator, HolE can capture rich interactions but simultaneously remains efficient to compute, easy to train, and scalable to very large datasets. In extensive experiments, we show that holographic embeddings are able to outperform state-of-the-art methods for link prediction on knowledge graphs and relational learning benchmark datasets.
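The compositional operator the abstract mentions, circular correlation, is cheap to compute via the FFT, which is part of why HolE scales well. Here is a minimal sketch in NumPy; the embedding vectors below are random stand-ins, not learned values, and the sigmoid scoring of a triple follows the paper's general recipe rather than any official implementation:

```python
import numpy as np

def circular_correlation(a, b):
    # [a * b]_k = sum_i a_i * b_{(k+i) mod d}, computed in O(d log d)
    # using the identity corr(a, b) = ifft(conj(fft(a)) . fft(b)).
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

# HolE scores a triple (subject, predicate, object) roughly as
# sigmoid(r_p . (e_s * e_o)); these vectors are random placeholders.
d = 8
rng = np.random.default_rng(0)
e_s, e_o, r_p = rng.standard_normal((3, d))

score = 1.0 / (1.0 + np.exp(-r_p @ circular_correlation(e_s, e_o)))
```

Note that, unlike a tensor product, the result of the correlation has the same dimension `d` as its inputs, which is the "fixed-width representation" property the authors highlight in their conclusion.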
Heavy sledding, but also a good candidate for practicing How to Read a Paper.
I suggest that in part because of this comment by the authors in the conclusion:
In future work we plan to further exploit the fixed-width representations of holographic embeddings in complex scenarios, as they are especially suitable to model higher-arity relations (e.g., taughtAt(John, AI, MIT)) and facts about facts (e.g., believes(John, loves(Tom, Mary))).
Any representation in which statements of "higher-arity relations" and "facts about facts" are not easily recorded and processed is seriously impaired when it comes to capturing human knowledge.
Perhaps capturing only triples and “facts” explains the multiple failures of the U.S. intelligence community. It is working with tools that blind and geld its raw data. The rich nuances of intelligence data are lost in a grayish paste suitable for computer consumption.
A line of research worth following. Maximilian Nickel's homepage at MIT is a good place to start.
I first saw this in a tweet by Stefano Bertolo.