Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors by Pentti Kanerva.
The paper Reflective Random Indexing and indirect inference… cites Kanerva as follows:
Random Indexing (RI) [cites omitted] has recently emerged as a scalable alternative to LSA for the derivation of spatial models of semantic distance from large text corpora. For a thorough introduction to Random Indexing and hyper-dimensional computing in general, see [Kanerva, this paper] [cite omitted].
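To make the Random Indexing idea mentioned in the quote concrete, here is a minimal sketch (my own illustration, not the cited authors' implementation; the dimensionality, sparsity, and toy corpus are all made up): each word gets a fixed sparse random index vector, and a word's meaning is accumulated as the sum of the index vectors of the words it co-occurs with.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(42)
D = 2_000    # reduced dimensionality (illustrative; real systems vary)
NNZ = 20     # nonzero entries per index vector: half +1, half -1

def index_vector():
    """Sparse ternary random index vector; such vectors are nearly orthogonal."""
    v = np.zeros(D, dtype=np.int32)
    pos = rng.choice(D, NNZ, replace=False)
    v[:] = 0
    v[pos[:NNZ // 2]] = 1
    v[pos[NNZ // 2:]] = -1
    return v

def random_indexing(docs):
    """Context vector of a word = sum of index vectors of co-occurring words."""
    index = defaultdict(index_vector)                        # word -> fixed index vector
    context = defaultdict(lambda: np.zeros(D, np.int32))     # word -> learned vector
    for doc in docs:
        for i, w in enumerate(doc):
            for j, c in enumerate(doc):
                if i != j:
                    context[w] += index[c]
    return context

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy corpus: "coffee" and "tea" share contexts, "car" does not.
docs = [["coffee", "cup", "drink"],
        ["tea", "cup", "drink"],
        ["car", "road", "drive"]]
ctx = random_indexing(docs)
```

On this toy corpus, cosine(ctx["coffee"], ctx["tea"]) comes out high and cosine(ctx["coffee"], ctx["car"]) near zero, which is the "semantic distance" effect the quote refers to, obtained without ever building or factoring a full co-occurrence matrix as LSA does.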
Kanerva’s abstract:
The 1990s saw the emergence of cognitive models that depend on very high dimensionality and randomness. They include Holographic Reduced Representations, Spatter Code, Semantic Vectors, Latent Semantic Analysis, Context-Dependent Thinning, and Vector-Symbolic Architecture. They represent things in high-dimensional vectors that are manipulated by operations that produce new high-dimensional vectors in the style of traditional computing, in what is called here hyperdimensional computing on account of the very high dimensionality. The paper presents the main ideas behind these models, written as a tutorial essay in hopes of making the ideas accessible and even provocative. A sketch of how we have arrived at these models, with references and pointers to further reading, is given at the end. The thesis of the paper is that hyperdimensional representation has much to offer to students of cognitive science, theoretical neuroscience, computer science and engineering, and mathematics.
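The "operations that produce new high-dimensional vectors" can be sketched in a few lines. This is one common instantiation (dense binary hypervectors with XOR binding and majority-vote bundling, in the style of Kanerva's Spatter Code) rather than the full range of models the abstract lists; the record-encoding example and all names below are my own illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # very high dimensionality is the point: random vectors are quasi-orthogonal

def random_hv():
    """Random dense binary hypervector."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """Binding via elementwise XOR: the result is dissimilar to both inputs."""
    return a ^ b

def bundle(*vs):
    """Bundling via elementwise majority vote (use an odd count to avoid ties):
    the result stays similar to each input."""
    return (np.sum(vs, axis=0) > len(vs) / 2).astype(np.uint8)

def sim(a, b):
    """Normalized similarity: 1.0 for identical vectors, about 0.5 for unrelated ones."""
    return 1.0 - float(np.mean(a != b))

# Encode a record of role-filler pairs as a single hypervector.
name, x, age, y, job, z = (random_hv() for _ in range(6))
record = bundle(bind(name, x), bind(age, y), bind(job, z))

# Unbinding: XOR with a role vector recovers a noisy version of its filler.
probe = bind(record, name)
```

Here sim(probe, x) lands near 0.75 while sim(probe, y) stays near chance (0.5), so the correct filler can be identified by comparing the probe against a memory of known vectors; tolerance to that kind of noise is exactly what the high dimensionality buys.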
This one will take a while to read and digest, but I will be posting on it, and on the further reading it cites, in the not-too-distant future.