Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

August 16, 2011

Semantic Vectors

Filed under: Implicit Associations,Indirect Inference,Random Indexing,Semantic Vectors — Patrick Durusau @ 7:07 pm

Semantic Vectors

From the webpage:

Semantic Vector indexes are created by applying a Random Projection algorithm to term-document matrices created using Apache Lucene. The package was created as part of a project by the University of Pittsburgh Office of Technology Management, and is now developed and maintained by contributors from the University of Texas, Queensland University of Technology, the Austrian Research Institute for Artificial Intelligence, Google Inc., and other institutions and individuals.

The package creates a WordSpace model, of the kind developed by Stanford University’s Infomap Project and other researchers during the 1990s and early 2000s. Such models are designed to represent words and documents in terms of underlying concepts, and as such can be used for many semantic (concept-aware) matching tasks such as automatic thesaurus generation, knowledge representation, and concept matching.

The Semantic Vectors package uses a Random Projection algorithm, a form of automatic semantic analysis. Other methods supported by the package include Latent Semantic Analysis (LSA) and Reflective Random Indexing. Unlike many other methods, Random Projection does not rely on the use of computationally intensive matrix decomposition algorithms like Singular Value Decomposition (SVD). This makes Random Projection a much more scalable technique in practice. Our application of Random Projection for Natural Language Processing (NLP) is descended from Pentti Kanerva’s work on Sparse Distributed Memory; in semantic analysis and text mining, this method has also been called Random Indexing. A growing number of researchers have applied Random Projection to NLP tasks, demonstrating:

  • Semantic performance comparable with other forms of Latent Semantic Analysis.
  • Significant computational performance advantages in creating and maintaining models.

So, after reading about random indexing, etc., you can take those techniques out for a spin. It doesn’t get any better than that!
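If you want the shape of the idea before installing anything, here is a minimal Python/NumPy sketch of random projection applied to a toy term-document matrix. This is my illustration, not code from the Semantic Vectors package (which is written in Java against Lucene), and all names and data in it are hypothetical:

```python
# Minimal sketch of Random Projection over a term-document matrix.
# Illustrative only; the Semantic Vectors package itself is Java/Lucene.
import numpy as np

rng = np.random.default_rng(0)

# Toy term-document counts: rows are terms, columns are documents.
terms = ["semantic", "vector", "random", "projection", "lucene"]
term_doc = rng.integers(0, 5, size=(len(terms), 8)).astype(float)

# Random projection: multiply by a sparse random matrix instead of
# computing an SVD, reducing 8 document dimensions to k dimensions.
k = 4
projection = rng.choice([-1.0, 0.0, 1.0], size=(8, k), p=[0.1, 0.8, 0.1])
term_vectors = term_doc @ projection  # each row is a reduced term vector

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Terms with similar document distributions end up close in the reduced space.
print(cosine(term_vectors[0], term_vectors[1]))
```

The point of the sketch is the cheap step in the middle: a random sign matrix stands in for the expensive SVD, which is exactly the scalability argument the project page makes.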

Distributional Semantics

Filed under: Distributional Semantics,Indexing,Indirect Inference,Random Indexing — Patrick Durusau @ 7:06 pm

Distributional Semantics.

Trevor Cohen, co-author with Roger Schvaneveldt and Dominic Widdows of Reflective Random Indexing and indirect inference…, has a page on distributional semantics which starts with:

Empirical Distributional Semantics is an emerging discipline that is primarily concerned with the derivation of semantics (or meaning) from the distribution of terms in natural language text. My research in DS is concerned primarily with spatial models of meaning, in which terms are projected into high-dimensional semantic space, and an estimate of their semantic relatedness is derived from the distance between them in this space.

The relations derived by these models have many useful applications in biomedicine and beyond. A particularly interesting property of distributional semantics models is their capacity to recognize connections between terms that do not occur together in the same document, as this has implications for knowledge discovery. In many instances it is also possible to reveal a plausible pathway linking these terms by using the distances estimated by distributional semantic models to generate a network representation, and using Pathfinder networks (PFNETS) to reveal the most significant links in this network, as shown in the example on that page.
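To make the “terms that do not occur together in the same document” point concrete, here is a small Python sketch of my own (not Cohen’s code, and with made-up toy data) showing two terms with no direct co-occurrence that are nonetheless linked through a shared neighbor, which is the situation indirect-inference methods are built to detect:

```python
# Toy illustration of "semantic relatedness from distance in a vector space"
# and of terms that never co-occur directly but share neighbors.
# Hypothetical data; not Cohen's code or data.
import numpy as np

docs = [
    "aspirin reduces inflammation",
    "inflammation causes pain",
    "pain relief with ibuprofen",
]
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}

# Term-document matrix: term vectors are rows.
M = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        M[index[w], j] += 1

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# "aspirin" and "pain" never appear in the same document, yet both
# co-occur with "inflammation"; second-order (indirect) methods such as
# Reflective Random Indexing are designed to exploit exactly this.
print(cosine(M[index["aspirin"]], M[index["pain"]]))          # 0.0 directly
print(cosine(M[index["aspirin"]], M[index["inflammation"]]))  # > 0
print(cosine(M[index["pain"]], M[index["inflammation"]]))     # > 0
```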

Links to projects, software and other cool stuff! I am making a separate post on one of his software libraries.

Hyperdimensional Computing

Filed under: Random Indexing,von Neumann Architecture — Patrick Durusau @ 7:05 pm

Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors by Pentti Kanerva.

Reflective Random Indexing and indirect inference… cites Kanerva as follows:

Random Indexing (RI) [cites omitted] has recently emerged as a scalable alternative to LSA for the derivation of spatial models of semantic distance from large text corpora. For a thorough introduction to Random Indexing and hyper-dimensional computing in general, see [Kanerva, this paper] [cite omitted].

Kanerva’s abstract:

The 1990s saw the emergence of cognitive models that depend on very high dimensionality and randomness. They include Holographic Reduced Representations, Spatter Code, Semantic Vectors, Latent Semantic Analysis, Context-Dependent Thinning, and Vector-Symbolic Architecture. They represent things in high-dimensional vectors that are manipulated by operations that produce new high-dimensional vectors in the style of traditional computing, in what is called here hyperdimensional computing on account of the very high dimensionality. The paper presents the main ideas behind these models, written as a tutorial essay in hopes of making the ideas accessible and even provocative. A sketch of how we have arrived at these models, with references and pointers to further reading, is given at the end. The thesis of the paper is that hyperdimensional representation has much to offer to students of cognitive science, theoretical neuroscience, computer science and engineering, and mathematics.
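The flavor of “operations that produce new high-dimensional vectors” is easy to show in a few lines. The following is a minimal Python sketch of one common variant (random bipolar vectors, elementwise multiplication for binding, summation and sign for bundling); it is my illustration of the style, not code from Kanerva’s paper:

```python
# Minimal sketch of Kanerva-style hyperdimensional computing with
# 10,000-dimensional random bipolar vectors. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
D = 10_000

def hypervector():
    """Random bipolar hypervector; any two are nearly orthogonal."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding (elementwise multiply): result is dissimilar to both inputs."""
    return a * b

def bundle(*vs):
    """Bundling (sum then sign): result stays similar to each input."""
    return np.sign(np.sum(vs, axis=0))

def similarity(a, b):
    return float(a @ b) / D

# Encode a tiny record {color: red, shape: square} as one hypervector.
color, red, shape, square = (hypervector() for _ in range(4))
record = bundle(bind(color, red), bind(shape, square))

# Unbinding with the 'color' role recovers something close to 'red'.
probe = bind(record, color)
print(similarity(probe, red))      # noticeably above 0
print(similarity(probe, square))   # near 0
```

The interesting part is that the whole record lives in a single vector of the same dimensionality as its parts, and queries are just more vector operations, which is the “computing in distributed representation” of the title.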

This one will take a while to read and digest, but I will be posting on it and the further reading it cites in the not too distant future.

Introduction to Random Indexing

Filed under: Indexing,Indirect Inference,Random Indexing — Patrick Durusau @ 7:04 pm

Introduction to Random Indexing by Magnus Sahlgren.

I thought this would be useful alongside Reflective Random Indexing and indirect inference….

Just a small sample of what you will find:

Note that this methodology constitutes a radically different way of conceptualizing how context vectors are constructed. In the “traditional” view, we first construct the co-occurrence matrix and then extract context vectors. In the Random Indexing approach, on the other hand, we view the process backwards, and first accumulate the context vectors. We may then construct a cooccurrence matrix by collecting the context vectors as rows of the matrix.
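A few lines of Python make the “backwards” ordering concrete: each context gets a fixed sparse random index vector, term vectors are accumulated as the text is scanned, and the matrix, if you want it at all, is only assembled at the end. This is my sketch with hypothetical toy data, not Sahlgren’s code:

```python
# Minimal sketch of the Random Indexing view Sahlgren describes:
# assign each context a sparse random index vector up front, then
# accumulate term context vectors incrementally as text streams by.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)
D = 512          # reduced dimensionality, fixed in advance
NONZERO = 8      # number of +1/-1 entries in each index vector

def index_vector():
    v = np.zeros(D)
    pos = rng.choice(D, size=NONZERO, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=NONZERO)
    return v

docs = [
    "random indexing accumulates context vectors",
    "context vectors approximate a co-occurrence matrix",
]

context_vectors = defaultdict(lambda: np.zeros(D))
for doc in docs:
    doc_index = index_vector()              # one index vector per context (here, per document)
    for term in doc.split():
        context_vectors[term] += doc_index  # accumulate; no matrix is built first

# Rows of a co-occurrence-like matrix can be read off afterwards if needed.
matrix = np.vstack([context_vectors[t] for t in sorted(context_vectors)])
print(matrix.shape)
```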

I like non-traditional approaches. Some work (like random indexing) and some don’t.

What new/non-traditional approaches have you tried in the last week? We learn as much (if not more) from failure as success.
