Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

December 5, 2015

‘Outsiders’ Crack A 50-Year-Old Math Problem

Filed under: Mathematics,Matrix,Networks — Patrick Durusau @ 3:28 pm

‘Outsiders’ Crack A 50-Year-Old Math Problem by Erica Klarreich.

From the post:

In 2008, Daniel Spielman told his Yale University colleague Gil Kalai about a computer science problem he was working on, concerning how to “sparsify” a network so that it has fewer connections between nodes but still preserves the essential features of the original network.

Network sparsification has applications in data compression and efficient computation, but Spielman’s particular problem suggested something different to Kalai. It seemed connected to the famous Kadison-Singer problem, a question about the foundations of quantum physics that had remained unsolved for almost 50 years.

Over the decades, the Kadison-Singer problem had wormed its way into a dozen distant areas of mathematics and engineering, but no one seemed to be able to crack it. The question “defied the best efforts of some of the most talented mathematicians of the last 50 years,” wrote Peter Casazza and Janet Tremain of the University of Missouri in Columbia, in a 2014 survey article.

As a computer scientist, Spielman knew little of quantum mechanics or the Kadison-Singer problem’s allied mathematical field, called C*-algebras. But when Kalai, whose main institution is the Hebrew University of Jerusalem, described one of the problem’s many equivalent formulations, Spielman realized that he himself might be in the perfect position to solve it. “It seemed so natural, so central to the kinds of things I think about,” he said. “I thought, ‘I’ve got to be able to prove that.’” He guessed that the problem might take him a few weeks.

Instead, it took him five years. In 2013, working with his postdoc Adam Marcus, now at Princeton University, and his graduate student Nikhil Srivastava, now at the University of California, Berkeley, Spielman finally succeeded. Word spread quickly through the mathematics community that one of the paramount problems in C*-algebras and a host of other fields had been solved by three outsiders — computer scientists who had barely a nodding acquaintance with the disciplines at the heart of the problem.

Why all the excitement?


The proof of the Kadison-Singer problem implies that all the constructions in its dozen incarnations can, in principle, be carried out—quantum knowledge can be extended to full quantum systems, networks can be decomposed into electrically similar ones, matrices can be broken into simpler chunks. The proof won’t change what quantum physicists do, but it could have applications in signal processing, since it implies that collections of vectors used to digitize signals can be broken down into smaller frames that can be processed faster. The theorem “has potential to affect some important engineering problems,” Casazza said.

Just so you know, the people now saying it will be years before practical results emerge from this breakthrough are the same ones who assumed the answer to the problem was negative. 😉

I’m not saying techniques based on this work will be in JavaScript libraries next year, but without trying, they never will be.

Enjoy!

I first saw this in a post by Lars Marius Garshol.

March 3, 2015

The Matrix Cheatsheet

Filed under: Julia,Matrix,Numpy,R — Patrick Durusau @ 4:31 pm

The Matrix Cheatsheet by Sebastian Raschka.

Sebastian has created a spreadsheet of thirty (30) matrix tasks and compares the code for each in: MATLAB/Octave, Python NumPy, R, and Julia.

Given the prevalence of matrices in so many data science tasks, this can’t help but be useful.
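For a taste of the kinds of tasks the cheatsheet covers, here is a minimal sketch of a few of them in NumPy alone (the toy matrix and the selection of tasks are mine, not Sebastian’s):

  import numpy as np

  # Create a small matrix and a right-hand side vector.
  A = np.array([[1., 2., 3.],
                [4., 5., 6.],
                [7., 8., 10.]])
  b = np.array([6., 15., 25.])

  A_t = A.T                        # transpose
  prod = A @ A_t                   # matrix multiplication
  A_inv = np.linalg.inv(A)         # inverse (A is non-singular)
  x = np.linalg.solve(A, b)        # solve A x = b
  eigvals = np.linalg.eigvals(A)   # eigenvalues

  print(x)         # solution of the linear system
  print(eigvals)   # spectrum of A

The cheatsheet’s value is seeing the same one-liners side by side in MATLAB/Octave, R, and Julia.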

A bit longer treatment can be found at: The Matrix Cookbook.

I first saw this in a tweet by Yhat, Inc.

October 1, 2014

Uncovering Community Structures with Initialized Bayesian Nonnegative Matrix Factorization

Filed under: Bayesian Data Analysis,Matrix,Social Graphs,Social Networks,Subgraphs — Patrick Durusau @ 3:28 pm

Uncovering Community Structures with Initialized Bayesian Nonnegative Matrix Factorization by Xianchao Tang, Tao Xu, Xia Feng, and, Guoqing Yang.

Abstract:

Uncovering community structures is important for understanding networks. Currently, several nonnegative matrix factorization algorithms have been proposed for discovering community structure in complex networks. However, these algorithms exhibit some drawbacks, such as unstable results and inefficient running times. In view of the problems, a novel approach that utilizes an initialized Bayesian nonnegative matrix factorization model for determining community membership is proposed. First, based on singular value decomposition, we obtain simple initialized matrix factorizations from approximate decompositions of the complex network’s adjacency matrix. Then, within a few iterations, the final matrix factorizations are achieved by the Bayesian nonnegative matrix factorization method with the initialized matrix factorizations. Thus, the network’s community structure can be determined by judging the classification of nodes with a final matrix factor. Experimental results show that the proposed method is highly accurate and offers competitive performance to that of the state-of-the-art methods even though it is not designed for the purpose of modularity maximization.

Some titles grab you by the lapels and say “READ ME!”, don’t they? 😉

I found the first paragraph a much friendlier summary of why you should read this paper (footnotes omitted):

Many complex systems in the real world have the form of networks whose edges are linked by nodes or vertices. Examples include social systems such as personal relationships, collaborative networks of scientists, and networks that model the spread of epidemics; ecosystems such as neuron networks, genetic regulatory networks, and protein-protein interactions; and technology systems such as telephone networks, the Internet and the World Wide Web [1]. In these networks, there are many sub-graphs, called communities or modules, which have a high density of internal links. In contrast, the links between these sub-graphs have a fairly lower density [2]. In community networks, sub-graphs have their own functions and social roles. Furthermore, a community can be thought of as a general description of the whole network to gain more facile visualization and a better understanding of the complex systems. In some cases, a community can reveal the real world network’s properties without releasing the group membership or compromising the members’ privacy. Therefore, community detection has become a fundamental and important research topic in complex networks.

If you think of “the real world network’s properties” as potential properties for identification of a network as a subject or as properties of the network as a subject, the importance of this article becomes clearer.

Being able to speak of sub-graphs as subjects with properties can only improve our ability to compare sub-graphs across complex networks.
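If you want to see the basic mechanism without the paper’s initialization machinery, here is a minimal sketch using plain NMF from scikit-learn on a toy adjacency matrix (the graph, the choice of two communities, and the use of ordinary rather than Bayesian NMF are my assumptions, for illustration only):

  import numpy as np
  from sklearn.decomposition import NMF

  # Toy undirected graph: two 4-node cliques joined by a single bridge edge.
  edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),   # community A
           (4, 5), (4, 6), (4, 7), (5, 6), (5, 7), (6, 7),   # community B
           (3, 4)]                                           # bridge
  n = 8
  A = np.zeros((n, n))
  for i, j in edges:
      A[i, j] = A[j, i] = 1.0

  # Factor the adjacency matrix A ~ W H with k = 2 latent communities.
  model = NMF(n_components=2, init='nndsvda', random_state=0, max_iter=500)
  W = model.fit_transform(A)

  # Each node joins the community with the largest factor weight.
  labels = W.argmax(axis=1)
  print(labels)   # expected to split the two cliques, e.g. [0 0 0 0 1 1 1 1]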

BTW, all the data used in this article is available for downloading: http://dx.doi.org/10.6084/m9.figshare.1149965

I first saw this in a tweet by Brian Keegan.

September 7, 2014

Matrix Methods & Applications (DRAFT)

Filed under: Mathematics,Matrix — Patrick Durusau @ 3:55 pm

Matrix Methods & Applications (DRAFT)

Stephen Boyd (Stanford) and Lieven Vandenberghe advise:

The textbook is still under very active development by Lieven Vandenberghe and Stephen Boyd, so be sure to download the newest version often. For now, we’ve posted a rough draft that does not include the exercises (which we’ll be adding). The first few chapters are in reasonable shape, but later ones are quite incomplete.

The 10 August 2014 draft has one hundred and twenty-two (122) pages so you can assume more material is coming.

I particularly like the “practical” suggested use cases.

The use cases create opportunities to illustrate the impact of data on supposedly “neutral” algorithms. Deeper knowledge of these algorithms will alert you to potential gaming of the data that lies behind their “neutral” processing.

Inspection of data is the equivalent of Mannie’s grandfather’s second rule: “Always cut cards.” (The Moon Is A Harsh Mistress)

Anyone who objects to inspection of data is hiding something. It may be their own incompetence with the data but you won’t know unless you inspect the data.

Results + algorithms + code + data = Maybe we will agree after inspection.

I first saw this in a tweet by fastml extra.

January 14, 2014

Algorithmic Music Discovery at Spotify

Filed under: Algorithms,Machine Learning,Matrix,Music,Music Retrieval,Python — Patrick Durusau @ 3:19 pm

Algorithmic Music Discovery at Spotify by Chris Johnson.

From the description:

In this presentation I introduce various Machine Learning methods that we utilize for music recommendations and discovery at Spotify. Specifically, I focus on Implicit Matrix Factorization for Collaborative Filtering, how to implement a small scale version using python, numpy, and scipy, as well as how to scale up to 20 Million users and 24 Million songs using Hadoop and Spark.

Among a number of interesting points, Chris points out differences between movie and music data.

One difference is that songs are consumed over and over again. Another is that users rate movies but “vote” by their streaming behavior on songs.*

Which leads to Chris’ main point: implicit matrix factorization (code). The source code page points to: Collaborative Filtering for Implicit Feedback Datasets by Yifan Hu, Yehuda Koren, and Chris Volinsky.

Scaling that process is represented in blocks for Hadoop and Spark.

* I suspect that “behavior” is more reliable than “ratings” from the same user. My reasoning: ratings are more likely to be subject to social influences. I don’t have any research at my fingertips on that issue. Do you?
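To make the “small scale version using python, numpy, and scipy” concrete, here is a NumPy-only sketch of implicit-feedback alternating least squares in the spirit of the Hu/Koren/Volinsky paper; the toy play-count matrix and the alpha/regularization values are my assumptions, not anything from Chris’s talk:

  import numpy as np

  # Toy play-count matrix: rows = users, columns = songs.
  plays = np.array([[5, 0, 0, 2],
                    [0, 3, 1, 0],
                    [4, 0, 0, 1],
                    [0, 2, 4, 0]], dtype=float)

  n_users, n_items = plays.shape
  k, alpha, lam, iters = 2, 40.0, 0.1, 15

  P = (plays > 0).astype(float)   # binary preference
  C = 1.0 + alpha * plays         # confidence in each observation

  rng = np.random.default_rng(0)
  X = rng.normal(scale=0.1, size=(n_users, k))   # user factors
  Y = rng.normal(scale=0.1, size=(n_items, k))   # item factors

  def half_step(factors, C_side, P_side, lam):
      # Solve for one side's factors while the other side is held fixed.
      dim = factors.shape[1]
      out = np.zeros((C_side.shape[0], dim))
      for u in range(C_side.shape[0]):
          Cu = np.diag(C_side[u])
          A = factors.T @ Cu @ factors + lam * np.eye(dim)
          b = factors.T @ Cu @ P_side[u]
          out[u] = np.linalg.solve(A, b)
      return out

  for _ in range(iters):
      X = half_step(Y, C, P, lam)       # fix songs, solve users
      Y = half_step(X, C.T, P.T, lam)   # fix users, solve songs

  print(np.round(X @ Y.T, 2))           # predicted preference scores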

October 16, 2013

LIBMF: …

Filed under: Machine Learning,Matrix,Recommendation — Patrick Durusau @ 6:04 pm

LIBMF: A Matrix-factorization Library for Recommender Systems by Machine Learning Group at National Taiwan University.

From the webpage:

LIBMF is an open source tool for approximating an incomplete matrix using the product of two matrices in a latent space. Matrix factorization is commonly used in collaborative filtering. Main features of LIBMF include:

  • In addition to the latent user and item features, we add user bias, item bias, and average terms for better performance.
  • LIBMF can be parallelized on a multi-core machine. To make our package more efficient, we use SSE instructions to accelerate the vector product operations. For a data set of 250M ratings, LIBMF takes less than eight minutes to converge to a reasonable level.

Download

The current release (Version 1.0, Sept 2013) of LIBMF can be obtained by downloading the zip file or tar.gz file. Please read the COPYRIGHT notice before using LIBMF.

Documentation

The algorithm of LIBMF is described in the following paper:

Y. Zhuang, W.-S. Chin, Y.-C. Juan, and C.-J. Lin. A Fast Parallel SGD for Matrix Factorization in Shared Memory Systems. Proceedings of ACM Recommender Systems 2013.

See the README in the package for practical use.

Curious about what “practical use” would look like in the README ;-), I discovered a demo data set and basic instructions for use.

For the details of application for recommendations, see the paper.
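LIBMF itself is compiled code; to see what the model it fits looks like (latent user and item factors plus user bias, item bias, and an average term, trained by SGD), here is a single-threaded Python sketch on a made-up toy data set. It illustrates the model, not LIBMF’s own implementation:

  import numpy as np

  # Toy (user, item, rating) triples, not LIBMF's demo data set.
  ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0),
             (1, 2, 1.0), (2, 1, 2.0), (2, 2, 4.0)]
  n_users, n_items, k = 3, 3, 2
  lr, reg, epochs = 0.01, 0.05, 200

  rng = np.random.default_rng(0)
  P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
  Q = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors
  bu = np.zeros(n_users)                         # user bias
  bi = np.zeros(n_items)                         # item bias
  mu = np.mean([r for _, _, r in ratings])       # average term

  for _ in range(epochs):
      for u, i, r in ratings:
          err = r - (mu + bu[u] + bi[i] + P[u] @ Q[i])
          bu[u] += lr * (err - reg * bu[u])
          bi[i] += lr * (err - reg * bi[i])
          P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                        Q[i] + lr * (err * P[u] - reg * Q[i]))

  # Predict a pair the model never saw during training.
  print(round(mu + bu[0] + bi[2] + P[0] @ Q[2], 2))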

June 4, 2013

Understanding matrices intuitively, part 1

Filed under: Mathematics,Matrix — Patrick Durusau @ 10:27 am

Understanding matrices intuitively, part 1 by William Gould.

From the post:

I want to show you a way of picturing and thinking about matrices. The topic for today is the square matrix, which we will call A. I’m going to show you a way of graphing square matrices, although we will have to limit ourselves to the 2 x 2 case. That will be, as they say, without loss of generality. The technique I’m about to show you could be used with 3 x 3 matrices if you had a better 3-dimensional monitor, and as will be revealed, it could be used on 3 x 2 and 2 x 3 matrices, too. If you had more imagination, we could use the technique on 4 x 4, 5 x 5, and even higher-dimensional matrices.

Matrices are quite common in information retrieval texts.

William’s post is an uncommonly good explanation of how to think about and picture matrices.
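Gould works through the pictures in Stata; if you want to reproduce the basic idea yourself, a NumPy/matplotlib sketch (mine, not his) of how a 2 x 2 matrix moves the unit square looks like this:

  import numpy as np
  import matplotlib.pyplot as plt

  A = np.array([[2.0, 1.0],
                [0.5, 1.5]])   # the square matrix we want to "picture"

  # Corners of the unit square, as columns, traced back to the start.
  square = np.array([[0, 1, 1, 0, 0],
                     [0, 0, 1, 1, 0]], dtype=float)
  image = A @ square             # where A sends the square

  fig, ax = plt.subplots()
  ax.plot(square[0], square[1], label='unit square')
  ax.plot(image[0], image[1], label='A times unit square')
  ax.set_aspect('equal')
  ax.legend()
  plt.show()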

I first saw this in Christophe Lalanne’s A bag of tweets / May 2013.

April 16, 2013

The non-negative matrix factorization toolbox for biological data mining

Filed under: Bioinformatics,Data Mining,Matrix — Patrick Durusau @ 4:20 pm

The non-negative matrix factorization toolbox for biological data mining by Yifeng Li and Alioune Ngom. (Source Code for Biology and Medicine 2013, 8:10 doi:10.1186/1751-0473-8-10)

From the post:

Background: Non-negative matrix factorization (NMF) has been introduced as an important method for mining biological data. Though there currently exist packages implemented in R and other programming languages, they either provide only a few optimization algorithms or focus on a specific application field. There does not exist a complete NMF package that the bioinformatics community can use to perform various data mining tasks on biological data.

Results: We provide a convenient MATLAB toolbox containing both the implementations of various NMF techniques and a variety of NMF-based data mining approaches for analyzing biological data. Data mining approaches implemented within the toolbox include data clustering and bi-clustering, feature extraction and selection, sample classification, missing values imputation, data visualization, and statistical comparison.

Conclusions: A series of analysis such as molecular pattern discovery, biological process identification, dimension reduction, disease prediction, visualization, and statistical comparison can be performed using this toolbox.

Written in a bioinformatics context but also used in text data mining (Enron emails), spectral analysis and other data mining fields. (See Non-negative matrix factorization)
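The toolbox is MATLAB; for readers who just want the core NMF idea in a few lines, here is a Python sketch of the classic Lee-Seung multiplicative updates on a made-up non-negative matrix (my illustration, not part of the toolbox):

  import numpy as np

  def nmf(V, k, iters=500, eps=1e-9):
      """Basic NMF via Lee-Seung multiplicative updates (Frobenius norm)."""
      rng = np.random.default_rng(0)
      n, m = V.shape
      W = rng.random((n, k))
      H = rng.random((k, m))
      for _ in range(iters):
          H *= (W.T @ V) / (W.T @ W @ H + eps)
          W *= (V @ H.T) / (W @ H @ H.T + eps)
      return W, H

  # Toy non-negative "expression" matrix: genes as rows, samples as columns.
  V = np.array([[5, 4, 0, 1],
                [4, 5, 1, 0],
                [0, 1, 5, 4],
                [1, 0, 4, 5]], dtype=float)

  W, H = nmf(V, k=2)
  print(np.round(W @ H, 1))   # low-rank reconstruction of V
  print(H.argmax(axis=0))     # cluster label per sample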

September 25, 2012

Twitter’s Scalding and Algebird: Matrix and Lightweight Algebra Library

Filed under: Algebird,Matrix,Scalding — Patrick Durusau @ 3:11 pm

Twitter’s Scalding and Algebird: Matrix and Lightweight Algebra Library by Alex Popescu.

Alex points out:

  1. Scalding now includes a type-safe Matrix API
  2. In the familiar Fields API, we’ve added the ability to add type information to fields which allows scalding to pick up Ordering instances so that grouping on almost any scala collection becomes easy.
  3. Algebird is our lightweight abstract algebra library for Scala and is targeted for building aggregation systems (such as Storm).

Of the three, I am going to take a look at Algebird first.

November 28, 2011

3 surprising facts about the computation of scalar products

Filed under: Mathematics,Matrix — Patrick Durusau @ 7:06 pm

3 surprising facts about the computation of scalar products by Daniel Lemire.

From the post:

The speed of many algorithms depends on how quickly you can multiply matrices or compute distances. In turn, these computations depend on the scalar product. Given two arrays such as (1,2) and (5,3), the scalar product is the sum of products 1 × 5 + 2 × 3. We have strong incentives to compute the scalar product as quickly as possible.

Sorry, can’t tell you the three things because that would ruin the surprise. 😉 See Daniel’s blog for the details.
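Daniel’s post is about how such a simple operation can be computed faster or slower than you expect. A generic timing sketch (not one of his three facts) makes the point that implementation matters:

  import numpy as np
  from timeit import timeit

  a = np.random.rand(1_000_000)
  b = np.random.rand(1_000_000)

  def python_dot(a, b):
      # Scalar product written as an interpreted Python loop.
      return sum(x * y for x, y in zip(a, b))

  print(timeit(lambda: python_dot(a, b), number=3))   # slow, pure Python
  print(timeit(lambda: a @ b, number=3))              # fast, vectorized BLAS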

September 18, 2011

Approaching optimality for solving SDD systems

Filed under: Algorithms,Mathematics,Matrix — Patrick Durusau @ 7:29 pm

In October 2010, the authors presented this paper:

Approaching optimality for solving SDD systems by Ioannis Koutis, Gary L. Miller, and Richard Peng.

Public reports on that paper can be found in A Breakthrough in Algorithm Design, in the September 2011 issue of CACM, and at PC Pro in Algorithm sees massive jump in complex number crunching.

The claim is that the new approach will be a billion times faster than traditional techniques.

In February 2011, the authors posted a new and improved version of their algorithm in:

A nearly-mlogn time solver for SDD linear systems.

Koutis has written a MATLAB implementation at: CMG: Combinatorial Multigrid

For further background, see: Combinatorial Preconditioning, sparsification, local clustering, low-stretch trees, etc. by Spielman, one of the principal researchers in this area.
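If “SDD system” is unfamiliar, graph Laplacians are the canonical example. The sketch below is not the Koutis-Miller-Peng solver or CMG, just a standard conjugate-gradient solve on a path-graph Laplacian, to show the kind of system these algorithms attack:

  import numpy as np
  from scipy.sparse import diags
  from scipy.sparse.linalg import cg

  # Laplacian of a path graph on n nodes: a classic SDD matrix.
  n = 200
  A = diags([np.ones(n - 1), np.ones(n - 1)], offsets=[-1, 1], format='csr')
  L = diags(np.asarray(A.sum(axis=1)).ravel()) - A

  b = np.random.default_rng(0).normal(size=n)
  b -= b.mean()   # keep b in the range of the (singular) Laplacian

  x, info = cg(L, b, atol=1e-10)
  print(info, np.linalg.norm(L @ x - b))   # info == 0 means CG converged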

The most obvious application in topic maps would be recommender systems that bring possible merges to a topic map author’s attention or even perform merging on specified conditions. (If the application doesn’t seem obvious, read the post I refer to in Text Feature Extraction (tf-idf) – Part 1 again. It will also give you some ideas about scalable merging tests.)

Years ago Lars Marius told me that topic maps needed to scale on laptops to be successful. It looks like algorithms are catching up to meet his requirement.

September 2, 2011

Welcome to The Matrix Factorization Jungle

Filed under: Data Mining,Matrix — Patrick Durusau @ 7:55 pm

Welcome to The Matrix Factorization Jungle [A living documentation of the state-of-the-art algorithms dedicated to matrix factorization]

From the webpage:

Matrix Decompositions has a long history and generally centers around a set of known factorizations such as LU, QR, SVD and eigendecompositions. With the advent of new methods based on random projections and convex optimization that started in part in the compressive sensing literature, we are seeing a surge of very diverse algorithms dedicated to many different kinds of matrix factorizations with constraints based on rank, positivity, sparsity,… As a result of this large increase in interest, I have decided to keep a list of them here following the success of the big picture in compressive sensing.

If you are unfamiliar with the use of matrices in data mining, consider Non-negative matrix factorization and the examples cited under Text mining.
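If the factorizations named in that paragraph are just names to you, here is a small NumPy/SciPy sketch of LU, QR, SVD, and an eigendecomposition on a toy matrix of my own choosing:

  import numpy as np
  from scipy.linalg import lu, qr, svd

  A = np.array([[4., 3., 2.],
                [6., 3., 1.],
                [8., 4., 9.]])

  Pm, Lm, Um = lu(A)        # LU with partial pivoting: A = Pm @ Lm @ Um
  Q, R = qr(A)              # QR: A = Q @ R
  U, s, Vt = svd(A)         # SVD: A = U @ diag(s) @ Vt
  w, V = np.linalg.eig(A)   # eigendecomposition: A @ V = V @ diag(w)

  for name, B in [('LU', Pm @ Lm @ Um), ('QR', Q @ R),
                  ('SVD', U @ np.diag(s) @ Vt)]:
      print(name, np.allclose(A, B))   # each factorization rebuilds A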

April 11, 2011

University of Florida Sparse Matrix Collection

Filed under: Algorithms,Computational Geometry,Graphs,Matrix,Networks — Patrick Durusau @ 5:49 am

University of Florida Sparse Matrix Collection

It’s not what you think. Well, it is but it is so much more. You really have to see the images at this site.

Abstract (from the paper by the same title):

We describe the University of Florida Sparse Matrix Collection, a large and actively growing set of sparse matrices that arise in real applications. The Collection is widely used by the numerical linear algebra community for the development and performance evaluation of sparse matrix algorithms. It allows for robust and repeatable experiments: robust because performance results with artificially-generated matrices can be misleading, and repeatable because matrices are curated and made publicly available in many formats. Its matrices cover a wide spectrum of domains, include those arising from problems with underlying 2D or 3D geometry (as structural engineering, computational fluid dynamics, model reduction, electromagnetics, semiconductor devices, thermodynamics, materials, acoustics, computer graphics/vision, robotics/kinematics, and other discretizations) and those that typically do not have such geometry (optimization, circuit simulation, economic and financial modeling, theoretical and quantum chemistry, chemical process simulation, mathematics and statistics, power networks, and other networks and graphs). We provide software for accessing and managing the Collection, from MATLAB, Mathematica, Fortran, and C, as well as an online search capability. Graph visualization of the matrices is provided, and a new multilevel coarsening scheme is proposed to facilitate this task.

A Java viewer for matrices is also found here.
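If you download one of the matrices in Matrix Market format, SciPy will read it directly; the file name below is only an example, not a recommendation from the collection’s authors:

  import scipy.io
  import matplotlib.pyplot as plt

  # Assumes a matrix from the collection has been saved locally as .mtx.
  A = scipy.io.mmread('bcsstk01.mtx').tocsr()
  print(A.shape, A.nnz)      # dimensions and number of nonzeros

  plt.spy(A, markersize=1)   # a rough, local version of the site's images
  plt.show()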

February 8, 2011

The Matrix Cookbook – Post

Filed under: Matrix — Patrick Durusau @ 9:50 am

The Matrix Cookbook, a post by Bob Carpenter (LingPipe Blog), points to a “matrix cheatsheet” by K. B. Petersen and M. S. Pedersen.

Good for those of you starting to use matrix methods for IR and hence building topic maps.

December 11, 2010

Decomposer

Filed under: Matrix,Search Engines,Vectors — Patrick Durusau @ 1:19 pm

Decomposer

From the website:

Matrix algebra underpins the way many Big Data algorithms and data structures are composed: full-text search can be viewed as doing matrix multiplication of the term-document matrix by the query vector (giving a vector over documents where the components are the relevance score), computing co-occurrences in a collaborative filtering context (people who viewed X also viewed Y, or ratings-based CF like the Netflix Prize contest) amounts to squaring the user-item interaction matrix, calculating users who are k degrees separated from each other in a social network or web-graph can be done by looking at the k-fold product of the graph adjacency matrix, and the list goes on (and these are all cases where the linear structure of the matrix is preserved!)
….
Currently implemented: Singular Value Decomposition using the Asymmetric Generalized Hebbian Algorithm outlined in Genevieve Gorrell & Brandyn Webb’s paper, and a Lanczos implementation, both single-threaded, plus a Hadoop map-reduce (series of) job(s) in the contrib/hadoop subdirectory. Coming soon: stochastic decomposition.

This code is in the process of being absorbed into the Apache Mahout Machine Learning Project.

Useful in learning to use search technology but also for recognizing at a very fundamental level, the limitations of that technology.

Document and query vectors are constructed without regard to the semantics of their components.

Using co-occurrence, for example, doesn’t give a search engine greater access to the semantics of the terms in question.

It simply makes the vector longer, so matches are less frequent and, hopefully, less frequent = more precise.

That may or may not be the case. It also doesn’t account for the case where the vectors are different but the subject in question is the same.
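To see the matrix-multiplication view of search that the quote describes, and the limitation just mentioned, here is a toy sketch (the vocabulary and counts are invented):

  import numpy as np

  # Tiny term-document matrix: rows = terms, columns = documents.
  terms = ['matrix', 'search', 'graph']
  td = np.array([[2, 0, 1],    # 'matrix'
                 [1, 1, 0],    # 'search'
                 [0, 3, 1]],   # 'graph'
                dtype=float)

  # The query "matrix search" as a vector over the same terms.
  q = np.array([1.0, 1.0, 0.0])

  scores = q @ td   # one relevance score per document
  print(scores)     # [3. 1. 1.] so document 0 ranks first

  # The limitation discussed above: the components carry no semantics.
  # A document that says 'array' instead of 'matrix' scores zero here,
  # even though a human reader might treat them as the same subject.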
