Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

October 1, 2014

Uncovering Community Structures with Initialized Bayesian Nonnegative Matrix Factorization

Filed under: Bayesian Data Analysis,Matrix,Social Graphs,Social Networks,Subgraphs — Patrick Durusau @ 3:28 pm

Uncovering Community Structures with Initialized Bayesian Nonnegative Matrix Factorization by Xianchao Tang, Tao Xu, Xia Feng, and Guoqing Yang.

Abstract:

Uncovering community structures is important for understanding networks. Currently, several nonnegative matrix factorization algorithms have been proposed for discovering community structure in complex networks. However, these algorithms exhibit some drawbacks, such as unstable results and inefficient running times. In view of these problems, a novel approach that utilizes an initialized Bayesian nonnegative matrix factorization model for determining community membership is proposed. First, based on singular value decomposition, we obtain simple initialized matrix factorizations from approximate decompositions of the complex network’s adjacency matrix. Then, within a few iterations, the final matrix factorizations are achieved by the Bayesian nonnegative matrix factorization method with the initialized matrix factorizations. Thus, the network’s community structure can be determined by judging the classification of nodes with a final matrix factor. Experimental results show that the proposed method is highly accurate and offers performance competitive with state-of-the-art methods, even though it is not designed for modularity maximization.

Some titles grab you by the lapels and say, “READ ME!” don’t they? 😉

I found the first paragraph a much friendlier summary of why you should read this paper (footnotes omitted):

Many complex systems in the real world have the form of networks whose edges are linked by nodes or vertices. Examples include social systems such as personal relationships, collaborative networks of scientists, and networks that model the spread of epidemics; ecosystems such as neuron networks, genetic regulatory networks, and protein-protein interactions; and technology systems such as telephone networks, the Internet and the World Wide Web [1]. In these networks, there are many sub-graphs, called communities or modules, which have a high density of internal links. In contrast, the links between these sub-graphs have a fairly lower density [2]. In community networks, sub-graphs have their own functions and social roles. Furthermore, a community can be thought of as a general description of the whole network to gain more facile visualization and a better understanding of the complex systems. In some cases, a community can reveal the real world network’s properties without releasing the group membership or compromising the members’ privacy. Therefore, community detection has become a fundamental and important research topic in complex networks.

If you think of “the real world network’s properties” as potential properties for identifying a network as a subject, or as properties of the network as a subject, the importance of this article becomes clearer.

Being able to speak of sub-graphs as subjects with properties can only improve our ability to compare sub-graphs across complex networks.
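
For the curious, here is a minimal sketch of the general idea in Python. It is not the authors’ Bayesian model: scikit-learn’s NMF with its NNDSVD initialization stands in for the paper’s SVD-based initialization and Bayesian updates, and the karate club graph is just a convenient toy network.

```python
# Sketch: SVD-initialized NMF for community detection (not the paper's
# Bayesian model; scikit-learn's NMF with NNDSVD init stands in for it).
import networkx as nx
from sklearn.decomposition import NMF

def nmf_communities(G, k):
    """Assign each node of G to one of k communities via NMF of the
    adjacency matrix, initialized from an SVD-based (NNDSVD) start."""
    nodes = list(G.nodes())
    A = nx.to_numpy_array(G, nodelist=nodes)          # adjacency matrix
    model = NMF(n_components=k, init="nndsvd", max_iter=300)
    W = model.fit_transform(A)                        # A ~ W @ H
    labels = W.argmax(axis=1)                         # strongest factor per node
    return dict(zip(nodes, labels))

if __name__ == "__main__":
    G = nx.karate_club_graph()                        # classic 2-community test graph
    print(nmf_communities(G, k=2))
```

Each node is assigned to the factor in which it has the largest weight, which mirrors the “judging the classification of nodes with a final matrix factor” step in the abstract.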

BTW, all the data used in this article is available for downloading: http://dx.doi.org/10.6084/m9.figshare.1149965

I first saw this in a tweet by Brian Keegan.

October 8, 2013

Subgraph Frequencies

Filed under: Graphs,Subgraphs — Patrick Durusau @ 4:18 pm

Subgraph Frequencies by Johan Ugander.

From the webpage:

In our upcoming paper “Subgraph Frequencies: Mapping the Empirical and Extremal Geography of Large Graph Collections” (PDF), to appear in May 2013 at the 22nd ACM International World Wide Web Conference, we found that there are fundamental combinatorial constraints governing the frequency of subgraphs that constrain all graphs in previously undocumented ways. We also found that there are empirical properties of social graphs that relegate their subgraph frequencies to a seriously restricted subspace of the full feasible space. Together, we present two complementary frameworks that shed light on a fundamental question pertaining to social graphs:

What properties of social graphs are ‘social’ properties and what properties are ‘graph’ properties?

We contribute two frameworks for analyzing this question: one characterizing extremal properties of all graphs and one characterizing empirical properties of social graphs. Together the two frameworks offer a direct contribution to one of the most well-known observations about social graphs: the tendency of social relationships to close triangles, and the relative infrequency of what is sometimes called the ‘forbidden triad’: three people with two social relationships between them, but one absent relationship. Our frameworks show that the frequency of this ‘forbidden triad’ is non-trivially restricted not just in social graphs, but in all graphs.

Harnessing our results more generally, we are in fact able to show that almost all k node subgraphs have a frequency that is non-trivially bounded. Thus, there is an extent to which almost all subgraphs are mathematically ‘forbidden’ from occurring beyond a certain frequency.

Interesting to see “discovery” of “empirical properties” of social graphs.

What empirical properties will you discover while processing a social graph with graph software?
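
If you want to poke at that question with graph software, here is a rough sketch that estimates 3-node subgraph frequencies by random sampling. It is only a stand-in for the paper’s framework, and the Watts-Strogatz graph is a made-up toy rather than a real social graph; the count for exactly two edges is the “forbidden triad” discussed above.

```python
# Sketch: empirical 3-node subgraph frequencies by random sampling
# (a rough stand-in for the paper's subgraph-frequency framework).
import random
from itertools import combinations
import networkx as nx

def triad_frequencies(G, samples=50_000, seed=0):
    """Estimate how often a random 3-node subset induces 0, 1, 2
    ('forbidden triad'), or 3 edges."""
    rng = random.Random(seed)
    nodes = list(G.nodes())
    counts = {0: 0, 1: 0, 2: 0, 3: 0}
    for _ in range(samples):
        trio = rng.sample(nodes, 3)
        edges = sum(1 for u, v in combinations(trio, 2) if G.has_edge(u, v))
        counts[edges] += 1
    return {k: v / samples for k, v in counts.items()}

if __name__ == "__main__":
    G = nx.watts_strogatz_graph(1000, 10, 0.1, seed=1)  # toy 'social-ish' graph
    print(triad_frequencies(G))
```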

October 16, 2012

Core & Peel Algorithm

Filed under: Graphs,Networks,Subgraphs — Patrick Durusau @ 5:04 am

Detecting dense communities in large social and information networks with the Core & Peel algorithm by Marco Pellegrini, Filippo Geraci, Miriam Baglioni.

Abstract:

Detecting and characterizing dense subgraphs (tight communities) in social and information networks is an important exploratory tool in social network analysis. Several approaches have been proposed that either (i) partition the whole network into clusters, even in low density regions, or (ii) aim at finding a single densest community (and need to be iterated to find the next one). As social networks grow larger, both approaches (i) and (ii) result in algorithms too slow to be practical, in particular when speed in analyzing the data is required. In this paper we propose an approach that aims at balancing efficiency of computation with expressiveness and manageability of the output community representation. We define the notion of a partial dense cover (PDC) of a graph. Intuitively, a PDC of a graph is a collection of sets of nodes such that (a) each set forms a disjoint dense induced subgraph and (b) its removal leaves the residual graph without dense regions. Exact computation of a PDC is an NP-complete problem, thus we propose an efficient heuristic algorithm for computing a PDC, which we christen Core and Peel. Moreover, we propose a novel benchmarking technique that allows us to evaluate algorithms for computing PDC using the classical IR concepts of precision and recall, even without a gold standard. Tests on 25 social and technological networks from the Stanford Large Network Dataset Collection confirm that Core and Peel is efficient and attains very high precision and recall.

Great name for an algorithm, marred somewhat by the long paper title.

If subgraphs or small groups in a network are among your subjects, take the time to review this new graph exploration technique.
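
As a rough illustration of the core-then-peel intuition (not the authors’ exact algorithm), the sketch below repeatedly strips out the innermost k-core, keeps its connected components as candidate dense communities, and stops when the residual graph has no sufficiently dense region; min_core is a made-up threshold.

```python
# Sketch of the core-then-peel intuition (not the authors' exact algorithm):
# repeatedly take the innermost k-core, keep its connected components as
# candidate dense communities, peel them off, and continue.
import networkx as nx

def core_and_peel_sketch(G, min_core=3):
    """Greedy partial dense cover: strip out dense cores until only
    low-core-number residue remains."""
    H = G.copy()
    communities = []
    while H.number_of_nodes() > 0:
        core_numbers = nx.core_number(H)
        k = max(core_numbers.values())
        if k < min_core:
            break                                   # residual graph has no dense region
        inner = [n for n, c in core_numbers.items() if c == k]
        dense = H.subgraph(inner)
        for comp in nx.connected_components(dense): # each component is a candidate community
            communities.append(set(comp))
        H.remove_nodes_from(inner)                  # peel the core off and repeat
    return communities

if __name__ == "__main__":
    G = nx.gnp_random_graph(300, 0.05, seed=42)
    for i, c in enumerate(core_and_peel_sketch(G, min_core=5)):
        print(i, len(c))
```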

June 12, 2012

Predicting link directions via a recursive subgraph-based ranking

Filed under: Graphs,Prediction,Ranking,Subgraphs — Patrick Durusau @ 1:13 pm

Predicting link directions via a recursive subgraph-based ranking by Fangjian Guo, Zimo Yang, and Tao Zhou.

Abstract:

Link directions are essential to the functionality of networks and their prediction is helpful towards a better knowledge of directed networks from incomplete real-world data. We study the problem of predicting the directions of some links by using the existence and directions of the rest of links. We propose a solution by first ranking nodes in a specific order and then predicting each link as stemming from a lower-ranked node towards a higher-ranked one. The proposed ranking method works recursively by utilizing local indicators on multiple scales, each corresponding to a subgraph extracted from the original network. Experiments on real networks show that the directions of a substantial fraction of links can be correctly recovered by our method, which outperforms either purely local or global methods.
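
To make the “rank, then orient” idea concrete, here is a minimal sketch. PageRank on the undirected version of the graph is only a stand-in for the paper’s recursive subgraph-based ranking, and the scale-free toy graph is a made-up example; the point is simply that each edge is predicted to run from the lower-ranked node to the higher-ranked one.

```python
# Sketch of the rank-then-orient idea: rank nodes (here with PageRank as a
# simple stand-in for the paper's recursive subgraph-based ranking), then
# predict every edge as pointing from the lower-scored node to the higher-
# scored one, and check how many true directions that recovers.
import networkx as nx

def direction_accuracy(D):
    """Fraction of directed edges in D whose direction is recovered by
    a 'low score -> high score' rule on an undirected ranking."""
    scores = nx.pagerank(D.to_undirected())          # ranking ignores true directions
    correct = sum(1 for u, v in D.edges() if scores[u] <= scores[v])
    return correct / D.number_of_edges()

if __name__ == "__main__":
    D = nx.scale_free_graph(2000, seed=7)            # toy directed network
    D = nx.DiGraph(D)                                # collapse parallel edges
    D.remove_edges_from(nx.selfloop_edges(D))
    print(f"recovered directions: {direction_accuracy(D):.2%}")
```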

This paper focuses mostly on predicting the direction of links, relying on other research for the question of link existence.

I mention it because predicting links and their directions will be important for planning graph database deployments in particular.

It will be a little late to find out, under full load, that other modeling choices should have been made. (It is usually under “full load” conditions that retrospectives on modeling choices come up.)

November 23, 2010

Mining Interesting Subgraphs….

Filed under: Data Mining,Sampling,Subgraphs — Patrick Durusau @ 7:00 am

Mining Interesting Subgraphs by Output Space Sampling

Mohammad Al Hasan’s dissertation was the winner of the SIGKDD Ph.D. Dissertation Award.

From the dissertation:

Output space sampling is an entire paradigm shift in frequent pattern mining (FPM) that holds enormous promise. While traditional FPM strives for completeness, OSS aims to obtain a few interesting samples. The definition of interestingness can be very generic, so users can sample patterns from different target distributions by choosing different interestingness functions. This is very beneficial, as mined patterns are subject to subsequent use in various knowledge discovery tasks, like classification, clustering, outlier detection, etc., and the interestingness score of a pattern varies across tasks. OSS can adapt to this requirement just by changing the interestingness function. OSS also solves the pattern redundancy problem by finding samples that are very different from each other. Note that pattern redundancy hurts any knowledge-based system that builds metrics based on the structural similarity of the patterns.
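
As a loose analogy (not Al Hasan’s actual output space sampler), the sketch below runs a Metropolis-style walk over fixed-size node subsets of a single graph, with the target distribution shaped by a pluggable interestingness function; edge density is a made-up choice of interestingness.

```python
# Loose sketch of the 'sample by interestingness' idea (not Al Hasan's
# actual output-space sampler): a Metropolis-style walk over fixed-size
# node subsets, where the target distribution is shaped by a pluggable
# interestingness function (here: edge density of the induced subgraph).
import random
import networkx as nx

def density(G, nodes):
    return nx.density(G.subgraph(nodes)) + 1e-9      # avoid zero weights

def sample_interesting_subgraphs(G, size=5, steps=5000, interestingness=density, seed=0):
    rng = random.Random(seed)
    current = set(rng.sample(list(G.nodes()), size))
    samples = []
    for _ in range(steps):
        # propose swapping one node of the subset for a random outside node
        out_node = rng.choice(sorted(current))
        in_node = rng.choice([n for n in G.nodes() if n not in current])
        proposal = (current - {out_node}) | {in_node}
        accept = interestingness(G, proposal) / interestingness(G, current)
        if rng.random() < accept:                    # Metropolis acceptance
            current = proposal
        samples.append(frozenset(current))
    return samples

if __name__ == "__main__":
    G = nx.karate_club_graph()
    picks = sample_interesting_subgraphs(G, size=4, steps=2000)
    best = max(set(picks), key=lambda s: nx.density(G.subgraph(s)))
    print(sorted(best), nx.density(G.subgraph(best)))
```

Swapping in a different interestingness function changes what gets sampled, which is the adaptability the quote is pointing at.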

Nice to see recognition that for some data sets we don’t need (or require) full enumeration of all occurrences.

Something topic map advocates need to remember when proselytizing for topic maps.

The goal is not all the information known about a subject.

The goal is all the information a user wants about a subject.

Not the same thing.

Questions:

  1. What criteria of “interestingness” would you apply in gathering data for easy access by your patrons? (3-5 pages, no citations)
  2. How would you use this technique for authoring and/or testing a topic map? (3-5 pages, no citations. Think of “testing” a topic map as its representativeness of a particular data collection.)
  3. Bibliography of material citing the paper or applying this technique.

October 12, 2010

Fast Discovery of Reliable k-terminal Subgraphs

Filed under: Bioinformatics,Graphs,Subgraphs — Patrick Durusau @ 6:20 am

Fast Discovery of Reliable k-terminal Subgraphs by Melissa Kasari, Hannu Toivonen, and Petteri Hintsanen.

Abstract:

We present a novel and efficient algorithm for solving the most reliable subgraph problem with multiple query nodes on undirected random graphs. Reliable subgraphs are useful for summarizing connectivity between given query nodes. Formally, we are given a graph G = (V, E), a set of query (or terminal) nodes Q ⊆ V, and a positive integer B. The objective is to find a subgraph H ⊆ G containing Q, such that H has at most B edges, and the probability that H is connected is maximized. Previous algorithms for the problem are either computationally demanding, or restricted to only two query nodes. Our algorithm extends a previous algorithm to handle k query nodes, where 2 ≤ k ≤ |V|. We demonstrate experimentally the usefulness of reliable k-terminal subgraphs, and the accuracy, efficiency and scalability of the proposed algorithm on real graphs derived from public biological databases.

Uses biological data from the Biomine project to demonstrate a solution to the reliable subgraph problem. Candidate subgraphs are judged on the basis of their “reliability,” that is, the probability that they keep the query nodes connected.
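
For a sense of the quantity being maximized, here is a Monte Carlo sketch that estimates the probability a candidate subgraph keeps its query (terminal) nodes connected when each edge survives independently; the edge survival probabilities are made-up attributes, and this is not the authors’ algorithm for finding such subgraphs.

```python
# Sketch: Monte Carlo estimate of the quantity the paper maximizes, i.e. the
# probability that a candidate subgraph H keeps all query (terminal) nodes
# connected when each edge survives independently with its own probability.
# The edge survival probabilities below are made-up attributes for illustration.
import random
import networkx as nx

def k_terminal_reliability(H, terminals, trials=10_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        survivors = [(u, v) for u, v, p in H.edges(data="prob")
                     if rng.random() < p]            # sample a random realization
        R = nx.Graph()
        R.add_nodes_from(H.nodes())
        R.add_edges_from(survivors)
        # connected if every terminal lies in one component
        comp = nx.node_connected_component(R, terminals[0])
        hits += all(t in comp for t in terminals)
    return hits / trials

if __name__ == "__main__":
    H = nx.path_graph(6)                             # toy candidate subgraph
    nx.set_edge_attributes(H, 0.9, "prob")           # hypothetical edge probabilities
    print(k_terminal_reliability(H, terminals=[0, 3, 5]))
```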

Wonder what it would be like to have a quality of the nodes, in addition to “connectedness,” as the basis for the reliable subgraph calculation?

Also available at CiteSeer: Fast Discovery of Reliable k-terminal Subgraphs.
