Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

September 17, 2011

Faster Approximate Pattern Matching in Compressed Repetitive Texts

Filed under: Pattern Matching — Patrick Durusau @ 8:13 pm

Faster Approximate Pattern Matching in Compressed Repetitive Texts by Travis Gagie, Pawel Gawrychowski, and Simon J. Puglisi.

Abstract:

Motivated by the imminent growth of massive, highly redundant genomic databases, we study the problem of compressing a string database while simultaneously supporting fast random access, substring extraction and pattern matching to the underlying string(s). Bille et al. (2011) recently showed how, given a straight-line program with r rules for a string s of length n, we can build an O(r)-word data structure that allows us to extract any substring s[i..j] in O(log n + j - i) time. They also showed how, given a pattern p of length m and an edit distance k ≤ m, their data structure supports finding all occ approximate matches to p in s in O(r(min(mk, k^4 + m) + log n) + occ) time. Rytter (2003) and Charikar et al. (2005) showed that r is always at least the number z of phrases in the LZ77 parse of s, and gave algorithms for building straight-line programs with O(z log n) rules. In this paper we give a simple O(z log n)-word data structure that takes the same time for substring extraction but only O(z(min(mk, k^4 + m)) + occ) time for approximate pattern matching.
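The quantity doing the work in those bounds is z, the number of phrases in the LZ77 parse of s: the more repetitive the text, the smaller z. As a rough illustration, here is a minimal Python sketch of a greedy, self-referential LZ77 phrase counter (my own toy code, not the authors' data structure, and a naive quadratic scan rather than anything suited to genomic scale):

```python
import random


def lz77_phrase_count(s: str) -> int:
    """Count the phrases z in a greedy, self-referential LZ77 parse of s.

    Each phrase is the longest prefix of the remaining text that already
    occurs starting at an earlier position (overlaps allowed), plus one
    fresh character. Naive scan, for illustration only.
    """
    i, z, n = 0, 0, len(s)
    while i < n:
        length = 0
        # Grow the copied part while s[i:i+length+1] still occurs
        # starting somewhere before position i.
        while i + length < n and s.find(s[i:i + length + 1]) < i:
            length += 1
        i += length + 1  # the copy plus one new literal character
        z += 1
    return z


if __name__ == "__main__":
    genome_like = "ACGT" * 1000  # highly repetitive, n = 4000
    print(lz77_phrase_count(genome_like))  # 5 phrases: A, C, G, T, one long copy

    random.seed(0)
    noise = "".join(random.choice("ACGT") for _ in range(4000))
    print(lz77_phrase_count(noise))  # hundreds of phrases for random text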

It occurs to me that this could be useful for bioinformatics applications of topic maps that map between data sets and the literature.

It is interesting that the authors mention redundancy in web crawls. I suspect a number of domains have highly repetitive data, should we choose to look at them that way. Markup documents, for example.
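To make the markup intuition concrete, feeding a made-up XML fragment to the phrase counter sketched above shows the repeated tag structure collapsing into copies:

```python
# Toy check of the markup intuition, reusing lz77_phrase_count from the
# sketch above on a made-up XML fragment.
rows = "".join(f"<row><id>{i}</id><val>x</val></row>" for i in range(200))
xml = f"<table>{rows}</table>"
print(len(xml), lz77_phrase_count(xml))
# The phrase count comes out a small fraction of the length: the repeated
# tags are captured by copies; mostly the varying ids cost new phrases.
```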
