Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

May 18, 2012

Using BerkeleyDB to Create a Large N-gram Table

Filed under: BerkeleyDB,N-Gram,Natural Language Processing,Wikipedia — Patrick Durusau @ 3:16 pm

Using BerkeleyDB to Create a Large N-gram Table by Richard Marsden.

From the post:

Previously, I showed you how to create N-gram frequency tables from large text datasets. Unfortunately, when used on very large datasets such as the English-language Wikipedia and Gutenberg corpora, memory limitations restricted these scripts to unigrams. Here, I show you how to use the BerkeleyDB database to create N-gram tables for these large datasets.

Large datasets such as the Wikipedia and Gutenberg English language corpora cannot be used to create N-gram frequency tables using the previous script due to the script’s large in-memory requirements. The solution is to create the frequency table as a disk-based dataset. For this, the BerkeleyDB database in key-value mode is ideal. This is an open source “NoSQL” library which supports a disk based database and in-memory caching. BerkeleyDB can be downloaded from the Oracle website, and also ships with a number of Linux distributions, including Ubuntu. To use BerkeleyDB from Python, you will need the bsddb3 package. This is included with Python 2.* but is an additional download for Python 3 installations.
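To give a concrete sense of the approach, here is a minimal sketch (not Richard's script) of accumulating bigram counts in a disk-based BerkeleyDB table through the bsddb3 package. The file name "bigrams.db", the regex tokenizer, and storing counts as byte strings are illustrative assumptions, not details from the post.

    # Minimal sketch: a disk-based bigram frequency table using bsddb3
    # (BerkeleyDB). File name, tokenizer, and count encoding are assumptions.
    import re
    import bsddb3

    def add_bigrams(text, table):
        tokens = re.findall(r"[a-z']+", text.lower())
        for w1, w2 in zip(tokens, tokens[1:]):
            key = ("%s %s" % (w1, w2)).encode("utf-8")
            # BerkeleyDB stores raw bytes, so counts are kept as byte strings
            count = int(table[key]) + 1 if key in table else 1
            table[key] = str(count).encode("utf-8")

    table = bsddb3.btopen("bigrams.db", "c")  # B-tree file, created if missing
    add_bigrams("the quick brown fox jumps over the lazy dog", table)
    table.close()

Because the table lives on disk rather than in a Python dictionary, the counts can grow well past available RAM, with BerkeleyDB's in-memory cache absorbing most of the read-modify-write traffic.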

Richard promises to make the resulting data sets available as an Azure service. Sample code, etc., will be posted to his blog.

Another Wikipedia-based analysis.

April 11, 2012

Calculating Word and N-Gram Statistics from the Gutenberg Corpus

Filed under: Gutenberg Corpus,N-Gram,NLTK,Statistics — Patrick Durusau @ 6:16 pm

Calculating Word and N-Gram Statistics from the Gutenberg Corpus by Richard Marsden.

From the post:

Following on from the previous article about scanning text files for word statistics, I shall extend this to use real large corpora. First we shall use this script to create statistics for the entire Gutenberg English language corpus. Next I shall do the same with the entire English language Wikipedia.
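As a rough illustration of the kind of counting involved (not the post's actual script), the sketch below tallies word and bigram frequencies over the small sample of the Gutenberg corpus that ships with NLTK; the full English-language corpora Richard processes are far larger.

    # Minimal sketch: word and bigram counts over NLTK's Gutenberg sample.
    from collections import Counter
    import nltk
    from nltk.corpus import gutenberg

    nltk.download("gutenberg", quiet=True)

    unigrams = Counter()
    bigrams = Counter()
    for fileid in gutenberg.fileids():
        words = [w.lower() for w in gutenberg.words(fileid) if w.isalpha()]
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))

    print(unigrams.most_common(10))
    print(bigrams.most_common(10))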

A “get your feet wet” sort of exercise with the script included.

The Gutenberg project isn’t “big data,” but it is more than your usual inbox.

Think of it as learning about the data set before applying more sophisticated algorithms.
