Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

August 9, 2016

ACHE Focused Crawler

Filed under: ElasticSearch,Record Linkage,Webcrawler — Patrick Durusau @ 4:51 pm

ACHE Focused Crawler

From the webpage:

ACHE is an implementation of a focused crawler. A focused crawler is a web crawler that collects Web pages that satisfy some specific property. ACHE differs from other crawlers in that it includes page classifiers that allow it to distinguish between relevant and irrelevant pages in a given domain. The page classifier can range from a simple regular expression (one that matches every page containing a specific word, for example) to a sophisticated machine-learned classification model. ACHE also includes link classifiers, which allow it to decide the best order in which links should be downloaded so that relevant content is found as fast as possible, without wasting resources on downloading irrelevant content.
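The two ideas in that paragraph, a page classifier that decides relevance and a link classifier that orders the crawl frontier, are easy to sketch. The sketch below is mine, in Python, and is only a conceptual illustration: ACHE itself is a Java project with its own classifier configuration, so none of these names come from its API.

import heapq
import re

# Conceptual sketch only: a regex page classifier plus a priority-ordered
# frontier. This is NOT ACHE's API; it just illustrates the two ideas named
# in the quoted description.

RELEVANT = re.compile(r"\b(topic maps?|semantic diversity)\b", re.IGNORECASE)

def page_is_relevant(html_text: str) -> bool:
    """Simplest possible page classifier: a regular expression match."""
    return bool(RELEVANT.search(html_text))

class Frontier:
    """Priority queue of links; lower score means fetched sooner."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker keeps insertion order stable

    def add(self, url: str, score: float) -> None:
        heapq.heappush(self._heap, (score, self._counter, url))
        self._counter += 1

    def next_url(self) -> str:
        return heapq.heappop(self._heap)[2]

def score_link(anchor_text: str) -> float:
    """Trivial link classifier: prefer links whose anchor text looks relevant."""
    return 0.0 if RELEVANT.search(anchor_text or "") else 1.0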


The inclusion of machine learning (Weka) and robust indexing (ElasticSearch) means this will take more than a day or two to explore.

Certainly well suited to exploring all the web-accessible resources on sufficiently narrow topics.

I was thinking about doing a “9 Million Pages of Donald Trump” (think The Nine Billion Names of God), but a quick sanity check showed there are already more than 230 million such pages.

Perhaps by the election I could produce “9 Million Pages With Favorable Comments About Donald Trump.” Perhaps if I don’t dedupe the pages found by searching, the count would go that high.

Do other topics for comprehensive web searching come to mind?

PS: The many names of record linkage come to mind. I think I have thirty (30) or so.

December 12, 2015

Previously Unknown Hip Replacement Side Effect: Web Crawler Writing In Python

Filed under: Python,Search Engines,Searching,Webcrawler — Patrick Durusau @ 8:02 pm

Crawling the web with Python 3.x by Doug Mahugh.

From the post:

These days, most everyone is familiar with the concept of crawling the web: a piece of software that systematically reads web pages and the pages they link to, traversing the world-wide web. It’s what Google does, and countless tech firms crawl web pages to accomplish tasks ranging from searches to archiving content to statistical analyses and so on. Web crawling is a task that has been automated by developers in every programming language around, many times — for example, a search for web crawling source code yields well over a million hits.

So when I recently came across a need to crawl some web pages for a project I’ve been working on, I figured I could just go find some source code online and hack it into what I need. (Quick aside: the project is a Python library for managing EXIF metadata on digital photos. More on that in a future blog post.)

But I spent a couple of hours searching and playing with the samples I found, and didn’t get anywhere. Mostly because I’m working in Python version 3, and the most popular Python web crawling code is Scrapy, which is only available for Python 2. I found a few Python 3 samples, but they all seemed to be either too trivial (not avoiding re-scanning the same page, for example) or too needlessly complex. So I decided to write my own Python 3.x web crawler, as a fun little learning exercise and also because I need one.

In this blog post I’ll go over how I approached it and explain some of the code, which I posted on GitHub so that others can use it as well.
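Doug’s actual code is on GitHub; the fragment below is not his, just a minimal standard-library sketch of the point he raises about re-scanning: a visited set keyed on de-fragmented URLs keeps the crawler from fetching the same page twice. Politeness details such as robots.txt handling and rate limiting are deliberately left out.

# Minimal single-host crawler sketch (standard library only). Not Doug's code,
# only an illustration of the "don't re-scan the same page" point.
import urllib.parse
import urllib.request
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=50):
    host = urllib.parse.urlparse(start_url).netloc
    visited, queue = set(), [start_url]
    while queue and len(visited) < max_pages:
        url = queue.pop(0)
        url, _ = urllib.parse.urldefrag(url)   # drop #fragments before dedup
        if url in visited:
            continue
        visited.add(url)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if "text/html" not in resp.headers.get("Content-Type", ""):
                    continue
                html = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urllib.parse.urljoin(url, link)
            if urllib.parse.urlparse(absolute).netloc == host:
                queue.append(absolute)
    return visited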

Doug has been writing publicly about his hip replacement surgery, so I don’t think this has any privacy issues. 😉

I am interested to see what he writes once he is fully recovered.

My contacts at the American Medical Association disavow any knowledge of hip replacement surgery driving patients to write in Python and/or to write web crawlers.

I suppose there could be liability implications, especially for C/C++ programmers who, following such surgery, lose all their programming skills except for Python.

Still, glad to hear Doug has been making great progress and hope that it continues!

December 11, 2014

Wouldn’t it be fun to build your own Google?

Wouldn’t it be fun to build your own Google? by Martin Kleppmann.

Martin writes:

Imagine you had your own copy of the entire web, and you could do with it whatever you want. (Yes, it would be very expensive, but we’ll get to that later.) You could do automated analyses and surface the results to users. For example, you could collate the “best” articles (by some definition) written on many different subjects, no matter where on the web they are published. You could then create a tool which, whenever a user is reading something about one of those subjects, suggests further reading: perhaps deeper background information, or a contrasting viewpoint, or an argument on why the thing you’re reading is full of shit.

Unfortunately, at the moment, only Google and a small number of other companies that have crawled the web have the resources to perform such analyses and build such products. Much as I believe Google try their best to be neutral, a pluralistic society requires a diversity of voices, not a filter bubble controlled by one organization. Surely there are people outside of Google who want to work on this kind of thing. Many a start-up could be founded on the basis of doing useful things with data extracted from a web crawl.

He goes on to discuss current search efforts such as Common Crawl and Wayfinder before hitting full stride with his suggestion for a distributed web search engine. Painting in the broadest of strokes, Martin makes it sound almost plausible to contemplate such an effort.

While conceding that the technological issues would be many, Martin contends that the payoff would be immense, though in ways we won’t know until it is available. I suspect Martin is right, but if so, then we should be able to see a similar impact from Common Crawl. Yes?

Not to rain on a parade I would like to join, but extracting value from a web crawl like Common Crawl is not a guaranteed thing. A more complete crawl of the web only multiplies those problems, it doesn’t make them easier to solve.

On the whole I think the idea of a distributed crawl of the web is a great idea, but while that develops, we best hone our skills at extracting value from the partial crawls that already exist.

June 8, 2014

Laboratory for Web Algorithmics

Filed under: Algorithms,Search Algorithms,Search Engines,Webcrawler — Patrick Durusau @ 2:53 pm

Laboratory for Web Algorithmics

From the homepage:

The Laboratory for Web Algorithmics (LAW) was established in 2002 at the Dipartimento di Scienze dell’Informazione (now merged in the Computer Science Department) of the Università degli studi di Milano.

The LAW is part of the NADINE FET EU project.

Research at LAW concerns all algorithmic aspects of the study of the web and of social networks. More in detail…

The details include:

  • High-performance web crawling: including an open source web crawler
  • Compression of web graphs and social networks: compact representation of crawled web graphs (a sketch of the underlying idea follows this list)
  • Analysis of web graphs and social networks: research and algorithms for exploring web graphs
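On the compression point, the intuition behind the LAW’s results is that successor lists in web graphs are sorted and locally clustered, so storing gaps with a variable-length code is dramatically smaller than storing raw node ids. The toy sketch below shows only that intuition; it is not the BV format actually used by the WebGraph framework.

# Toy sketch of why web graphs compress so well: gap-encode sorted adjacency
# lists with a variable-byte code. Not the LAW/WebGraph BV format.

def vbyte_encode(n: int) -> bytes:
    """Variable-byte encode a non-negative integer, 7 bits per byte."""
    out = bytearray()
    while True:
        out.append(n & 0x7F)
        n >>= 7
        if n == 0:
            break
    out[-1] |= 0x80          # high bit marks the last byte
    return bytes(out)

def encode_successors(successors: list[int]) -> bytes:
    """Gap-encode a sorted adjacency list."""
    out, prev = bytearray(), -1
    for node in sorted(successors):
        out += vbyte_encode(node - prev - 1)   # gaps are small for clustered ids
        prev = node
    return bytes(out)

# Example: neighbours of one page, identified by dense integer ids.
compressed = encode_successors([100384, 100385, 100391, 100400])
print(len(compressed), "bytes instead of", 4 * 4)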

Deeply impressive project and one with several papers and resources that I will be covering in more detail in future posts.

I first saw this in a tweet by Network Fact.

February 23, 2014

Common Crawl’s Move to Nutch

Filed under: Nutch,Search Engines,Webcrawler — Patrick Durusau @ 2:30 pm

Common Crawl’s Move to Nutch by Jordan Mendelson.

From the post:

Last year we transitioned from our custom crawler to the Apache Nutch crawler to run our 2013 crawls as part of our migration from our old data center to the cloud.

Our old crawler was highly tuned to our data center environment where every machine was identical with large amounts of memory, hard drives and fast networking.

We needed something that would allow us to do web-scale crawls of billions of webpages and would work in a cloud environment where we might run on heterogeneous machines with differing amounts of memory, CPU and disk space depending on price, with VMs that might go up and down, and with varying levels of networking performance.

Before you hand roll a custom web crawler, you should read this short but useful report on the Common Crawl experience with Nutch.

February 9, 2014

Snowden Used Low-Cost Tool to Best N.S.A.

Filed under: Cybersecurity,Humor,NSA,Web Scrapers,Webcrawler — Patrick Durusau @ 4:47 pm

Snowden Used Low-Cost Tool to Best N.S.A. by David E. Sanger and Eric Schmitt.

From the post:

Intelligence officials investigating how Edward J. Snowden gained access to a huge trove of the country’s most highly classified documents say they have determined that he used inexpensive and widely available software to “scrape” the National Security Agency’s networks, and kept at it even after he was briefly challenged by agency officials.

Using “web crawler” software designed to search, index and back up a website, Mr. Snowden “scraped data out of our systems” while he went about his day job, according to a senior intelligence official. “We do not believe this was an individual sitting at a machine and downloading this much material in sequence,” the official said. The process, he added, was “quite automated.”

The findings are striking because the N.S.A.’s mission includes protecting the nation’s most sensitive military and intelligence computer systems from cyberattacks, especially the sophisticated attacks that emanate from Russia and China. Mr. Snowden’s “insider attack,” by contrast, was hardly sophisticated and should have been easily detected, investigators found.

Moreover, Mr. Snowden succeeded nearly three years after the WikiLeaks disclosures, in which military and State Department files, of far less sensitivity, were taken using similar techniques.

Mr. Snowden had broad access to the N.S.A.’s complete files because he was working as a technology contractor for the agency in Hawaii, helping to manage the agency’s computer systems in an outpost that focuses on China and North Korea. A web crawler, also called a spider, automatically moves from website to website, following links embedded in each document, and can be programmed to copy everything in its path.
….

A highly amusing article that explains the ongoing Snowden leaks and perhaps offers a basis for projecting when the Snowden leaks will stop: not any time soon! The suspicion is that Snowden may have copied 1.7 million files.

Not with drag-n-drop but using a program!

I’m sure that was news to a lot of managers in both industry and government.

Now of course the government is buttoning up all the information (allegedly), which will hinder access to materials by those with legitimate need.

It’s one thing to have these “true to your school” types in management at agencies where performance isn’t expected or tolerated. But in a spy agency that you are trying to use to save your citizens from themselves, that’s just self-defeating.

The real solution for the NSA and any other agency where you need high grade operations is to institute an Apache meritocracy process to manage both projects and to fill management slots. It would not be open source or leak to the press, at least not any more than it does now.

The upside would be the growth, over a period of years, of highly trained and competent personnel who would institute procedures that assisted with their primary functions, not simply to enable the hiring of contractors.

It’s worth a try, the NSA could hardly do worse than it is now.

PS: I do think the NSA is violating the U.S. Constitution, but the main source of my ire is their incompetence in doing so. Gathering up phone numbers because they are easy to connect, for example. Drunks under the streetlight.

PPS: This is also a reminder that it isn’t the cost/size of the tool but the effectiveness with which it is used that makes a real difference.

January 7, 2014

Small Crawl

Filed under: Common Crawl,Data,Webcrawler,WWW — Patrick Durusau @ 7:40 pm

meanpath Jan 2014 Torrent – 1.6TB of crawl data from 115m websites

From the post:

October 2012 was the official kick off date for development of meanpath – our source code search engine. Our goal was to crawl as much of the web as we could using mostly open source software and a decent (although not Google level) financial investment. Outside of many substantial technical challenges, we also needed to acquire a sizeable list of seed domains as the starting block for our crawler. Enter Common Crawl which is an open crawl of the web that can be accessed and analysed by everyone. Of specific interest to us was the Common Crawl URL Index which we combined with raw domain zone files and domains from the Internet Census 2012 to create our master domain list.

We are firm supporters of open access to information, which is why we have chosen to release a free crawl of over 115 million sites. This index contains only the front page HTML, robots.txt, favicons, and server headers of every crawlable .com, .net, .org, .biz, .info, .us, .mobi, and .xxx domain that was in the 2nd of January 2014 zone file. It does not execute or follow JavaScript or CSS, so it is not 100% equivalent to what you see when you click on view source in your browser. The crawl itself started at 2:00am UTC on the 4th of January 2014 and finished the same day.

Get Started:
You can access the meanpath January 2014 Front Page Index in two ways:

  1. Bittorrent – We have set up a number of seeds that you can download from using this descriptor. Please seed if you can afford the bandwidth and make sure you have 1.6TB of disk space free if you plan on downloading the whole crawl.
  2. Web front end – If you are not interested in grappling with the raw crawl files you can use our web front end to do some sample searches.

Data Set Statistics:

  1. 149,369,860 seed domains. We started our crawl with a full zone file list of all domains in the .com (112,117,307), .net (15,226,877), .org (10,396,351), .info (5,884,505), .us (1,804,653), .biz (2,630,676), .mobi (1,197,682) and .xxx (111,809) top level domains (TLDs), for a total of 149,369,860 domains. We have a much larger set of domains that covers all TLDs, but very few allow you to download a zone file from the registrar, so we cannot guarantee 100% coverage. For statistical purposes, having a defined 100% starting point is necessary.
  2. 115,642,924 successfully crawled domains. Of the 149,369,860 domains, only 115,642,924 could be crawled, a coverage rate of 77.42%.
  3. 476 minutes of crawling. It took us a total of 476 minutes to complete the crawl, which was done in 5 passes. If a domain could not be crawled in the first pass, we tried 4 more passes before giving up (those excluded by robots.txt are not retried). The most common reason a domain cannot be crawled is the lack of any valid A record for domain.com or www.domain.com.
  4. 1,500GB of uncompressed data. This has been compressed down to 352.40GB using gzip for ease of download.
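For a sense of what one record in a front-page-only crawl like this involves, here is a hedged sketch: it checks the A-record point from item 3 above, then pulls the front page HTML, server headers, robots.txt and favicon. The requests library and all field names are my own assumptions, not meanpath’s code or format.

# Sketch of one per-domain record: headers, front page HTML, robots.txt and
# favicon, with a resolution pre-check. Illustration only, not meanpath's code.
import socket
import requests

def has_address(domain: str) -> bool:
    """Most common crawl failure: neither domain nor www.domain resolves."""
    for name in (domain, "www." + domain):
        try:
            socket.getaddrinfo(name, 80, proto=socket.IPPROTO_TCP)
            return True
        except socket.gaierror:
            continue
    return False

def snapshot(domain: str, timeout: int = 10) -> dict:
    record = {"domain": domain}
    if not has_address(domain):
        record["error"] = "no valid A/AAAA record"
        return record
    base = "http://" + domain
    try:
        front = requests.get(base, timeout=timeout)
        record["headers"] = dict(front.headers)   # server headers
        record["html"] = front.text               # front page only; no JS/CSS fetches
    except requests.RequestException as exc:
        record["error"] = str(exc)
        return record
    for name, path in (("robots_txt", "/robots.txt"), ("favicon", "/favicon.ico")):
        try:
            resp = requests.get(base + path, timeout=timeout)
            record[name] = resp.content if resp.ok else None
        except requests.RequestException:
            record[name] = None
    return record

print(snapshot("example.com").get("headers", {}).get("Server"))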

I just scanned the Net for 2TB hard drives and the average price runs between $80 and $100. There doesn’t seem to be much difference between internal and external.

The only issue I foresee is that some ISPs limit downloads. You can always tunnel to another box using SSH but that requires enough storage on the other box as well.

Be sure to check out meanpath’s search capabilities.

Perhaps the day of boutique search engines is getting closer!

October 20, 2013

Crawl Anywhere

Filed under: Search Engines,Search Interface,Solr,Webcrawler — Patrick Durusau @ 5:59 pm

Crawl Anywhere 4.0.0-release-candidate available

From the Overview:

What is Crawl Anywhere?

Crawl Anywhere allows you to build vertical search engines. Crawl Anywhere includes:

  • a Web Crawler with a powerful Web user interface
  • a document processing pipeline
  • a Solr indexer
  • a full featured and customizable search application

You can see a typical use of all components in this diagram.

Why was Crawl Anywhere created?

Crawl Anywhere was originally developed to index 5,400 web sites (more than 10,000,000 pages) in Apache Solr for the Hurisearch search engine: http://www.hurisearch.org/. During this project, various crawlers were evaluated (Heritrix, Nutch, …), but one key feature was missing: a user-friendly web interface to manage the Web sites to be crawled, with their specific crawl rules. Mainly for this reason, we decided to develop our own Web crawler. Why did we choose the name "Crawl Anywhere"? This name may appear a little overstated, but crawling any source type (Web, database, CMS, …) is a real objective, and Crawl Anywhere was designed to make it easy to implement new source connectors.
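The "Solr indexer" component listed above is the easiest piece to picture. Here is a hedged sketch of that step, pushing crawled pages into Solr over its JSON update handler with the requests library; the core name, field names and URL are assumptions to adapt to your own schema, not Crawl Anywhere’s configuration.

# Hedged sketch of a Solr indexing step. Core name ("crawl"), field names and
# the Solr URL are placeholders; adjust to your schema.
import requests

SOLR_UPDATE = "http://localhost:8983/solr/crawl/update?commit=true"

def index_pages(pages):
    """pages: iterable of dicts like {"id": url, "title": ..., "content": ...}"""
    docs = [
        {"id": p["id"], "title": p.get("title", ""), "content": p.get("content", "")}
        for p in pages
    ]
    resp = requests.post(SOLR_UPDATE, json=docs, timeout=30)
    resp.raise_for_status()
    return resp.json()

index_pages([{"id": "http://example.com/", "title": "Example", "content": "Hello"}])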

Can you create a better search corpus for some domain X than Google?

Less noise and trash?

More high quality content?

Cross-referencing? (Not "more like this" but meaningful cross-references.)

There is only one way to find out!

Crawl Anywhere will help you with the technical side of creating a search corpus.

What it won’t help with is developing the strategy to build and maintain such a corpus.

Interested in how you go beyond creating a subject specific list of resources?

A list that leaves a reader to sort through the chaff. Time and time again.

Pointers, suggestions, comments?

May 27, 2013

Crawl-Anywhere

Filed under: Search Engines,Searching,Solr,Webcrawler — Patrick Durusau @ 1:24 pm

Crawl-Anywhere

From the webpage:

April 2013 – Starting with version 4.0, Crawl-Anywhere becomes an open-source project. The current version is 4.0.0-alpha.

Stable version 3.x is still available at http://www.crawl-anywhere.com/

(…)

Crawl Anywhere is mainly a web crawler. However, Crawl-Anywhere includes all the components needed to build a vertical search engine.

Crawl Anywhere includes a Web crawler with a Web user interface, a document processing pipeline, a Solr indexer, and a search application.

Project home page: http://www.crawl-anywhere.com/

A web crawler is a program that discovers and reads all HTML pages or documents (HTML, PDF, Office, …) on a web site in order, for example, to index these data and build a search engine (like Google). Wikipedia provides a great description of what a Web crawler is: http://en.wikipedia.org/wiki/Web_crawler.

If you are gathering “very valuable intel” as in Snow Crash, a search engine will help.

Not do the heavy lifting but help.

August 25, 2012

Heritrix

Filed under: Webcrawler — Patrick Durusau @ 10:33 am

Heritrix

From the wiki page:

This is the public wiki for the Heritrix archival crawler project.

Heritrix is the Internet Archive’s open-source, extensible, web-scale, archival-quality web crawler project.

Heritrix (sometimes spelled heretrix, or misspelled or mis-said as heratrix/heritix/heretix/heratix) is an archaic word for heiress (woman who inherits). Since our crawler seeks to collect and preserve the digital artifacts of our culture for the benefit of future researchers and generations, this name seemed apt.

All topical contributions to this wiki (corrections, proposals for new features, new FAQ items, etc.) are welcome! Register using the link near the top-right corner of this page.

A tool for creating a customized search collection, or reference code for a web crawler project.

I first saw this at Pete Warden’s Five Short Links for 24 August 2012.

January 27, 2012

Building a Scalable Web Crawler with Hadoop

Filed under: Hadoop,Webcrawler — Patrick Durusau @ 4:31 pm

Building a Scalable Web Crawler with Hadoop

Ahad Rana of Common Crawl presents an architectural view of a web crawler based on Hadoop.

You can access the data from Common Crawl.

But the architecture notes may be useful if you decide to crawl a sub-part of the web and/or you need to crawl “deep web” data in your organization.
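If you only want to sample the crawl rather than run your own, one route today is the Common Crawl URL index plus ranged reads of the underlying WARC files. The sketch below assumes the public CDX index server and the requests library; the collection name is a placeholder (current collections are listed at https://index.commoncrawl.org/collinfo.json).

# Hedged sketch: look up captures of a URL in the Common Crawl CDX index,
# then fetch one raw WARC record with an HTTP Range request. The collection
# name is an assumption; check collinfo.json for current ones.
import json
import requests

INDEX = "https://index.commoncrawl.org/CC-MAIN-2024-10-index"

def lookup(url: str):
    resp = requests.get(INDEX, params={"url": url, "output": "json"}, timeout=30)
    resp.raise_for_status()
    return [json.loads(line) for line in resp.text.splitlines()]

def fetch_record(capture: dict) -> bytes:
    """Return the raw (gzipped) WARC record for one capture."""
    start = int(capture["offset"])
    end = start + int(capture["length"]) - 1
    warc_url = "https://data.commoncrawl.org/" + capture["filename"]
    resp = requests.get(warc_url, headers={"Range": f"bytes={start}-{end}"}, timeout=60)
    resp.raise_for_status()
    return resp.content            # decompress with gzip/warcio to inspect

captures = lookup("example.com/")
print(len(captures), "captures;", captures[0]["filename"] if captures else "none")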

December 7, 2011

YaCy Search Engine

Filed under: Search Engines,Webcrawler — Patrick Durusau @ 8:11 pm

YaCy – Decentralized Web Search

Has anyone seen this?

From the homepage:

YaCy is a free search engine that anyone can use to build a search portal for their intranet or to help search the public internet. When contributing to the world-wide peer network, the scale of YaCy is limited only by the number of users in the world and can index billions of web pages. It is fully decentralized, all users of the search engine network are equal, the network does not store user search requests and it is not possible for anyone to censor the content of the shared index. We want to achieve freedom of information through a free, distributed web search which is powered by the world’s users.

Limited demo interface: http://search.yacy.net/

Interesting idea.

It would be more interesting if it used a language that permitted dynamic updating of software while it is running. Otherwise, you are going to have the YaCy search engine you installed and nothing more.

Reportedly, Google improves its search algorithm many times every quarter. How many of those changes are ad-driven, they don’t say.

The documentation for YaCy is slim at best, particularly on technical details. For example, it uses a NoSQL database. OK, a custom one or one of the standard ones? I could go on, but it would not produce any answers. As I explore the software, I will post what I find out about it.

December 20, 2010

Building blocks of a scalable webcrawler

Filed under: Indexing,Search Engines,Webcrawler — Patrick Durusau @ 4:41 am

Building blocks of a scalable webcrawler

From Marc Seeger’s post about his thesis:

This thesis documents my experiences trying to handle over 100 million sets of data while keeping them searchable. All of that happens while collecting and analyzing about 100 new domains per second. It covers topics from the different Ruby VMs (JRuby, Rubinius, YARV, MRI) to different storage backends (Riak, Cassandra, MongoDB, Redis, CouchDB, Tokyo Cabinet, MySQL, Postgres, …) and the data structures that they use in the background.

Questions:

  1. What components would need to be added to make this a semantic crawling project? (3-5 pages, citations)
  2. What scalability issues would semantic crawling introduce? (3-5 pages, citations)
  3. Design a configurable, scalable, semantic crawler. (Project)
