July 2014 Crawl Data Available, by Stephen Merity.
From the post:
The July crawl of 2014 is now available! The new dataset is over 266TB in size containing approximately 4.05 billion webpages. The new data is located in the aws-publicdatasets bucket at /common-crawl/crawl-data/CC-MAIN-2014-23/.
To assist with exploring and using the dataset, we’ve provided gzipped files that list:
- all segments (CC-MAIN-2014-23/segment.paths.gz)
- all WARC files (CC-MAIN-2014-23/warc.paths.gz)
- all WAT files (CC-MAIN-2014-23/wat.paths.gz)
- all WET files (CC-MAIN-2014-23/wet.paths.gz)
By simply prepending either s3://aws-publicdatasets/ or https://aws-publicdatasets.s3.amazonaws.com/ to each line, you end up with the S3 and HTTP paths respectively.
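A minimal sketch of that step in Python (standard library only; it assumes warc.paths.gz has already been downloaded locally):

```python
import gzip

# Prefixes from the announcement: prepend one of these to each line
# of the .paths.gz listings to get a full S3 or HTTP URL.
S3_PREFIX = "s3://aws-publicdatasets/"
HTTP_PREFIX = "https://aws-publicdatasets.s3.amazonaws.com/"

# Assumes warc.paths.gz has already been fetched to the working directory.
with gzip.open("warc.paths.gz", "rt") as listing:
    for line in listing:
        path = line.strip()
        print(S3_PREFIX + path)
        print(HTTP_PREFIX + path)
```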
We’ve also released a Python library, gzipstream, that should enable easier access to and processing of the Common Crawl dataset. We’d love for you to try it out!
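Roughly following the gzipstream project's README, the library pairs with boto (the Python AWS SDK of the day) and the warc parsing library to stream records straight out of S3. A hedged sketch along those lines; the WARC key below is a placeholder to be filled in with a real path from warc.paths.gz:

```python
import boto
import warc  # WARC record parser; accepts file-like objects

from boto.s3.key import Key
from gzipstream import GzipStreamFile

# Anonymous connection to the public bucket (no AWS credentials needed).
conn = boto.connect_s3(anon=True)
pds = conn.get_bucket('aws-publicdatasets')

# Placeholder key: substitute any real path taken from warc.paths.gz.
k = Key(pds, 'common-crawl/crawl-data/CC-MAIN-2014-23/segments/.../....warc.gz')

# GzipStreamFile decompresses the multi-member gzip stream on the fly,
# so records can be processed without downloading the whole file first.
f = warc.WARCFile(fileobj=GzipStreamFile(k))
for record in f:
    if record['WARC-Type'] == 'response':
        print(record['WARC-Target-URI'])
```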
Thanks again to blekko for their ongoing donation of URLs for our crawl!
Just in case you have exhausted all the possibilities with the April Crawl Data. 😉
Comparing the two crawls:
- April: 183TB, approximately 2.6 billion webpages
- July: 266TB, approximately 4.05 billion webpages
Just me perhaps, but with roughly 83TB and 1.45 billion more pages, I would say there is new material in the July crawl.
The additional content could be CIA, FBI, or NSA honeypots or broken firewalls, but I rather doubt it.
Curious: how would you detect a honeypot from crawl data? I'm thinking a daily honeypot report could be a viable product for some market segment.