August 2014 Crawl Data Available by Stephen Merity.
From the post:
The August 2014 crawl is now available! The new dataset is over 200TB in size and contains approximately 2.8 billion webpages. The data is located in the aws-publicdatasets bucket at /common-crawl/crawl-data/CC-MAIN-2014-35/.
To assist with exploring and using the dataset, we’ve provided gzipped files that list:
- all segments (CC-MAIN-2014-35/segment.paths.gz)
- all WARC files (CC-MAIN-2014-35/warc.paths.gz)
- all WAT files (CC-MAIN-2014-35/wat.paths.gz)
- all WET files (CC-MAIN-2014-35/wet.paths.gz)
By prepending either s3://aws-publicdatasets/ or https://aws-publicdatasets.s3.amazonaws.com/ to each line, you get the S3 and HTTP paths respectively.
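As a minimal sketch of that step in Python (assuming warc.paths.gz has already been downloaded locally; the file name and output handling are illustrative):

```python
import gzip

S3_PREFIX = "s3://aws-publicdatasets/"
HTTP_PREFIX = "https://aws-publicdatasets.s3.amazonaws.com/"

# Each line of the listing is a relative path such as
# common-crawl/crawl-data/CC-MAIN-2014-35/segments/.../warc/....warc.gz
with gzip.open("warc.paths.gz", "rt") as listing:
    for line in listing:
        path = line.strip()
        if not path:
            continue
        print(S3_PREFIX + path)    # S3 path
        print(HTTP_PREFIX + path)  # HTTP path
```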
Thanks again to blekko for their ongoing donation of URLs for our crawl!
Have you considered diffing the same webpages from different crawls?
Just curious. It could provide empirical evidence of which websites are stable and which ones have content that could change out from under you.
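A rough sketch of what such a diff might look like, assuming the third-party warcio library and two locally downloaded WARC files, one from each crawl (the file names are hypothetical):

```python
import hashlib
from warcio.archiveiterator import ArchiveIterator  # pip install warcio

def digest_by_url(warc_path):
    """Map each response URL in a WARC file to an MD5 digest of its payload."""
    digests = {}
    with open(warc_path, "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type != "response":
                continue
            url = record.rec_headers.get_header("WARC-Target-URI")
            payload = record.content_stream().read()
            digests[url] = hashlib.md5(payload).hexdigest()
    return digests

# Hypothetical file names: one WARC file from each of two crawls.
old = digest_by_url("crawl-2014-23.warc.gz")
new = digest_by_url("crawl-2014-35.warc.gz")

common = set(old) & set(new)
changed = [url for url in common if old[url] != new[url]]
print(f"{len(common)} URLs in both crawls, {len(changed)} changed")
```

Aggregated per site, a count like this would give a first cut at which domains are stable between crawls and which churn.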