"Common Crawl produces and maintains a repository of web crawl data that is openly accessible to everyone. The crawl currently covers 5 billion pages and the repository includes valuable metadata. The crawl data is stored by Amazon’s S3 service, allowing it to be bulk downloaded as well as directly accessed for map-reduce processing in EC2. This makes wholesale extraction, transformation, and analysis of web data cheap and easy. Small startups or even individuals can now access high quality crawl data that was previously only available to large search engine corporations.
For more information, please see the following pages: Processing Pipeline and Accessing the Data.
Please note that use of the Common Crawl site and/or data constitutes your binding acceptance of our Terms of Use."
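Because the corpus sits in a publicly readable S3 bucket, a small script is enough to pull a file down for local inspection. The sketch below uses Python with boto3 and an anonymous (unsigned) client; the bucket name `commoncrawl` reflects the current public bucket, and the object key shown is purely illustrative, since real WARC paths are published in the per-crawl path listings rather than guessed.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous client: the Common Crawl bucket is publicly readable,
# so no AWS credentials are needed for a simple download.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

BUCKET = "commoncrawl"
# Illustrative key only -- look up actual archive paths in the
# crawl's published path listings before downloading.
KEY = "crawl-data/CC-MAIN-2023-50/segments/example/warc/example.warc.gz"

s3.download_file(BUCKET, KEY, "example.warc.gz")
print("downloaded example.warc.gz")
```

The same bucket can be read directly from EC2 jobs, which is what makes in-place MapReduce-style processing practical without first copying the data out of AWS.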