
Common Crawl

From Wikipedia, the free encyclopedia
Common Crawl
Type of business: 501(c)(3) non-profit
Founded: 2007
Headquarters: San Francisco, California; Los Angeles, California, United States
Founder(s): Gil Elbaz
Key people: Peter Norvig, Rich Skrenta, Eva Ho
URL: commoncrawl.org
Content license: Apache 2.0 (software)

Common Crawl is a nonprofit 501(c)(3) organization that crawls the web and freely provides its archives and datasets to the public.[1][2] Common Crawl's web archive consists of petabytes of data collected since 2008.[3] It generally completes a new crawl every month.[4]

Common Crawl was founded by Gil Elbaz.[5] Advisors to the non-profit include Peter Norvig and Joi Ito.[6] The organization's crawlers respect nofollow and robots.txt policies. Open source code for processing Common Crawl's data set is publicly available.
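Open-source tooling makes the archives straightforward to work with. As a minimal sketch rather than Common Crawl's own processing code, the following Python snippet iterates over the response records of a single downloaded WARC file using the open-source warcio library; the file name is a placeholder.

    # A minimal sketch (not Common Crawl's official tooling): iterate the
    # response records of one downloaded Common Crawl WARC file using the
    # open-source warcio library. The file name below is a placeholder.
    from warcio.archiveiterator import ArchiveIterator

    with open("CC-MAIN-example.warc.gz", "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type == "response":
                url = record.rec_headers.get_header("WARC-Target-URI")
                body = record.content_stream().read()
                print(url, len(body))

In practice each crawl is split across many such WARC files, and the same loop is run over the files listed in the crawl's path manifests.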

The Common Crawl dataset includes copyrighted work and is distributed from the US under fair use claims. Researchers in other countries have used techniques such as shuffling sentences or referencing the Common Crawl dataset to work around copyright law in other legal jurisdictions.[7]

English is the primary language for 46% of documents in the March 2023 version of the Common Crawl dataset. The next most common primary languages are German, Russian, Japanese, French, Spanish and Chinese, each with less than 6% of documents.[8]

History


Amazon Web Services began hosting Common Crawl's archive through its Public Data Sets program in 2012.[9]

In July 2012, the organization began releasing metadata files and the text output of the crawlers alongside the .arc files; previously, Common Crawl's archives had included only .arc files.[10]

In December 2012, blekko donated to Common Crawl the search engine metadata it had gathered from crawls conducted from February to October 2012.[11] The donated data helped Common Crawl "improve its crawl while avoiding spam, porn and the influence of excessive SEO."[11]

In 2013, Common Crawl began using the Apache Software Foundation's Nutch web crawler instead of a custom crawler.[12] Common Crawl switched from .arc files to .warc files with its November 2013 crawl.[13]

A filtered version of Common Crawl was used to train OpenAI's GPT-3 language model, announced in 2020.[14]

Timeline of Common Crawl data


The following data have been collected from the official Common Crawl Blog[15] and Common Crawl's API.[16]

Crawl date Size in TiB Billions of pages Comments
April 2024 386 2.7 Crawl conducted from April 12 to April 24, 2024
February/March 2024 425 3.16 Crawl conducted from February 20 to March 5, 2024
December 2023 454 3.35 Crawl conducted from November 28 to December 12, 2023
June 2023 390 3.1 Crawl conducted from May 27 to June 11, 2023
April 2023 400 3.1 Crawl conducted from March 20 to April 2, 2023
February 2023 400 3.15 Crawl conducted from January 26 to February 9, 2023
December 2022 420 3.35 Crawl conducted from November 26 to December 10, 2022
October 2022 380 3.15 Crawl conducted in September and October 2022
April 2021 320 3.1
November 2018 220 2.6
October 2018 240 3.0
September 2018 220 2.8
August 2018 220 2.65
July 2018 255 3.25
June 2018 235 3.05
May 2018 215 2.75
April 2018 230 3.1
March 2018 250 3.2
February 2018 270 3.4
January 2018 270 3.4
December 2017 240 2.9
November 2017 260 3.2
October 2017 300 3.65
September 2017 250 3.01
August 2017 280 3.28
July 2017 240 2.89
June 2017 260 3.16
May 2017 250 2.96
April 2017 250 2.94
March 2017 250 3.07
February 2017 250 3.08
January 2017 250 3.14
December 2016 ? 2.85
October 2016 ? 3.25
September 2016 ? 1.72
August 2016 ? 1.61
July 2016 ? 1.73
June 2016 ? 1.23
May 2016 ? 1.46
April 2016 ? 1.33
February 2016 ? 1.73
November 2015 151 1.82
September 2015 106 1.32
August 2015 149 1.84
July 2015 145 1.81
June 2015 131 1.67
May 2015 159 2.05
April 2015 168 2.11
March 2015 124 1.64
February 2015 145 1.9
January 2015 139 1.82
December 2014 160 2.08
November 2014 135 1.95
October 2014 254 3.7
September 2014 220 2.8
August 2014 200 2.8
July 2014 266 3.6
April 2014 183 2.6
March 2014 223 2.8 First Nutch crawl
Winter 2013 148 2.3 Crawl conducted from December 4 through December 22, 2013
Summer 2013 ? ? Crawl conducted from May 2013 through June 2013. First WARC crawl
2012 ? ? Crawl conducted from January 2012 through June 2012. Final ARC crawl
2009-2010 ? ? Crawl conducted from July 2009 through September 2010
2008-2009 ? ? Crawl conducted from May 2008 through January 2009
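The per-crawl figures above are drawn partly from Common Crawl's API.[16] As a hedged illustration, the sketch below lists the available crawls from the public collection-info endpoint at https://index.commoncrawl.org/collinfo.json; the field names reflect the published JSON and should be verified against the live response.

    # A minimal sketch of listing available Common Crawl crawls via the public
    # collection-info endpoint; requests is a third-party HTTP library.
    import requests

    resp = requests.get("https://index.commoncrawl.org/collinfo.json", timeout=30)
    resp.raise_for_status()
    for crawl in resp.json():
        # Each entry is expected to expose at least "id" and "name" fields.
        print(crawl["id"], crawl["name"])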

Norvig Web Data Science Award


In collaboration with SURFsara, Common Crawl sponsors the Norvig Web Data Science Award, a competition open to students and researchers in Benelux.[17][18] The award is named after Peter Norvig, who also chairs the judging committee for the award.[17]

Colossal Clean Crawled Corpus


Google's cleaned version of Common Crawl is called the Colossal Clean Crawled Corpus, or C4 for short. It was constructed for the training of the T5 language model series in 2019.[19] There are some concerns over copyrighted content in C4.[20]
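For illustration only, the sketch below streams a few English C4 documents via the allenai/c4 mirror on the Hugging Face Hub using the datasets library; the mirror name and its "url" and "text" fields are assumptions about that distribution, not Google's official tooling.

    # A hedged sketch: stream a few English C4 documents from the assumed
    # allenai/c4 mirror on the Hugging Face Hub (not Google's own pipeline).
    from datasets import load_dataset

    c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)
    for i, doc in enumerate(c4):
        print(doc["url"], doc["text"][:80])  # assumed field names
        if i == 2:  # stop after three documents
            break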

References

  1. ^ Rosanna Xia (February 5, 2012). "Tech entrepreneur Gil Elbaz made it big in L.A." Los Angeles Times. Retrieved July 31, 2014.
  2. ^ "Gil Elbaz and Common Crawl". NBC News. April 4, 2013. Retrieved July 31, 2014.
  3. ^ "So you're ready to get started". Common Crawl. Retrieved June 9, 2023.
  4. ^ Lisa Green (January 8, 2014). "Winter 2013 Crawl Data Now Available". Retrieved June 2, 2018.
  5. ^ "Startups - Gil Elbaz and Nova Spivack of Common Crawl - TWiST #222". This Week In Startups. January 10, 2012.
  6. ^ Tom Simonite (January 23, 2013). "A Free Database of the Entire Web May Spawn the Next Google". MIT Technology Review. Archived from the original on June 26, 2014. Retrieved July 31, 2014.
  7. ^ Schäfer, Roland (May 2016). "CommonCOW: Massively Huge Web Corpora from CommonCrawl Data and a Method to Distribute them Freely under Restrictive EU Copyright Laws". Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16). Portorož, Slovenia: European Language Resources Association (ELRA): 4501.
  8. ^ "Statistics of Common Crawl Monthly Archives by commoncrawl". commoncrawl.github.io. Retrieved April 2, 2023.
  9. ^ Jennifer Zaino (March 13, 2012). "Common Crawl to Add New Data in Amazon Web Services Bucket". Semantic Web. Archived from the original on July 1, 2014. Retrieved July 31, 2014.
  10. ^ a b Jennifer Zaino (July 16, 2012). "Common Crawl Corpus Update Makes Web Crawl Data More Efficient, Approachable for Users to Explore". Semantic Web. Archived from the original on August 12, 2014. Retrieved July 31, 2014.
  11. ^ a b Jennifer Zaino (December 18, 2012). "Blekko Data Donation Is a Big Benefit to Common Crawl". Semantic Web. Archived from the original on August 12, 2014. Retrieved July 31, 2014.
  12. ^ Jordan Mendelson (February 20, 2014). "Common Crawl's Move to Nutch". Common Crawl. Retrieved July 31, 2014.
  13. ^ Jordan Mendelson (November 27, 2013). "New Crawl Data Available!". Common Crawl. Retrieved July 31, 2014.
  14. ^ Brown, Tom; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini (2020-06-01). "Language Models Are Few-Shot Learners". p. 14. arXiv:2005.14165 [cs.CL]. the majority of our data is derived from raw Common Crawl with only quality-based filtering.
  15. ^ "Blog – Common Crawl".
  16. ^ "Collection info - Common Crawl".
  17. ^ a b Lisa Green (November 15, 2012). "The Norvig Web Data Science Award". Common Crawl. Retrieved July 31, 2014.
  18. ^ "Norvig Web Data Science Award 2014". Dutch Techcentre for Life Sciences. Archived from the original on August 15, 2014. Retrieved July 31, 2014.
  19. ^ Raffel, Colin; Shazeer, Noam; Roberts, Adam; Lee, Katherine; Narang, Sharan; Matena, Michael; Zhou, Yanqi; Li, Wei; Liu, Peter J. (2020). "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer". Journal of Machine Learning Research. 21 (140): 1–67. ISSN 1533-7928.
  20. ^ Hern, Alex (April 20, 2023). "Fresh concerns raised over sources of training material for AI systems". The Guardian. ISSN 0261-3077. Retrieved April 21, 2023.