| Developer(s) | Apache Software Foundation |
|---|---|
| Stable release | 1.10 and 2.3 / May 6, 2015 |
| Development status | Active |
| Written in | Java |
| Operating system | Cross-platform |
| Type | Web crawler |
| License | Apache License 2.0 |
| Website | nutch |
Apache Nutch is a highly extensible and scalable open source web crawler software project.
Nutch is coded entirely in the Java programming language, but data is written in language-independent formats. It has a highly modular architecture, allowing developers to create plug-ins for media-type parsing, data retrieval, querying and clustering.
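The plug-in pattern described above can be sketched in plain Java. This is an illustrative registry with hypothetical names, not Nutch's actual plugin API: parser implementations register against a media type, and the crawler looks up the right one per fetched document.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of a media-type plug-in registry; Nutch's real
// extension-point mechanism differs, but the shape is similar.
public class PluginRegistry {
    // A minimal parser extension point: turns raw content into text.
    interface Parser {
        String parse(byte[] content);
    }

    private final Map<String, Parser> parsers = new HashMap<>();

    // Plug-ins register themselves against a MIME type.
    public void register(String mimeType, Parser parser) {
        parsers.put(mimeType, parser);
    }

    // The crawler asks for a parser matching the document's MIME type.
    public Optional<Parser> lookup(String mimeType) {
        return Optional.ofNullable(parsers.get(mimeType));
    }

    public static void main(String[] args) {
        PluginRegistry registry = new PluginRegistry();
        // A trivial "plug-in" for plain text; a real deployment would
        // load implementations from plugin descriptors instead.
        registry.register("text/plain", bytes -> new String(bytes));
        String out = registry.lookup("text/plain")
                .map(p -> p.parse("hello".getBytes()))
                .orElse("(no parser)");
        System.out.println(out); // prints hello
    }
}
```

The indirection is what lets developers add support for a new media type without touching the crawler core.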
The fetcher ("robot" or "web crawler") has been written from scratch specifically for this project.
Nutch originated with Doug Cutting, creator of both Lucene and Hadoop, and Mike Cafarella.
In June 2003, a successful 100-million-page demonstration system was developed. To meet the multi-machine processing needs of the crawl and index tasks, the Nutch project also implemented a MapReduce facility and a distributed file system. The two facilities were later spun out into their own subproject, called Hadoop.
In January 2005, Nutch joined the Apache Incubator, from which it graduated to become a subproject of Lucene in June of that same year. Since April 2010, Nutch has been an independent, top-level project of the Apache Software Foundation.
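The MapReduce model that Nutch implemented (and that became Hadoop) can be sketched in plain Java. This is a conceptual illustration, not the Hadoop API: a map phase emits (word, 1) pairs from each input line, and the shuffle-and-reduce phase groups pairs by key and sums the counts.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Conceptual word-count sketch of the MapReduce model in plain Java;
// real MapReduce distributes the map and reduce phases across machines.
public class WordCount {
    public static Map<String, Integer> mapReduce(List<String> lines) {
        return lines.stream()
                // map phase: emit one (word, 1) pair per token
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\s+")))
                .filter(word -> !word.isEmpty())
                // shuffle + reduce: group pairs by word and sum the ones
                .collect(Collectors.toMap(word -> word, word -> 1, Integer::sum));
    }

    public static void main(String[] args) {
        Map<String, Integer> counts =
                mapReduce(List.of("to crawl or not to crawl"));
        System.out.println(counts.get("to"));    // prints 2
        System.out.println(counts.get("crawl")); // prints 2
    }
}
```

The same split into a per-record map step and a per-key reduce step is what let crawl and index tasks scale across a cluster.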
In February 2014 the Common Crawl project adopted Nutch for its open, large-scale web crawl.
While it was once a goal for the Nutch project to release a global large-scale web search engine, that is no longer the case.
Nutch offers several advantages over a simple fetcher, including its modular, pluggable architecture and its ability to scale crawl and index tasks across many machines.
IBM Research studied the performance of Nutch/Lucene as part of its Commercial Scale Out (CSO) project. It found that a scale-out system, such as Nutch/Lucene, could achieve a performance level on a cluster of blades that was not achievable on any scale-up computer based on the POWER5 processor.