Apache Hadoop

Developer(s) Apache Software Foundation
Initial release December 10, 2011
Stable release 2.7.3 / August 25, 2016
Repository git-wip-us.apache.org/repos/asf/hadoop.git
Development status Active
Written in Java
Operating system Cross-platform
Type Distributed file system
License Apache License 2.0
Website hadoop.apache.org

Apache Hadoop (pronounced /həˈduːp/) is an open-source software framework for distributed storage and distributed processing of big data sets using the MapReduce programming model. It is designed to run on computer clusters built from commodity hardware. All the modules in Hadoop are designed with the fundamental assumption that hardware failures are common occurrences and should be automatically handled by the framework.

The core of Apache Hadoop consists of a storage part, known as the Hadoop Distributed File System (HDFS), and a processing part, which is an implementation of the MapReduce programming model. Hadoop splits files into large blocks and distributes them across nodes in a cluster. It then transfers packaged code to the nodes so that they can process the data in parallel. This approach takes advantage of data locality: each node processes the data it stores locally. This allows the dataset to be processed faster and more efficiently than in a more conventional supercomputer architecture that relies on a parallel file system, where computation and data are distributed via high-speed networking.
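The MapReduce model behind this processing part can be illustrated in miniature. The following is a single-process sketch in plain Java (it uses no Hadoop APIs and is not the Hadoop implementation): a map phase emits (word, 1) pairs from each input line, and a shuffle-and-reduce phase groups the pairs by word and sums the counts. In a real Hadoop job, the map calls run in parallel on the nodes holding each file block, and the framework performs the shuffle across the network.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Minimal single-process illustration of the MapReduce programming model:
// a "map" phase emits (word, 1) pairs, then a combined "shuffle and reduce"
// phase groups the pairs by word and sums the counts for each word.
public class WordCount {

    // Map phase: split one input line into words and emit a (word, 1) pair per word.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String word : line.toLowerCase().split("\\W+")) {
            if (!word.isEmpty()) {
                pairs.add(new SimpleEntry<>(word, 1));
            }
        }
        return pairs;
    }

    // Shuffle + reduce phase: group pairs by key and sum the values per key.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> pair : pairs) {
            counts.merge(pair.getKey(), pair.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // In Hadoop, each file block would be mapped in parallel on the node storing it.
        String[] lines = {"the quick brown fox", "the lazy dog"};
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String line : lines) {
            pairs.addAll(map(line));
        }
        Map<String, Integer> counts = reduce(pairs);
        System.out.println(counts); // prints {brown=1, dog=1, fox=1, lazy=1, quick=1, the=2}
    }
}
```

Because each map call depends only on its own input line, the map phase parallelizes trivially; the framework's real work is the distributed shuffle that routes all pairs with the same key to the same reducer.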

The base Apache Hadoop framework is composed of the following modules:

Hadoop Common – the libraries and utilities needed by the other Hadoop modules;
Hadoop Distributed File System (HDFS) – a distributed file system that stores data on commodity machines;
Hadoop YARN – a resource-management platform responsible for managing computing resources in clusters and using them to schedule users' applications;
Hadoop MapReduce – an implementation of the MapReduce programming model for large-scale data processing.

The term Hadoop has come to refer not just to the base modules above, but also to the ecosystem: the collection of additional software packages that can be installed on top of or alongside Hadoop, such as Apache Pig, Apache Hive, Apache HBase, Apache Phoenix, Apache Spark, Apache ZooKeeper, Cloudera Impala, Apache Flume, Apache Sqoop, Apache Oozie, and Apache Storm.


...
Wikipedia

...