Hadoop is a software platform that lets one easily write and
run applications that process vast amounts of data.
What is Hortonworks Apache Hadoop (HDP 2.1.3)?
The Apache Hadoop software library is a framework that allows for the
distributed processing of large data sets across clusters of computers
using a simple programming model.
It is designed to scale up from single servers to thousands of machines, each
offering local computation and storage. Rather than rely on hardware to deliver
high availability, the library itself is designed to detect and handle failures
at the application layer, thereby delivering a highly available service on top
of a cluster of computers, each of which may be prone to failure.
Apache Hadoop 2.4.1 includes significant improvements over the previous
stable release (hadoop-1.x).
Here is a short overview of the improvements to both HDFS and MapReduce.
- HDFS Federation: In order to scale the name service horizontally, federation uses multiple independent Namenodes/namespaces. The Namenodes are federated; that is, they are independent and do not require coordination with each other. The Datanodes are used as common storage for blocks by all the Namenodes: each Datanode registers with every Namenode in the cluster, sends periodic heartbeats and block reports, and handles commands from the Namenodes.
More details are available in the [HDFS Federation document](http://hadoop.apache.org/docs/r2.2.0/hadoop-project-dist/hadoop-hdfs/Federation.html)
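As an illustration of federation, each independent namespace is listed in `hdfs-site.xml` and every Datanode serves all of them. The nameservice IDs (`ns1`, `ns2`) and hostnames below are placeholders for illustration, not values used by this charm:

```xml
<!-- hdfs-site.xml sketch: two independent namespaces sharing the same
     Datanodes. Nameservice IDs and hostnames are illustrative placeholders. -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>nn1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>nn2.example.com:8020</value>
  </property>
</configuration>
```

Because the Namenodes never coordinate with one another, adding a namespace is a matter of adding another `dfs.namenode.rpc-address.*` entry; the Datanodes pick up all listed Namenodes and register with each.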
- MapReduce NextGen, aka YARN, aka MRv2: The new architecture, introduced in hadoop-0.23, divides the two major functions of the JobTracker, resource management and job life-cycle management, into separate components. The new ResourceManager manages the global assignment of compute resources to applications, and the per-application ApplicationMaster manages the application's scheduling and coordination. An application is either a single job in the sense of classic MapReduce jobs or a DAG of such jobs.
The ResourceManager and per-machine NodeManager daemon, which manages the user processes of that machine, form the computation fabric.
The per-application ApplicationMaster is, in effect, a framework specific library and is tasked with negotiating resources from the ResourceManager and working with the NodeManager(s) to execute and monitor the tasks.
More details are available in the YARN document
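The ResourceManager/NodeManager split described above is visible directly in `yarn-site.xml`: every NodeManager points at the single cluster-wide ResourceManager. A minimal sketch, with a placeholder hostname:

```xml
<!-- yarn-site.xml sketch: NodeManagers register with one ResourceManager.
     The hostname is an illustrative placeholder. -->
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>rm.example.com</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```

The `mapreduce_shuffle` auxiliary service is what lets MapReduce jobs (one kind of YARN application) move map output to reducers on the NodeManagers.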
This charm supports the following Hadoop roles:
- HDFS: namenode, secondarynamenode and datanode (TBD: HDFS Federation)
- YARN: ResourceManager, NodeManager
This charm supports deploying Hadoop in a number of configurations.
HDP 2.1.3 Usage #1: Combined HDFS and MapReduce
In this configuration, the YARN ResourceManager is deployed on the same
service units as the HDFS namenode, and the HDFS datanodes also run the YARN
NodeManager::

    juju deploy hdp-hadoop yarn-hdfs-master
    juju deploy hdp-hadoop compute-node
    juju add-unit -n 2 compute-node
    juju add-relation yarn-hdfs-master:namenode compute-node:datanode
    juju add-relation yarn-hdfs-master:resourcemanager compute-node:nodemanager
Known Limitations and Issues
Note that removing the relation between namenode and datanode is destructive!
The role of the service is determined at the point that the relation is added
(it must be qualified) and CANNOT be changed later!
A single namenode service can support multiple slave service deployments::

    juju deploy hdp-hadoop hdfs-datacluster-02
    juju add-unit -n 2 hdfs-datacluster-02
    juju add-relation hdfs-namenode:namenode hdfs-datacluster-02:datanode
Amir Sanjar firstname.lastname@example.org
- (string) Space separated list of directories where DataNode will store data blocks.
- (string) Space separated list of directories where NameNode will store file system image.
- (string) Space separated list of directories where SecondaryNameNode will store checkpoint image.
- (string) Space separated list of directories where YARN will store temporary data.
- (string) Space separated list of directories where YARN will store container log data.
- (string) Directory where ZooKeeper will store data.
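These directory options can be supplied at deploy time through a juju config file. The option keys below are hypothetical placeholders; consult the charm's config.yaml for the actual key names before use:

```yaml
# myconfig.yaml -- option keys are hypothetical illustrations; check the
# charm's config.yaml for the real names.
hdp-hadoop:
  hdfs_data_dir: "/mnt/disk1/hdfs/data /mnt/disk2/hdfs/data"
  hdfs_name_dir: "/mnt/disk1/hdfs/name"
  yarn_local_dir: "/mnt/disk1/yarn/local"
  yarn_log_dir: "/mnt/disk1/yarn/logs"
```

Such a file is passed at deploy time with `juju deploy --config myconfig.yaml hdp-hadoop`. Listing several space-separated data directories spreads DataNode blocks across multiple disks.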