Review of Yarn: Hadoop MapReduce v2

Apache Hadoop has released version 2.6.0, which separates job scheduling from resource management, two functions that were combined in older versions. The structure of the old Hadoop framework is as follows.

Figure: Hadoop MapReduce Framework Architecture

The original structure has a clear workflow:

  1. The JobClient submits a job to the JobTracker, which is the center of the whole structure. The JobTracker creates tasks, allocates machines to run specific tasks, and controls the state of jobs (such as failure and restart). It also tracks the liveness of the machines (TaskTrackers) in the cluster through their heartbeats.
  2. Each machine runs a TaskTracker, which monitors the resources and tasks on that machine. It collects information about its resources and tasks and sends it to the JobTracker via heartbeat, and the JobTracker uses this information to allocate further work.

But as distributed systems and workloads have grown in scale, several issues with the old framework have emerged:

  1. The JobTracker, as a heavily loaded central node, is prone to breaking down and is a single point of failure. It also consumes too much memory once the number of jobs grows large (the old Hadoop typically supports clusters of no more than about 4,000 nodes).
  2. The TaskTracker measures resources by the number of task slots, but different tasks consume different amounts of memory and CPU.

MapReduce underwent a complete overhaul in Hadoop 0.23, resulting in what is now called MapReduce 2.0 (MRv2) or YARN [1]. The new structure is as follows.

Figure: Hadoop Yarn Framework Architecture

YARN splits the JobTracker into two independent components: a per-application ApplicationMaster, which schedules and monitors the tasks of its application, and a global ResourceManager, which manages the cluster's resources. The ApplicationMaster cooperates with the NodeManager on each node to launch and monitor tasks using the resources granted by the ResourceManager.
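
To make this division of labor concrete, here is a minimal sketch (in Java, against the hadoop-yarn-client API) of how a client hands an application to the ResourceManager, which then launches the ApplicationMaster. The application name, queue, launch command, and resource sizes below are placeholder assumptions for illustration, not values from the original post.

```java
import org.apache.hadoop.yarn.api.records.*;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

import java.util.Collections;

public class SubmitToYarn {
    public static void main(String[] args) throws Exception {
        YarnConfiguration conf = new YarnConfiguration();
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(conf);
        yarnClient.start();

        // Ask the ResourceManager for a new application id.
        YarnClientApplication app = yarnClient.createApplication();
        ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
        appContext.setApplicationName("demo-app"); // placeholder name

        // Describe how to launch the ApplicationMaster container
        // (placeholder command; a real AM would typically be a jar shipped via HDFS).
        ContainerLaunchContext amContainer = ContainerLaunchContext.newInstance(
                Collections.emptyMap(),                          // local resources
                Collections.emptyMap(),                          // environment
                Collections.singletonList("echo launching AM"),  // placeholder command
                null, null, null);
        appContext.setAMContainerSpec(amContainer);

        // Resources requested for the AM container itself: 512 MB, 1 vcore (assumed sizes).
        appContext.setResource(Resource.newInstance(512, 1));
        appContext.setQueue("default");

        // Hand the application to the ResourceManager; from here the RM schedules
        // the AM, and the AM negotiates the remaining containers itself.
        ApplicationId appId = yarnClient.submitApplication(appContext);
        System.out.println("Submitted application " + appId);

        yarnClient.stop();
    }
}
```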

The ResourceManager is essentially a scheduler: it allocates resources to applications according to the resource requirements sent by their ApplicationMasters. Resource monitoring, tracking, and reporting are handled by the NodeManagers, while application status monitoring, tracking, and control are handled by the ApplicationMasters.
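
The requirement-sending side can be sketched with the AMRMClient API: the ApplicationMaster registers with the ResourceManager, describes what each task needs, and receives containers back in allocation responses. The memory/vcore sizes and the empty host and tracking-URL values below are assumptions for illustration.

```java
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class AmResourceRequest {
    public static void main(String[] args) throws Exception {
        YarnConfiguration conf = new YarnConfiguration();

        // The ApplicationMaster talks to the ResourceManager through this client.
        AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
        rmClient.init(conf);
        rmClient.start();
        rmClient.registerApplicationMaster("", 0, ""); // host/port/tracking URL (placeholders)

        // Describe what one task needs: 1024 MB and 1 vcore (assumed sizes),
        // with no locality constraints (no preferred nodes or racks).
        Resource capability = Resource.newInstance(1024, 1);
        ContainerRequest request =
                new ContainerRequest(capability, null, null, Priority.newInstance(0));
        rmClient.addContainerRequest(request);

        // Heartbeat to the ResourceManager; granted containers come back
        // incrementally in the allocation responses.
        AllocateResponse response = rmClient.allocate(0.0f);
        for (Container container : response.getAllocatedContainers()) {
            System.out.println("Granted container " + container.getId()
                    + " on " + container.getNodeId());
        }
    }
}
```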

A Container is YARN's unit of resource allocation and isolation, a concept similar to the resource containers in Mesos.
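
As a rough illustration of how a granted Container is used, the ApplicationMaster asks the NodeManager on the container's node to launch a process inside it, and that NodeManager enforces the container's resource limits while the process runs. The command below is a placeholder.

```java
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.client.api.NMClient;

import java.util.Collections;

public class LaunchInContainer {
    /**
     * Launch a task inside a container previously granted by the ResourceManager.
     * The NodeManager on the container's node runs the command within the
     * container's memory/CPU limits.
     */
    static void launchTask(NMClient nmClient, Container container) throws Exception {
        ContainerLaunchContext ctx = ContainerLaunchContext.newInstance(
                Collections.emptyMap(),                           // local resources
                Collections.emptyMap(),                           // environment
                Collections.singletonList("echo running task"),   // placeholder command
                null, null, null);
        nmClient.startContainer(container, ctx);
    }
}
```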


Reference

[1] Hadoop Yarn: http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html
