Architecture of the YARN Resource Management Framework
YARN (Yet Another Resource Negotiator) is a general-purpose resource management and scheduling platform. Its basic design idea is to split the JobTracker of MRv1 (MapReduce in Hadoop 1.0) into two independent components: a global resource manager, the ResourceManager, and a per-application ApplicationMaster. The ResourceManager is responsible for resource management and allocation across the entire system, while an ApplicationMaster manages a single application. The architecture of YARN is shown in Figure 1.
Figure 1 YARN architecture
Figure 1 shows the three core components of the YARN architecture, which are described in detail below:
1. ResourceManager
The ResourceManager is the global resource management system. It is responsible for monitoring, allocating, and managing the resources of the entire YARN cluster. Its specific responsibilities are as follows:
(1) Processing client requests
(2) Receiving and monitoring resource status reports from NodeManagers (NM)
(3) Starting and monitoring ApplicationMasters (AM)
(4) Allocating and scheduling resources
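In a real deployment, the ResourceManager's location is declared in yarn-site.xml so that clients and NodeManagers can reach it. A minimal sketch follows; the hostname `master-node` is illustrative, and 8032 is the default client RPC port:

```xml
<!-- yarn-site.xml (fragment); hostname is illustrative -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master-node</value>
</property>
<property>
  <!-- Address clients use to submit applications (default port 8032) -->
  <name>yarn.resourcemanager.address</name>
  <value>master-node:8032</value>
</property>
```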
It is worth mentioning that the ResourceManager contains two components: the scheduler and the applications manager. The scheduler allocates the system's resources to running applications subject to constraints such as capacity and queues (for example, each queue is allocated a certain share of resources and may run at most a certain number of jobs). It is a "pure scheduler": it performs no work related to any specific application. The applications manager (ApplicationsManager) is responsible for managing all applications in the system, including accepting application submissions, negotiating resources to start each ApplicationMaster, monitoring the ApplicationMaster's running status, and restarting it on failure.
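With the Capacity Scheduler, the queue constraints mentioned above are expressed in capacity-scheduler.xml. A minimal sketch, in which the queue names (`prod`, `dev`) and percentages are illustrative choices, not defaults:

```xml
<!-- capacity-scheduler.xml (fragment); queue names and shares are illustrative -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>prod,dev</value>
</property>
<property>
  <!-- Percentage of cluster capacity guaranteed to each queue -->
  <name>yarn.scheduler.capacity.root.prod.capacity</name>
  <value>70</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.dev.capacity</name>
  <value>30</value>
</property>
```

The per-queue capacities under a parent queue must sum to 100, which is how the scheduler guarantees each queue its configured share.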
2. NodeManager
The NodeManager is the resource and task manager on each node. On the one hand, it periodically reports the node's resource usage to the ResourceManager; on the other hand, it receives and processes requests from ApplicationMasters to start and stop containers (Container).
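The resources a NodeManager reports as available for containers are themselves configured in yarn-site.xml. A minimal sketch; the specific values (8 GB of memory, 4 virtual cores) are illustrative and should match the node's actual hardware:

```xml
<!-- yarn-site.xml (fragment); values are illustrative -->
<property>
  <!-- Total memory (MB) this node offers for containers -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>
<property>
  <!-- Total virtual CPU cores this node offers for containers -->
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>4</value>
</property>
```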
3. ApplicationMaster
Each application submitted by a user contains an ApplicationMaster, which is responsible for negotiating resources with the ResourceManager and further allocating the obtained resources to the application's internal tasks, thereby performing a "second-level allocation". In addition, the ApplicationMaster monitors the execution and resource usage of its containers through the NodeManagers, and when a task fails it re-applies for resources in order to restart the task.
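The "second-level allocation" can be illustrated with a small conceptual sketch. This is not real YARN code (the actual Java API goes through AMRMClient and NMClient); all class and method names here are invented for illustration. The point is the division of labor: the ResourceManager hands out opaque containers, and the ApplicationMaster decides which of its own tasks runs in each one.

```python
# Conceptual sketch of YARN's two-level resource allocation.
# NOT real YARN code: ResourceManager/ApplicationMaster here are toy
# stand-ins whose names and methods are illustrative only.

from collections import deque


class ResourceManager:
    """First level: grants containers to an AM, up to cluster capacity.

    The RM knows nothing about the application's internal tasks.
    """

    def __init__(self, total_containers):
        self.free = total_containers

    def allocate(self, requested):
        granted = min(requested, self.free)
        self.free -= granted
        # Return opaque container IDs; what runs inside is not the RM's concern.
        return [f"container_{i}" for i in range(granted)]


class ApplicationMaster:
    """Second level: maps the granted containers onto its own tasks."""

    def __init__(self, tasks):
        self.pending = deque(tasks)
        self.assignments = {}

    def run(self, rm):
        containers = rm.allocate(len(self.pending))
        # Second-level allocation: the AM, not the RM, decides which
        # task is placed in which container.
        for c in containers:
            self.assignments[c] = self.pending.popleft()
        return self.assignments


rm = ResourceManager(total_containers=3)
am = ApplicationMaster(tasks=["map_0", "map_1", "map_2", "map_3"])
print(am.run(rm))       # three tasks placed into the three granted containers
print(len(am.pending))  # one task still waiting for resources
```

If a task later failed, the real ApplicationMaster would call back into the ResourceManager for a replacement container, which is the re-application behavior described above.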
The current version of YARN ships with two ApplicationMaster implementations. One is DistributedShell, a sample program that demonstrates how to write an ApplicationMaster; it can request a number of containers to run a shell command or shell script in parallel. The other is MRAppMaster, the ApplicationMaster used to run MapReduce applications.
It should be noted that the ResourceManager is responsible for monitoring each ApplicationMaster and restarting it when it fails, which greatly improves the scalability of the cluster. The ResourceManager is not responsible for fault tolerance of the tasks inside an application; task-level fault tolerance is handled by the ApplicationMaster itself. In general, the main functions of the ApplicationMaster are resource negotiation, monitoring, and fault tolerance.