Hadoop server roles
- Master nodes.
- Slave nodes.
- Client nodes.
- Master nodes are mainly responsible for two things:
- How and where the data is stored.
- How the stored data is processed in parallel.
- Master nodes run the master daemons.
- The master daemons are the NameNode and the JobTracker.
- The NameNode is the HDFS master: it keeps track of how and where the data is stored (see the sketch below).
- The JobTracker is the MapReduce master: it coordinates processing the stored data in parallel.
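
To make the NameNode's bookkeeping concrete, here is a minimal sketch that asks HDFS which DataNodes hold each block of a file; the NameNode answers this query from its metadata. The NameNode address and the file path below are assumptions for illustration.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocations {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:9000"); // assumed NameNode address
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/user/hadoop/input/data.txt"); // hypothetical file
        FileStatus status = fs.getFileStatus(file);

        // For each block of the file, the NameNode reports which DataNodes
        // hold a replica: this is its "where is the data" bookkeeping.
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("block at offset " + block.getOffset()
                    + " on hosts: " + String.join(", ", block.getHosts()));
        }
        fs.close();
    }
}
```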
- Slave nodes store the actual data (raw data) and run the computations over that data.
- Slave nodes run the slave daemons.
- The slave daemons are the DataNode and the TaskTracker.
- The DataNode is slave to the NameNode.
- The DataNode communicates with the NameNode to receive instructions, reporting back through regular heartbeats and block reports.
- The TaskTracker is slave to the JobTracker.
- The TaskTracker communicates with the JobTracker to receive instructions, also via heartbeats.
- So the slave daemons work as per the master daemons' instructions.
- The main role of the client node is to load the data into the cluster (see the loading sketch below).
- The client node also submits the MapReduce jobs.
- A MapReduce job describes how that data should be processed (see the word-count sketch below).
- The client node receives the final results from finished jobs.
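
The loading step can be done with the HDFS shell or the Java client. Here is a minimal sketch with the Java client; the NameNode address and both file paths are assumptions for illustration.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LoadData {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:9000"); // assumed NameNode address
        FileSystem fs = FileSystem.get(conf);

        // Copy a local file into the cluster. HDFS splits it into blocks,
        // and the NameNode decides which DataNodes store each block.
        fs.copyFromLocalFile(
                new Path("/local/data/input.txt"),          // hypothetical local path
                new Path("/user/hadoop/input/input.txt"));  // hypothetical HDFS path
        fs.close();
    }
}
```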
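For job submission, here is the standard word-count example written against the classic org.apache.hadoop.mapred API, the one that talks to the JobTracker described above. The input and output paths are taken from the command line.

```java
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class WordCount {

    // Mapper: emit (word, 1) for every word in the input line.
    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output,
                        Reporter reporter) throws IOException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                output.collect(word, one);
            }
        }
    }

    // Reducer: sum the counts for each word.
    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output,
                           Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setReducerClass(Reduce.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        // Submit to the JobTracker and wait for the job to finish.
        JobClient.runJob(conf);
    }
}
```

JobClient.runJob submits the job to the JobTracker and blocks until it completes; afterwards the client can read the final results from the job's output directory in HDFS.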
Thanks for your time.
-Nireekshan