What is the Hadoop Architecture?


Hadoop follows a distributed computing architecture designed to store and process large-scale data across a cluster of commodity hardware. The core components of the Hadoop architecture are:

- Hadoop Distributed File System (HDFS): the primary storage system of Hadoop. It divides large files into smaller blocks (typically 128 MB or 256 MB) and distributes them across multiple nodes in a cluster. HDFS provides fault tolerance by replicating these blocks across nodes.
- NameNode: a crucial component of the HDFS architecture. It manages metadata about files and directories, including their locations, permissions, and block details. The NameNode does not store the actual data but tracks where it lives in the cluster.
- DataNode: DataNodes store and manage the actual data blocks. They hold data locally on their respective nodes and report the status of the stored blocks to the NameNode. DataNodes are distributed across the cluster.
- ResourceManager: part of the YARN (Yet Another Resource Negotiator) framework; it manages the allocation of resources in the cluster. It receives resource requests from applications and coordinates with NodeManagers for resource assignment.
- NodeManager: NodeManagers run on individual nodes in the cluster and manage the resources (CPU, memory) on that node. They communicate with the ResourceManager to request and release resources, and they oversee the execution of application tasks.
- JobTracker (deprecated): in older versions of Hadoop using the MapReduce 1 (MRv1) architecture, the JobTracker coordinated MapReduce jobs. This architecture has largely been deprecated in favor of YARN; newer versions use the ResourceManager for job coordination.
- TaskTracker (deprecated): like the JobTracker, the TaskTracker was part of MRv1 and executed the individual tasks of a MapReduce job. In YARN-based architectures, NodeManagers handle task execution instead.
- YARN (Yet Another Resource Negotiator): a resource management layer that separates resource management from job scheduling in Hadoop, allowing multiple applications to share resources on the same cluster. YARN comprises the ResourceManager and NodeManager components.
- MapReduce: a programming model and processing engine for parallel processing of large datasets. It consists of two main phases: the Map phase, where data is processed and transformed, and the Reduce phase, where results are aggregated.
- Secondary NameNode: not a backup or failover NameNode. It periodically merges the edits log with the current snapshot of the file system (the fsimage) so the edits log does not grow too large, assisting the NameNode with checkpointing.
- Hadoop ecosystem components: beyond the core, the ecosystem includes projects and tools that extend Hadoop's functionality, such as Apache Hive for data warehousing, Apache HBase for NoSQL storage, and Apache Spark for in-memory processing.

The architecture described here is a simplified, high-level view. Hadoop's architecture can vary with the distribution (Cloudera, Hortonworks, etc.) and the specific configuration within an organization. The shift to YARN has played a significant role in making Hadoop more versatile by allowing various data processing frameworks to run on the same cluster.
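The Map and Reduce phases described above can be sketched in plain Python. This is a toy, single-process illustration of the programming model only, not Hadoop's actual Java API; the shuffle step, which the framework normally performs between distributed tasks, is shown here as a simple in-memory grouping:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    """Map: emit a (word, 1) pair for every word in one input line."""
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    """Shuffle/sort: group all emitted values by key, as the framework
    does between the Map and Reduce phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Reduce: aggregate the grouped values for one key."""
    return key, sum(values)

# Classic word count over two "input splits".
lines = ["hadoop stores data", "hadoop processes data"]
mapped = chain.from_iterable(map_phase(line) for line in lines)
counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(counts)  # {'hadoop': 2, 'stores': 1, 'data': 2, 'processes': 1}
```

On a real cluster, many map tasks run in parallel on different blocks of the input, and the framework partitions the shuffled keys across reduce tasks; the logic per phase, however, is exactly this simple.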

Related Questions

Which Hadoop course should I take?
Take an Apache Spark and Scala course. Spark is in high demand right now and is one of the most efficient and heavily used big data tools on the market. I offer an Apache Spark course with Scala and Python; you can reach out to me for more details.
Srinivasan

Hi, I am currently working as a PHP developer with 5 years of experience. I want to change technologies. Can anyone suggest which technology would be better for me now and in the future: Hadoop, or Node with Angular JS?

Big Data technologies are for data processing, whereas Angular is a UI framework. I would recommend that you consider learning Big Data technologies.
Srikanth

I want to take online classes on database/ETL testing.

 

I also look forward to teaching Mathematics/Science for classes X-XII.

The two are related, but compared to DBA jobs, ETL jobs are more in demand, so you should take classes on Informatica and similar tools.
Varsha


Related Lessons

Why is the Hadoop essential?
The capacity to store and process large amounts of any kind of data, rapidly. With data volumes and varieties constantly expanding, especially from social media and the Internet of Things (IoT), that...

CheckPointing Process - Hadoop
CHECKPOINTING: Checkpointing is one of the vital concepts/activities in Hadoop. The NameNode stores its metadata on its hard disk. We all know that metadata is the heart core...

Python Programming or R- Programming
Most students ask me this question before they join the classes: whether to go with Python or R. Here is my short analysis of this very common topic. If you have an interest/or have a job...

Loading Hive tables as a parquet File
Hive tables are very important when it comes to Hadoop and Spark, as both can integrate with and process the tables in Hive. Let's see how we can create a Hive table that internally stores the records in it...

Linux File System
Linux File System: Right-click on the Desktop and choose Open in Terminal. Log in to the Linux system and run simple commands. Check the present working directory: $pwd → /home/cloudera/Desktop. Change directory: $cd...

Recommended Articles

In the domain of Information Technology, there is always a lot to learn and implement. However, some technologies are in relatively higher demand than the rest. So here are some popular IT courses for the present and the near future: Cloud Computing: Cloud Computing is a computing technique which is used...


Big data is a phrase used to describe a very large amount of structured (or unstructured) data. This data is so “big” that it becomes problematic to handle using conventional database techniques and software. A Big Data Scientist is a business employee who is responsible for handling and statistically evaluating...


We have already discussed why and how “Big Data” is all set to revolutionize our lives, professions and the way we communicate. Data is growing by leaps and bounds. The Walmart database handles over 2.6 petabytes of massive data from several million customer transactions every hour. Facebook database, similarly handles...


Hadoop is a framework developed for organizing and analysing big chunks of data for a business. Suppose you have a file larger than your system’s storage capacity and you can’t store it. Hadoop helps in storing files bigger than what could be stored on one particular server. You can therefore store very,...

