In our previous video blog, we learned the difference between vertical scaling and horizontal scaling architectures and why a horizontal scaling architecture is better suited for storing and processing large Big Data sets.
In this video blog, we will continue the discussion with how the Hadoop framework works and how it achieves fault tolerance.
As we know, the leading Big Data technology today is Hadoop, an open-source framework for distributed storage and distributed processing of very large data sets on clusters built from commodity hardware, and it provides the first viable platform for Big Data analytics.
Here, it is worth understanding why a Hadoop cluster is built from commodity hardware, such as ordinary desktop computers, to store and process large data sets.
The reason is cost: commodity machines are far cheaper than high-end servers, which are very expensive to buy.
In the Hadoop framework, the actual data is stored on these commodity machines, which are known as DataNodes.
The Hadoop framework is designed so that if one DataNode in the cluster goes down, the data remains safe thanks to the replication factor: Hadoop keeps copies of each block on different DataNodes within the same cluster. The user can then fix the affected DataNode or simply replace it with a new one.
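To make the replication factor more concrete, here is a minimal sketch in Java using the standard Hadoop FileSystem API. It assumes a reachable HDFS cluster with its configuration files on the classpath, and the file path /data/sample.txt is purely hypothetical; the dfs.replication property shown here is the same setting that hdfs-site.xml uses for the cluster-wide default (commonly 3).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
    public static void main(String[] args) throws Exception {
        // Load the cluster configuration (core-site.xml / hdfs-site.xml on the classpath).
        Configuration conf = new Configuration();

        // dfs.replication controls how many copies of each block HDFS keeps for new files.
        conf.setInt("dfs.replication", 3);

        FileSystem fs = FileSystem.get(conf);

        // Hypothetical file path, used only for illustration.
        Path file = new Path("/data/sample.txt");

        // Read back the replication factor recorded for this file.
        FileStatus status = fs.getFileStatus(file);
        System.out.println("Replication factor of " + file + ": " + status.getReplication());

        // The replication factor of an existing file can also be changed per file.
        fs.setReplication(file, (short) 2);

        fs.close();
    }
}

With a replication factor of 3, losing a single DataNode never loses data, because two other copies of each block still exist elsewhere in the cluster, and Hadoop re-replicates the missing copies automatically.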
For more insight into the concepts of Big Data and Hadoop, you can refer to our blogs below:
Big Data Terminologies You Must Know
How Hadoop Is Used In Organizations
We hope this blog gave you detailed information on how Hadoop protects the data stored on DataNodes and why commodity hardware is used in a Hadoop cluster.
Keep visiting our website for more posts on Big Data and other technologies.