Contributing a Limited Amount of Storage as a Slave to a Master/Client

OBJECTIVE: To make the DataNodes contribute a limited amount of storage to the Master Node (which also works as a Client) in a Hadoop cluster.

STEPS:

SETTING UP DATANODE-1

  • Creating an EBS (Elastic Block Store) volume for Datanode1. This volume is created so that the DataNode can share a limited part of its storage with the Master.

This shows that the volume is successfully created.

  • Now attaching this volume to Datanode1.
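The volume-creation and attach steps above were done in the AWS console; a rough equivalent using the AWS CLI might look like the sketch below. The availability zone, volume ID, and instance ID are placeholders for this setup's actual values.

```shell
# Create a 1 GiB EBS volume in the same AZ as the instance (AZ is a placeholder)
aws ec2 create-volume --size 1 --availability-zone ap-south-1a \
    --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=Datanode1}]'

# Attach it to the Datanode1 instance as device /dev/xvdf (IDs are placeholders)
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/xvdf
```

The volume must be created in the same availability zone as the instance, or the attach call will fail.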
  • Now creating a partition of 500 MB.
  • The partition /dev/xvdf1 is created.
  • Now formatting the partition /dev/xvdf1.
  • To mount the storage, I created the /DN1 directory.
  • After mounting the volume to /DN1, the folder shows about 445 MB of usable storage (the 500 MB partition minus filesystem overhead).
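The partition, format, and mount steps above can be sketched as the following commands, run as root on the Datanode1 instance (the interactive fdisk session is shown here as a heredoc; /dev/xvdf is the attached volume):

```shell
# Create a 500 MB primary partition on the attached 1 GiB volume
fdisk /dev/xvdf <<EOF
n
p
1

+500M
w
EOF

# Format the new partition with an ext4 filesystem
mkfs.ext4 /dev/xvdf1

# Create a mount point and mount the partition there
mkdir /DN1
mount /dev/xvdf1 /DN1

# Verify the usable size (~445 MB after filesystem overhead)
df -h /DN1
```

The blank line in the fdisk input accepts the default first sector; +500M sets the partition size.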
  • Now configuring the hdfs-site.xml and core-site.xml files.
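The hdfs-site.xml and core-site.xml changes referred to above typically look like the snippets below. The property names follow the classic Hadoop 1.x style that matches the hadoop-daemon.sh commands used here; MASTER_IP and the port are placeholders for this cluster's actual NameNode address.

```xml
<!-- hdfs-site.xml on Datanode1: use the mounted partition as the DataNode's storage directory -->
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/DN1</value>
  </property>
</configuration>

<!-- core-site.xml on Datanode1: point the DataNode at the Master (NameNode); MASTER_IP is a placeholder -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://MASTER_IP:9001</value>
  </property>
</configuration>
```

Because dfs.data.dir points at /DN1, the DataNode can only offer the mounted partition's capacity to the cluster, which is what limits the storage it contributes.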

After this, we start the DataNode using the command hadoop-daemon.sh start datanode.
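Starting and then verifying the daemon might look like this on Datanode1 (jps simply lists the running Java processes, so a DataNode entry confirms the daemon came up):

```shell
# Start the DataNode daemon
hadoop-daemon.sh start datanode

# Verify that a DataNode JVM is now running
jps
```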

SETTING UP DATANODE-2

  • Similarly, creating one more EBS (Elastic Block Store) volume for Datanode2. This volume is created so that this DataNode can also share limited storage with the Master Node.

After creating the volume, we can see that the volume named Datanode2 is created successfully.

  • After this, attaching the volume to Datanode2.
  • After attaching the volume to Datanode2, connecting to the EC2 instance using remote login. From there it can be verified that one more 1 GiB EBS volume is attached.
  • Now creating a partition of 400 MB.
  • The partition /dev/xvdf1 is created.
  • Now formatting the partition /dev/xvdf1.
  • To mount the storage, I created the /DN2 directory.
  • After mounting the volume to /DN2, the folder shows about 354 MB of usable storage (the 400 MB partition minus filesystem overhead).
  • Now configuring the hdfs-site.xml and core-site.xml files in Datanode2, similarly as done in Datanode1. After this, we start the DataNode using the command hadoop-daemon.sh start datanode.

In Master Setup:

  • After this setup, we check on the Master Node whether the DataNodes are connected successfully, and also check the storage they share with the Master Node.
  • Using the hadoop dfsadmin -report command, we check the DataNodes' connection and the storage they share with the Master.
  • Through this, we conclude that Datanode1 shares about 476 MB of storage and Datanode2 shares about 379 MB.
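Run on the Master, the report command prints per-DataNode capacity. A heavily abridged sketch of what to look for is shown below as comments (the IPs are placeholders, the capacity figures are the ones quoted above, and the exact formatting varies by Hadoop version):

```shell
# On the Master/NameNode: list live DataNodes and their configured capacity
hadoop dfsadmin -report

# Abridged sample output for this cluster:
#   Datanodes available: 2
#   Name: DATANODE1_IP:50010
#   Configured Capacity: ~476 MB
#   Name: DATANODE2_IP:50010
#   Configured Capacity: ~379 MB
```

The Configured Capacity of each DataNode matches its mounted partition, confirming that only the partitioned slice of each disk is offered to the cluster.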
So, through partitioning, the DataNodes share only a limited amount of their storage with the Master Node.




Ayush Ganatra

AWS Solutions Architect Professional | Certified Kubernetes Security Specialist | Certified Kubernetes Administrator | Azure Administrator

4y

Great work👍👍

Laveena Jethani

Salesforce DevOps Engineer at Red Hat

4y

Great work Shrishti Kapoor Keep it up 👍👍👍

More articles by Shrishti Kapoor
