Contributing a Limited Amount of Storage as a Slave to a Master/Client
OBJECTIVE: To have the DataNodes contribute a limited amount of storage to the Master Node (which also acts as a Client) in a Hadoop cluster.
STEPS:
SETTING DATANODE-1
- Creating an EBS (Elastic Block Store) volume for Datanode1. This volume is what the DataNode will share with the Master.
This shows that the volume is created successfully.
- Attaching this volume to the Datanode1 instance.
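For reference, the same create-and-attach steps can also be done from the AWS CLI instead of the console. The availability zone, volume ID, and instance ID below are placeholders, not the actual values from this setup:

```shell
# Create a 1 GiB EBS volume in the same availability zone as the
# DataNode instance and tag it with the name Datanode1
aws ec2 create-volume --size 1 --availability-zone ap-south-1a \
    --volume-type gp2 \
    --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=Datanode1}]'

# Attach the new volume to the Datanode1 EC2 instance as /dev/xvdf
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/xvdf
```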
- Creating a 500 MB partition on the new volume.
- The partition is created as /dev/xvdf1.
- Formatting the partition /dev/xvdf1.
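A sketch of the partition-and-format commands, assuming the attached volume shows up as /dev/xvdf (run as root):

```shell
# Create a 500 MB primary partition. The here-doc feeds fdisk's
# interactive prompts: n = new partition, p = primary, 1 = partition
# number, blank line = default first sector, +500M = size, w = write.
fdisk /dev/xvdf <<EOF
n
p
1

+500M
w
EOF

# Format the new partition with an ext4 filesystem
mkfs.ext4 /dev/xvdf1
```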
- To mount the storage, I created the /DN1 directory.
- After mounting the volume, /DN1 shows about 445 MB of usable space (filesystem overhead accounts for the difference from the 500 MB partition).
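The mount step above looks like this, assuming the partition was formatted as ext4:

```shell
# Create the mount point and mount the 500 MB partition on it
mkdir /DN1
mount /dev/xvdf1 /DN1

# Confirm the mount and its usable size (~445 MB after filesystem overhead)
df -h /DN1
```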
- Configuring the hdfs-site.xml and core-site.xml files.
After this, start the DataNode with the command hadoop-daemon.sh start datanode.
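A minimal sketch of the two configuration files, assuming Hadoop 1.x property names (which the hadoop-daemon.sh and hadoop dfsadmin commands in this walkthrough suggest); MASTER_IP and the port are placeholders for your cluster's values:

```xml
<!-- hdfs-site.xml on Datanode1: point the DataNode's data directory
     at the mounted folder so only /DN1's capacity is contributed -->
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/DN1</value>
  </property>
</configuration>

<!-- core-site.xml on Datanode1: tell the DataNode where the
     Master (NameNode) runs -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://MASTER_IP:9001</value>
  </property>
</configuration>
```

On Hadoop 2.x and later, the equivalent property names are dfs.datanode.data.dir and fs.defaultFS.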
SETTING DATANODE-2
- Similarly, creating one more EBS (Elastic Block Store) volume for Datanode2, which will share its storage with the Master Node.
After creating it, we can see that the volume named Datanode2 is created successfully.
- Attaching the volume to the Datanode2 instance.
- After attaching the volume, connect to the Datanode2 EC2 instance over remote login and verify that an additional 1 GB EBS volume is attached.
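Once logged in, the new device can be confirmed with a quick block-device listing:

```shell
# List block devices; the new 1 GB EBS volume should appear
# as an additional disk (typically /dev/xvdf)
lsblk

# Alternatively, as root, list partition tables for all disks
fdisk -l
```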
- Creating a 400 MB partition on the new volume.
- The partition is created as /dev/xvdf1.
- Formatting the partition /dev/xvdf1.
- To mount the storage, I created the /DN2 directory.
- After mounting the volume, /DN2 shows about 354 MB of usable space.
- Configuring the hdfs-site.xml and core-site.xml files in Datanode2, just as in Datanode1. After this, start the DataNode with the command hadoop-daemon.sh start datanode.
MASTER SETUP:
- After this setup, check on the Master Node whether the DataNodes have connected successfully and how much storage each one shares with the Master Node.
- The hadoop dfsadmin -report command lists the connected DataNodes and the storage each contributes to the Master.
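Run on the Master, the report command looks like:

```shell
# List the DataNodes registered with the NameNode and the
# configured capacity each one contributes to HDFS
hadoop dfsadmin -report
```

On Hadoop 2.x and later, the equivalent command is hdfs dfsadmin -report.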
- The report shows that Datanode1 shares about 476 MB of storage and Datanode2 about 379 MB.
CONCLUSION: By partitioning the attached volumes, the DataNodes share only a limited amount of their storage with the Master Node.