
Integrating LVM with Hadoop

 ⭕Task description 📄

7.1: Elasticity Task

a ) Integrating LVM with Hadoop and providing Elasticity to DataNode Storage


 ⭕Prerequisites of the given task📄

  • Set up the NameNode.

  • Set up the DataNode.

  • Attach a hard disk.

  • Be clear on the concepts of LVM.

 ⭕Some basic concepts📄


🔰What is a Volume Group..?

A volume group is a pool of disk space built from one or more physical volumes. From this volume group, we can carve out multiple logical volumes depending upon our requirements. Moreover, the space allocated to each logical volume can later be increased or decreased depending upon our needs.




🔰What is LVM..?


The concept of LVM is very similar to virtualization, and its working is also more or less the same. Normally, a physical device is divided into multiple partitions, and each partition has a file system installed on it to manage the space. With LVM, we can create as many logical storage volumes on top of a single storage device as we want, and the logical volumes thus created can be expanded or shrunk according to our growing or shrinking storage needs.

 

Implementation of the given task :-

  • Create a physical volume, a volume group, and a logical volume.

  • Format the logical volume and mount it on the directory shared by the DataNode.

  • Increase the storage shared by the DataNode on the fly using LVM.

  • Check the storage shared by the DataNode.
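The steps above can be sketched end to end as one command sequence (the disk names /dev/sdb and /dev/sdc, the names task7vg/task7lv, and the mount point /slave_lvm follow the walkthrough below; adjust them to your setup):

```shell
# 1. Turn the attached disk into an LVM physical volume
pvcreate /dev/sdb
# 2. Build a volume group on top of it
vgcreate task7vg /dev/sdb
# 3. Carve out a 6 GB logical volume
lvcreate --size 6G --name task7lv task7vg
# 4. Format it and mount it on the directory the DataNode shares
mkfs.ext4 /dev/task7vg/task7lv
mkdir /slave_lvm
mount /dev/task7vg/task7lv /slave_lvm
# 5. Later, grow it on the fly: add a disk, extend the VG, then the LV
pvcreate /dev/sdc
vgextend task7vg /dev/sdc
lvextend --size +4G /dev/task7vg/task7lv
```

Each step is explained in detail below.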

Step1:- Before creating the physical volume, we have to confirm whether the hard disk is attached. For this confirmation, we are going to run one command.


Command:-
lsblk

Here we can see that the sdb and sdc hard disks are attached successfully. Now we are going to create one physical volume from the sdb hard disk (in my case, the disk name is /dev/sdb). Command to create the physical volume:

Command :-
pvcreate /dev/sdb
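As a quick cross-check (not part of the original steps), pvs or pvdisplay lists the physical volumes LVM now knows about:

```shell
# List all LVM physical volumes with their size and volume group
pvs
# Or show full details for one device
pvdisplay /dev/sdb
```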

Here we can see that the physical volume is created successfully. Now we are going to create one volume group. Command to create the volume group:

command:-
vgcreate task7vg /dev/sdb

In my case, the name of the volume group is task7vg.


Now we are going to run one command to confirm whether the volume group was created successfully.

command:-
vgdisplay task7vg

Here we can see that the volume group "task7vg" is created successfully and its size is almost 8 GB, which is the size of our hard disk. Now we are going to create one logical volume of size 6 GB from this volume group. In my case, the name of the logical volume is "task7lv". Command to create the logical volume:

command:-
lvcreate --size 6G --name task7lv task7vg

Now we are going to check the complete details of the logical volume we just created; for this we have to give the path of the logical volume. The command is:

command:-
lvdisplay /dev/task7vg/task7lv

Here we can see that the logical volume task7lv is created successfully and its size is 6 GB. Step1 is now completed successfully; let's move to the next part.

Step2 :- Now we are going to format the logical volume, but first we make sure the device node is ready. The udevadm settle command waits for udev to finish processing device events, so that /dev/task7vg/task7lv is available:

command :-
udevadm settle

Now we are going to format it.

Command:-
mkfs.ext4 /dev/task7vg/task7lv

Now we are going to make one directory on which to mount that volume (in my case, the directory is /slave_lvm).

Command:-
mkdir /slave_lvm

Now we are going to mount the volume on the above-created directory.

Command:-
mount /dev/task7vg/task7lv /slave_lvm

Now we have to configure the hdfs-site.xml file. But before this, we are going to check how much storage is currently being shared with the NameNode. The command to check it is:

 Command:-
 hadoop dfsadmin -report

Here we can see that the configured capacity is around 47 GB. Now we are going to configure the hdfs-site.xml file of the DataNode.
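The DataNode's storage directory is set in hdfs-site.xml; a minimal sketch is shown below. The property dfs.datanode.data.dir is the standard name in Hadoop 2+ (older releases use dfs.data.dir), and /slave_lvm is the mount point created above:

```xml
<configuration>
  <property>
    <!-- DataNode stores its blocks under the LVM mount point -->
    <name>dfs.datanode.data.dir</name>
    <value>/slave_lvm</value>
  </property>
</configuration>
```

The DataNode daemon must be restarted for this change to take effect.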


Here we can see that the directory is now slave_lvm, which we created above. Now we are going to check the configured capacity again.


Here we can see that the configured capacity is now almost 6 GB, which is the size of the logical volume. Step2 is done successfully; let's move to the next part.
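On the DataNode itself, the mounted size can be cross-checked with df (a quick verification, not part of the original steps):

```shell
# Show size, used and available space of the mounted logical volume
df -h /slave_lvm
```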

Step3 :- Now we are going to increase the size by 4 GB on the fly.


But here we can see that the volume group has less than 2 GB free, which is not enough. So first we are going to create one more physical volume from the /dev/sdc hard disk.

Command:-
pvcreate /dev/sdc

Here we can see that the physical volume is created successfully. Now we are going to extend the above-created volume group ("task7vg").

Command:-
vgextend task7vg /dev/sdc

Here we can see that the volume group is extended successfully.

Command:-
vgdisplay task7vg

Now we can see that the volume group's size is almost 16 GB. Above, the volume group had almost 2 GB free, but now the free size is almost 10 GB. Now we are going to extend the logical volume by 4 GB, and the command is:

command:-
lvextend --size +4G /dev/task7vg/task7lv

Here we can see that the logical volume is extended successfully. Now we are going to see the complete info of the task7lv logical volume with one command:

Command :-
lvdisplay /dev/task7vg/task7lv

Here we can see that the logical volume is now of size 10 GB.
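Note that extending the logical volume alone does not grow the ext4 file system inside it; the file system must be resized as well before the extra space becomes usable. A minimal sketch (resize2fs can run online on a mounted ext4 volume):

```shell
# Grow the ext4 file system to fill the extended logical volume
resize2fs /dev/task7vg/task7lv
```

Alternatively, `lvextend --size +4G -r /dev/task7vg/task7lv` performs the extend and the file-system resize in one command.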


Now we are going to check the configured capacity again.

Command:-
hadoop dfsadmin -report

Here we can see that the configured capacity is now almost 10 GB. In this way we integrate LVM with Hadoop and provide elasticity to the DataNode storage.

Here our task is completed.

Thank you for visiting my article😊😊



 
 
 
