Introduction
Moving a volume group from one system to another may be a requirement in some cases. Let’s see how to move a volume group, including its multiple disks, from one server to another without missing any files.
Below is the mount point that we need to make available on the other system.
[root@prod-srv-01 ~]# df -hP /data01/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_data01-lvdata 20G 15G 5.9G 71% /data01
[root@prod-srv-01 ~]#
It contains files as well.
[root@prod-srv-01 ~]# ls -lthr /data01/
total 14G
-rw-r--r--. 1 root root 7.0G May 8 02:36 CentOS.iso
-rw-r--r--. 1 root root 7.0G May 8 02:37 RHEL.iso
[root@prod-srv-01 ~]#
At the end of this guide, we should have all of these files on the other server.
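To prove at the end that no files went missing, we can record checksums on the source before the move and compare them on the destination afterwards. A minimal sketch (the checksum file path is just an example):

# md5sum /data01/*.iso > /root/data01.md5

Copy /root/data01.md5 to the destination server (for example with scp) and, once the filesystem is mounted there, verify with:

# md5sum -c /root/data01.md5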
List and Verify the Disks
We have two disks under the vg_data01 volume group. Each disk is 10 GB in size, and the logical volume size is 20 GB. So, while moving the volume group, we need to move both disks from the current system to the other system.
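For reference, a layout like this could have been created with the commands below; this is only a sketch, assuming /dev/sdb and /dev/sdc are two blank 10 GB disks and an XFS filesystem:

# pvcreate /dev/sdb /dev/sdc
# vgcreate vg_data01 /dev/sdb /dev/sdc
# lvcreate -n lvdata -l 100%FREE vg_data01
# mkfs.xfs /dev/mapper/vg_data01-lvdata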
List the current PVs, VGs, and LVs that we plan to move.
[root@prod-srv-01 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 cl lvm2 a-- <19.00g 0
/dev/sdb vg_data01 lvm2 a-- <10.00g 0
/dev/sdc vg_data01 lvm2 a-- <10.00g 0
[root@prod-srv-01 ~]#
[root@prod-srv-01 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
cl 1 2 0 wz--n- <19.00g 0
vg_data01 2 1 0 wz--n- 19.99g 0
[root@prod-srv-01 ~]#
[root@prod-srv-01 ~]#
[root@prod-srv-01 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root cl -wi-ao---- <17.00g
swap cl -wi-ao---- 2.00g
lvdata vg_data01 -wi-ao---- 19.99g
[root@prod-srv-01 ~]#
Most importantly, we need to know the list of physical disks used by the specific volume group and logical volume.
[root@prod-srv-01 ~]# vgs -o +devices
VG #PV #LV #SN Attr VSize VFree Devices
cl 1 2 0 wz--n- <19.00g 0 /dev/sda2(0)
cl 1 2 0 wz--n- <19.00g 0 /dev/sda2(512)
vg_data01 2 1 0 wz--n- 19.99g 0 /dev/sdb(0)
vg_data01 2 1 0 wz--n- 19.99g 0 /dev/sdc(0)
[root@prod-srv-01 ~]#
[root@prod-srv-01 ~]# lvs -o +devices /dev/mapper/vg_data01-lvdata
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
lvdata vg_data01 -wi-a----- 19.99g /dev/sdb(0)
lvdata vg_data01 -wi-a----- 19.99g /dev/sdc(0)
[root@prod-srv-01 ~]#
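When a volume group spans many disks, we can also filter the physical volumes by volume group name using the --select option; vg_name is a standard pvs reporting field:

# pvs --select vg_name=vg_data01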
Unmount and Deactivate Volume Group
Unmount the filesystem that is required on the other system.
# umount /data01/
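If the unmount fails with a “target is busy” error, something is still using the filesystem; fuser can show the processes holding it open:

# fuser -vm /data01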
Take the logical volume and volume group offline.
# lvchange -an /dev/mapper/vg_data01-lvdata
# vgchange -an vg_data01
- -a – activate
- -n – no
Instead of -a, we can use --activate as well.
[root@prod-srv-01 ~]# vgchange -an vg_data01
0 logical volume(s) in volume group "vg_data01" now active
[root@prod-srv-01 ~]#
Export/Moving a Volume Group
Export the volume group that you need on the other system.
[root@prod-srv-01 ~]# vgexport vg_data01
Volume group "vg_data01" successfully exported
[root@prod-srv-01 ~]#
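Before shutting down, we can confirm the export took effect on the source itself; the physical volumes should now carry the x (exported) flag in their attributes:

# pvs /dev/sdb /dev/sdc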
Once exported, the next step is performed after powering off the virtual or physical server.
Removing the Physical Disks
Shut down the virtual/physical machine and remove the right disks. Attach the disks to the other system where you need them.
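If downtime on the source is not possible, many hypervisors allow detaching disks while the guest is running. For a KVM guest, as an example, virsh could do it; this is a sketch with a hypothetical domain name, and the target names must match the guest’s actual disk targets:

# virsh detach-disk prod-srv-01 sdb --persistent
# virsh detach-disk prod-srv-01 sdc --persistent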
After mapping/attaching the disks, run pvscan. Here we should see the exported disks in the pvscan output.
[root@prod-srv-02 ~]# pvscan
PV /dev/sdb is in exported VG vg_data01 [<10.00 GiB / 0 free]
PV /dev/sdc is in exported VG vg_data01 [<10.00 GiB / 0 free]
PV /dev/sda2 VG cl lvm2 [<19.00 GiB / 0 free]
Total: 3 [<38.99 GiB] / in use: 3 [<38.99 GiB] / in no VG: 0 [0 ]
[root@prod-srv-02 ~]#
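If the newly attached disks do not show up on the destination, a SCSI bus rescan may help before running pvscan again; the host number varies per system:

# echo "- - -" > /sys/class/scsi_host/host0/scan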
The physical volumes of the exported volume group will have the attribute ax-.
[root@prod-srv-02 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 cl lvm2 a-- <19.00g 0
/dev/sdb vg_data01 lvm2 ax- <10.00g 0
/dev/sdc vg_data01 lvm2 ax- <10.00g 0
[root@prod-srv-02 ~]#
Import the Volume Group
Import the volume group by running the vgimport command with the exported volume group name.
[root@prod-srv-02 ~]# vgimport vg_data01
Volume group "vg_data01" successfully imported
[root@prod-srv-02 ~]#
[root@prod-srv-02 ~]# vgscan
Found volume group "vg_data01" using metadata type lvm2
Found volume group "cl" using metadata type lvm2
[root@prod-srv-02 ~]#
Activate the Volume Group
Now activate the volume group by running vgchange with the activate option.
[root@prod-srv-02 ~]# vgchange -ay vg_data01
1 logical volume(s) in volume group "vg_data01" now active
[root@prod-srv-02 ~]#
- -a – activate
- -y – yes
If you know the logical volume path, we can activate it by running lvchange as well.
# lvchange -ay /dev/mapper/vg_data01-lvdata
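To confirm the activation, check the logical volume attributes; the fifth character of the Attr field changes to a when the volume is active:

# lvs -o lv_name,lv_attr vg_data01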
Finally, verify and mount the filesystem.
Verify and mount the filesystem
After activating the volume group, verify by printing all the PVs, VGs, and LVs.
[root@prod-srv-02 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 cl lvm2 a-- <19.00g 0
/dev/sdb vg_data01 lvm2 a-- <10.00g 0
/dev/sdc vg_data01 lvm2 a-- <10.00g 0
[root@prod-srv-02 ~]#
[root@prod-srv-02 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
cl 1 2 0 wz--n- <19.00g 0
vg_data01 2 1 0 wz--n- 19.99g 0
[root@prod-srv-02 ~]#
[root@prod-srv-02 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root cl -wi-ao---- <17.00g
swap cl -wi-ao---- 2.00g
lvdata vg_data01 -wi-a----- 19.99g
[root@prod-srv-02 ~]#
Now we should see both disks under the same volume group and logical volume.
[root@prod-srv-02 ~]# vgs -o +devices
VG #PV #LV #SN Attr VSize VFree Devices
cl 1 2 0 wz--n- <19.00g 0 /dev/sda2(0)
cl 1 2 0 wz--n- <19.00g 0 /dev/sda2(512)
vg_data01 2 1 0 wz--n- 19.99g 0 /dev/sdb(0)
vg_data01 2 1 0 wz--n- 19.99g 0 /dev/sdc(0)
[root@prod-srv-02 ~]#
[root@prod-srv-02 ~]# lvs -o +devices /dev/mapper/vg_data01-lvdata
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
lvdata vg_data01 -wi-a----- 19.99g /dev/sdb(0)
lvdata vg_data01 -wi-a----- 19.99g /dev/sdc(0)
[root@prod-srv-02 ~]#
Let’s create a directory and mount the logical volume to verify the existing filesystem and files.
[root@prod-srv-02 ~]# mkdir /data01
[root@prod-srv-02 ~]#
[root@prod-srv-02 ~]# mount /dev/mapper/vg_data01-lvdata /data01/
[root@prod-srv-02 ~]#
[root@prod-srv-02 ~]# df -hP /data01/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_data01-lvdata 20G 15G 5.9G 71% /data01
[root@prod-srv-02 ~]#
[root@prod-srv-02 ~]# ls -lthr /data01/
total 14G
-rw-r--r--. 1 root root 7.0G May 8 02:36 CentOS.iso
-rw-r--r--. 1 root root 7.0G May 8 02:37 RHEL.iso
[root@prod-srv-02 ~]#
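To make the mount persistent across reboots on the new server, add an entry to /etc/fstab; a sketch assuming an XFS filesystem:

# echo '/dev/mapper/vg_data01-lvdata /data01 xfs defaults 0 0' >> /etc/fstab
# mount -a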
That’s it; we have successfully moved a volume group from one system to another.
Conclusion
We have moved a volume group and its underlying logical volumes from one server to another as per our business requirement. We will come up with more logical volume management guides in the future. Subscribe to our newsletter for future updates.
Hi Babin Lonston,
First of all, thank you for posting a very good explanation of VG movement from one server to another in a Linux environment.
Here in the real world, downtime is a real challenge.
So in the above documented steps, it says: “Once exported, the next step is performed after powering off the virtual or physical server.
Removing the Physical Disks
Shut down the virtual/physical machine and remove the right disks. Attach the disks to the other system where you need them.
After mapping/attaching the disks, run pvscan. Here we should see the exported disks in the pvscan output.”
It means that after shutting down the source server (the exporting server), at the SAN and VMware level they do the necessary deprovisioning from the source (e.g. server A) and provision it on the target (i.e. server B); only then can the exported VG be discovered at the target Linux OS level and imported back as-is (as per the above documented procedure)?
Does my interpretation hold correct? If it is incorrect, can you clarify if possible? (Even in the above example, the source LV’s mount point /data01 contains *.iso files.)
In short, I wish to get a basic clarification on the above article: do we need downtime on both the source (shut down source A after the export) and the target server (shut down target B, power it on, then import)?
Thanks & Regards
Saravanan S
Hi Saravanan Subbarayan,
Q // So in the above documented steps, it says “Once exported, the next step is performed after powering off the virtual or physical server.” //
Removing the physical disks can be done without shutting down the VM or physical server. I have not tried this myself, because I had the downtime to perform the activity.
# echo 1 > /sys/block/sdb/device/delete
Have a look at this: https://www.linuxsysadmins.com/how-to-remove-a-san-disk-or-storage-device-lun-from-linux-server/
Q // It means that after shutting down the source server (the exporting server), at the SAN and VMware level they do the necessary deprovisioning from the source (e.g. server A) and provision it on the target (i.e. server B); only then can the exported VG be discovered at the target Linux OS level and imported back as-is (as per the above documented procedure)? //
A // Yes, don’t delete the virtual disk from the VMware side. If it’s a SAN disk, don’t delete the LUN from the storage side. Just map the same disk to the destination server and follow the remaining steps.
Thanks & Regards
Babin Lonston