LVM

The Logical Volume Manager (LVM) is a volume manager available on Linux. Functionally, logical volumes are similar to traditional partitions but offer the following benefits:

  • Resizable volumes
  • Snapshots: point-in-time copies of data, useful for making consistent backups
  • Ability for a volume to span multiple disks

Cheat Sheet

  • Show physical volumes (e.g. disks): pvs
  • Show volume groups: vgs or vgdisplay
  • Create a volume group (with a name and backing device): vgcreate vg_name /dev/disk
  • Deactivate a volume group: vgchange -a n vg_name
  • Remove a volume group: vgremove vg_name
  • Show logical volumes: lvs or lvdisplay
  • Create a logical volume (% of free space in the VG with -l, or a specific size with -L): lvcreate -n lv_name -l 50%FREE vg_name or lvcreate -n lv_name -L 50G vg_name
  • Create a snapshot capable of holding 10G of deltas: lvcreate -s -n snap_lv_name --size 10G /dev/vg_name/lv_name
  • Resize a logical volume (+size to increase, -size to decrease): lvresize -L +10G /dev/vg_name/lv_name
  • Remove a logical volume: lvremove vg_name/lv_name

Not strictly LVM, but after resizing a volume, you will also need to resize the filesystem with one of these commands:

  • XFS: xfs_growfs / (takes the mount point)
  • EXT2/3/4: resize2fs /dev/vg_name/lv_name
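
Alternatively, lvresize and lvextend accept the -r/--resizefs flag, which resizes the underlying filesystem in the same step. A minimal sketch, using the same vg_name/lv_name placeholders as above:

## Grow the LV by 10G and resize its filesystem in one command
# lvresize -r -L +10G /dev/vg_name/lv_name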

Using LVM

[Diagram: Logical Volume Manager, showing disks to physical extents to volume groups to logical volumes]

LVM allows the creation of volumes which are functionally similar to partitions. This is done by allocating storage into a volume group (VG), which can then be divided into separate logical volumes (LV).

Unlike a traditional partition scheme, LVM breaks the backing storage (called physical volumes) into many small chunks called physical extents (PE), which are typically 4 MB in size. These physical extents are mapped to logical extents in their respective logical volumes and can be added or removed after creation, allowing volumes to grow or shrink as needed. An added benefit of this design is that it enables copy-on-write (COW), which allows for quick and space-efficient snapshotting.
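
The extent size and extent counts for a volume group can be inspected with vgdisplay. A quick sketch, using the ug_vg volume group that is created in the next section:

## Show the extent size and allocation for a volume group
# vgdisplay ug_vg | grep PE
  PE Size               4.00 MiB
  Total PE              214175
  Alloc PE / Size       0 / 0
  Free  PE / Size       214175 / 836.62 GiB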

Creating a volume group and logical volume

To quickly get a filesystem up and running using LVM, we will need to create a volume group and then a logical volume within the volume group.

# vgcreate ug_vg /dev/sdb
  Volume group "ug_vg" successfully created
  
  
# vgdisplay ug_vg
  --- Volume group ---
  VG Name               ug_vg
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               836.62 GiB
  PE Size               4.00 MiB
  Total PE              214175
  Alloc PE / Size       0 / 0
  Free  PE / Size       214175 / 836.62 GiB
  VG UUID               WeggQU-zQf3-j3mk-OAFd-iUCQ-EDTo-ezxhcb

# lvcreate -n ug -l 90%FREE ug_vg
  Logical volume "ug" created.

## After creating a LV, the available free PEs decrease as expected
# vgdisplay ug_vg
...
  VG Size               836.62 GiB
  PE Size               4.00 MiB
  Total PE              214175
  Alloc PE / Size       192757 / 752.96 GiB
  Free  PE / Size       21418 / 83.66 GiB

## LVM volumes are placed under /dev with the VG name and LV name.
# mkfs.ext4 /dev/ug_vg/ug

## If a VG needs to be renamed, use the vgrename command:
# vgrename uga ug_vg
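
Note that vgcreate will initialize /dev/sdb as an LVM physical volume automatically. If you prefer to do this explicitly, or want to grow the volume group with another disk later, pvcreate and vgextend handle it. A short sketch, with /dev/sdc as a hypothetical second disk:

## Initialize another disk as a physical volume and add it to the existing volume group
# pvcreate /dev/sdc
# vgextend ug_vg /dev/sdc
# pvs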

Mounting and unmounting a volume

LVM volumes are exposed as block devices. You will need to format them with a filesystem of your choice and then mount them like any other device.

Continuing with the example above, we can mount /dev/ug_vg/ug as an ext4 filesystem with the usual mount command:

# mount /dev/ug_vg/ug /mnt/ug
# umount /mnt/ug

Similarly, you can add the filesystem to the /etc/fstab file so it is mounted automatically on start up.
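
An /etc/fstab entry for this volume might look like the following line (the mount point and options here are just illustrative):

/dev/ug_vg/ug  /mnt/ug  ext4  defaults  0 2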

Using snapshots

A snapshot creates a point-in-time copy of a volume. Unlike ZFS, an LVM snapshot is itself a separate volume: it appears as its own block device and needs to be mounted separately. There is no need to worry about capturing an unclean filesystem when snapshotting a busy volume, because LVM syncs and quiesces the filesystem during the snapshot process, ensuring that the snapshot copy is consistent and clean.

Creating, listing, and deleting snapshots

To create a new snapshot of an existing volume, use the lvcreate command with the -s or --snapshot option. You will need to specify the size of the snapshot (more on this below). Once created, the snapshot will appear in lvs with an s attribute and can be removed with the lvremove command.

# lvcreate --size 1G -s -n logicalvol_snap /dev/mapper/volgroup-logicalvol
  Logical volume "logicalvol_snap" created.

# lvs
  LV              VG       Attr       LSize   Pool Origin     Data%  Meta%  Move Log Cpy%Sync Convert
  logicalvol      volgroup owi-aos--- 220.00g
  logicalvol_snap volgroup swi-a-s---   1.00g      logicalvol 29.54
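
## The snapshot is itself a block device and can be mounted (read-only here) to copy data out of it.
## /mnt/snap is a hypothetical mount point; an XFS snapshot also needs -o nouuid to mount alongside its origin.
# mount -o ro /dev/volgroup/logicalvol_snap /mnt/snap
# umount /mnt/snap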

# lvremove -f /dev/volgroup/logicalvol_snap
  Logical volume "metrics_snap" successfully removed

Snapshot volume size

A new snapshot takes nearly no additional space. Any changes made to the origin volume or to the snapshot volume need to be tracked in order for the snapshot to function. This is done transparently through LVM's copy-on-write feature, but unlike ZFS, the space this copy-on-write requires is deducted from the snapshot volume rather than from the storage pool as a whole. In other words, as the source volume or snapshot volume changes, the space used by the snapshot volume grows, up to a maximum of the source volume size.

On a busy filesystem, or for snapshots that you intend to keep around for a long time, the snapshot size should match the source volume size. The snapshot will be invalidated and become unusable if the snapshot volume runs out of space.
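
You can watch how full a snapshot is getting via the Data% column in lvs (visible in the listing above), or query it directly; the names below are from the earlier example:

# lvs -o lv_name,data_percent volgroup/logicalvol_snap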

If you have a snapshot that is about to run out of space, increase the snapshot volume size with lvextend:

# lvextend -L +10G /dev/volgroup/logicalvol_snap

You may also configure LVM to automatically extend snapshot volumes by editing /etc/lvm/lvm.conf and setting the snapshot_autoextend_threshold value.
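
A minimal sketch of the relevant settings (the values are illustrative; both options live in the activation section of /etc/lvm/lvm.conf):

activation {
    # Once a snapshot is 70% full...
    snapshot_autoextend_threshold = 70
    # ...grow it by 20% of its current size
    snapshot_autoextend_percent = 20
}

Automatic extension relies on the LVM monitoring daemon (dmeventd, typically started by the lvm2-monitor service) being active.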

Tasks

Recovering lost volumes

# vgscan                                  # Generates /etc/lvm/backup/VolGroup01
# cd /etc/lvm/backup
# vgcfgrestore -f VolGroup01 VolGroup01   # Restore VG using /etc/lvm/backup/VolGroup01
# vgscan                                  # verifies volume groups
# pvscan                                  # Looks for physical volumes
# vgchange VolGroup01 -a y                # Activates the volume group
# lvscan                                  # Looks for logical volumes
# mount /dev/VG01/LV00 /mnt/foo           # LVM volumes should be in /dev/vg/lv and should be mountable

Use vgscan to rescan for volume groups. This will create a file of the same name as the volume group in /etc/lvm/backup/. To restore a volume group, use the vgcfgrestore command. With the volume groups restored, use pvscan to look for physical volumes. Activate the volume group using vgchange VolGroup01 -a y.

With the volume activated, look for logical volumes using lvscan. Logical volumes should show up in /dev/VolGroup01/LogVol00 for example and can be mounted.
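
The metadata backups under /etc/lvm/backup/ are normally written automatically whenever LVM metadata changes, but you can also write one explicitly with vgcfgbackup; a short sketch, reusing the VolGroup01 name from above:

# vgcfgbackup VolGroup01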

Expanding an existing volume

You can expand a volume by running the following set of commands (or use it as a script):

#!/bin/bash

## Assuming partition 3 in /dev/vda contains the LVM volume group
/usr/bin/growpart /dev/vda 3
/usr/sbin/pvresize -y -q /dev/vda3
/usr/sbin/lvresize -y -q -r -l +100%FREE /dev/mapper/*root
/usr/sbin/xfs_growfs /

In more detail, expanding a LVM volume requires the following steps (replace sda1 with your actual device):

  1. Expand the virtual disk and verify that the OS sees the larger disk by running lsblk. If the OS isn't aware of the disk's updated size, rescan the device (see the sketch after this list) or reboot the system.
  2. Resize the partition using growpart, specifying both the device and the partition number. Eg: growpart /dev/sda 1. You can alternatively use fdisk to delete and re-create the partition if you don't have growpart, but this is less convenient.
  3. Resize the LVM physical volume with pvresize /dev/sda1. Confirm the resize by checking the available free space with pvscan.
  4. Resize the logical volume with lvresize. Eg: add 1GB with lvresize -L +1G /dev/mapper/*root, or expand into all the free space with lvresize -l +100%FREE /dev/mapper/*root.
  5. Resize the underlying filesystem using xfs_growfs (XFS), resize2fs (EXT), or some other utility appropriate for your filesystem.
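
For step 1, you can usually make the kernel notice the larger disk without a reboot by asking it to re-read the device's capacity. A sketch for a SCSI or virtio-scsi disk (sda here is only an example device name):

## Rescan the device's size, then confirm with lsblk
# echo 1 > /sys/class/block/sda/device/rescan
# lsblk /dev/sda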

Here's an example run-through of expanding a virtual disk on a VM:

## Expand the partition /dev/sda2 using fdisk, then do the following.
# pvresize /dev/sda2
  Physical volume "/dev/sda2" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
  
## Verify
# pvscan
  PV /dev/sda2   VG centos          lvm2 [<39.00 GiB / 20.00 GiB free]
  Total: 1 [<39.00 GiB] / in use: 1 [<39.00 GiB] / in no VG: 0 [0   ]
  
## Resize the LVs as required
# lvresize -L +1G /dev/centos/root
  Size of logical volume centos/root changed from <17.00 GiB (4351 extents) to <18.00 GiB (4607 extents).
  Logical volume centos/root successfully resized.

# lvs
  LV   VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root centos -wi-ao---- <18.00g
  swap centos -wi-ao----   2.00g

XFS

For XFS filesystems, after expanding the volume, resize the filesystem with xfs_growfs:

## Expand your filesystems. If using XFS:
# xfs_growfs /
meta-data=/dev/mapper/centos-root isize=512    agcount=4, agsize=1113856 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=4455424, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 4455424 to 4717568
# xfs_info /
meta-data=/dev/mapper/centos-root isize=512    agcount=5, agsize=1113856 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=4717568, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

EXT2, EXT3, EXT4

For the EXT type filesystems, use resize2fs to expand the filesystem.

## If using ext4, use resize2fs
# resize2fs /dev/mapper/vg-lv

Shrinking an existing logical volume

Some filesystems (notably XFS) don't support shrinking at all, and often the simplest way to shrink a logical volume is to destroy and re-create the volume and the underlying filesystem.
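
For filesystems that do support offline shrinking, such as ext4, a destroy-and-recreate isn't strictly necessary. A rough sketch, assuming an ext4 volume at /dev/vg_name/lv_name being shrunk to 1G (take a backup first; shrinking is far less forgiving than growing):

## Unmount, check the filesystem, then let lvresize shrink the filesystem and the LV together
# umount /mountpoint
# e2fsck -f /dev/vg_name/lv_name
# lvresize -r -L 1G /dev/vg_name/lv_name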

As an example, I would like to size down an Oracle Linux 'oled' volume from 10G to 1G. This is what I did:

## Size before
# df -h
/dev/mapper/ocivolume-oled   10G  159M  9.9G   2% /var/oled

## Backup and unmount
# tar -czpf oled.tar.gz /var/oled
# umount /var/oled

## Remove
# lvremove /dev/mapper/ocivolume-oled
Do you really want to remove active logical volume ocivolume/oled? [y/n]: y
  Logical volume "oled" successfully removed

# lvcreate -L 1G -n oled ocivolume
WARNING: xfs signature detected on /dev/ocivolume/oled at offset 0. Wipe it? [y/n]: y
  Wiping xfs signature on /dev/ocivolume/oled.
  Logical volume "oled" created.

# mkfs.xfs  /dev/mapper/ocivolume-oled
meta-data=/dev/mapper/ocivolume-oled isize=512    agcount=4, agsize=65536 blks
...

## Remount and restore data
# mount /dev/mapper/ocivolume-oled  /var/oled/
# tar -xzpf /oled.tar.gz

## Size after
# df -h
/dev/mapper/ocivolume-oled 1014M   93M  922M  10% /var/oled

Troubleshooting

LVM warns with "open failed: No medium found"

When running any LVM related commands, you may get the following warning messages complaining about "No medium found":

# lvs
  /dev/sdb: open failed: No medium found
  /dev/sdc: open failed: No medium found
  /dev/sdb: open failed: No medium found
  /dev/sdc: open failed: No medium found
  LV       VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ondemand VG_OOD -wi-ao---- 25.00g
  root     VG_OOD -wi-ao---- 10.00g
  swap     VG_OOD -wi-ao----  4.00g
  var      VG_OOD -wi-ao---- 50.00g

This is most likely because your system has removable devices, such as an SD card reader or a Dell iDRAC virtual device, that present a block device to the system with no media inserted. You can verify this by running ls /dev/sd* or ls /dev/disk/by-id/* and checking whether the device is listed. In the case above, /dev/sdb and /dev/sdc were virtual iDRAC devices.

# ls -al /dev/disk/by-id/
...
lrwxrwxrwx 1 root root   9 Jun  1 12:00 usb-iDRAC_LCDRIVE_20120430-0:0 -> ../../sdb        
lrwxrwxrwx 1 root root   9 Jun  1 12:00 usb-iDRAC_Virtual_CD_20120430-0:0 -> ../../sr1     
lrwxrwxrwx 1 root root   9 Jun  1 12:00 usb-iDRAC_Virtual_Floppy_20120430-0:1 -> ../../sdc
...

As a workaround, you may stop LVM commands from scanning these devices by adding a filter to the devices section of /etc/lvm/lvm.conf that rejects (r) the iDRAC devices.

filter =  [ "r|/dev/disk/by-id/usb-iDRAC.*|" ]
