LVM on top of multipath

If you use LVM on top of multipathed storage, you should ensure that the LVM setup does not access the storage devices directly (single-path), bypassing multipath!

Example:

# pvs
  WARNING: Not using device /dev/mapper/clvol01 for PV uZSiMN-jjxu-OlZY-NfPO-hk22-6upj-seUeHj.
  WARNING: PV uZSiMN-jjxu-OlZY-NfPO-hk22-6upj-seUeHj prefers device /dev/sdb because device is used by LV.
  PV                  VG         Fmt  Attr PSize PFree
  /dev/mapper/clvol02 vg_clvol02 lvm2 a--  ...
  /dev/sdb            vg_clvol01 lvm2 a--  ...

Here vg_clvol02 correctly uses the multipath device, but vg_clvol01 wrongly uses direct single-path access via the block device sdb, causing the WARNING (and the warning is just the tip of the iceberg of the problems you can get ...)

# multipath -ll
clvol02 (...) dm-11 ...
size=... features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 17:0:0:2 sdc 8:32 active ready running
  `- 18:0:0:2 sde 8:64 active ready running
clvol01 (...) dm-9 ...
size=... features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 18:0:0:1 sdd 8:48 active ready running
  `- 17:0:0:1 sdb 8:16 active ready running

The problems you can get with this inconsistent setup are, for example, errors like the following in a Proxmox cluster:

TASK ERROR: unable to create VM 116 - lvcreate 'vg_clvol01/vm-116-disk-0' error: Cannot update volume group vg_clvol01 with duplicate PV devices.

or

TASK ERROR: can't activate LV '/dev/vg_clvol02/vm-103-disk-0': Cannot activate LVs in VG vg_clvol02 while PVs appear on duplicate devices.

And the LVM setup using single-path access will not benefit from the intended multipath access (i.e. loss of redundancy, HA, etc.)!

So, how do we fix this?

Well, we need to ensure that LVM does not use the direct single-path devices, but prefers the multipath devices...

This can be configured via the device filter settings in /etc/lvm/lvm.conf. In the devices section you can set two filter options: filter and global_filter (the latter cannot be overridden from the command line).

The syntax is quite easy: a list of regular expressions used to accept or reject block device path names. Each regex is delimited by a vertical bar '|' (or any character) and is preceded by 'a' to accept the path, or by 'r' to reject the path. The first regex in the list to match the path is used, producing the 'a' or 'r' result for the device.
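For example, a minimal whitelist-style filter (purely illustrative, not taken from the setup above) that accepts everything under /dev/mapper and rejects all other paths would look like this:

filter = [ "a|/dev/mapper/.*|", "r|.*|" ]

If a path matches none of the regexes, it is accepted by default, which is why such a whitelist ends with the catch-all reject "r|.*|".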

So, choose the option you prefer (devices/global_filter vs. devices/filter) and configure it so that only the multipath devices are used and the direct single-path devices are ignored.
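Both options live in the devices { } section of /etc/lvm/lvm.conf. As a rough sketch (the patterns are placeholders, to be replaced with your own), the relevant part looks like this:

devices {
    # ... other settings ...
    global_filter = [ "a|<multipath-devices>|", "r|<singlepath-devices>|" ]
}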

Considering the example above, I would set / append the following to the filter or global_filter option:

global_filter = [ ..., "a|/dev/mapper/clvol.*|", "a|/dev/sda[1-9]|", "r|/dev/sd[b-z].*|" ]

Explanation:

- "a|/dev/mapper/clvol.*|" accepts the multipath devices (clvol01, clvol02, ...)
- "a|/dev/sda[1-9]|" accepts the partitions on sda, which is not part of the multipath setup in this example (typically the local system disk)
- "r|/dev/sd[b-z].*|" rejects all remaining direct single-path block devices (sdb, sdc, sdd, sde, ...)

So, as you can see, the fix is quite specific to your setup, so please use the examples here just as a hint and adjust them according to your needs!
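If you want to preview the effect before touching the file, most LVM commands also accept a --config override on the command line, so something along these lines should show what pvs reports with the filter from above in place:

# pvs --config 'devices { global_filter = [ "a|/dev/mapper/clvol.*|", "a|/dev/sda[1-9]|", "r|/dev/sd[b-z].*|" ] }'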

After applying your changes to /etc/lvm/lvm.conf, run vgscan or pvscan to make LVM re-read the devices, and check the result using pvs. Depending on your setup, you might even need to regenerate your initrd!
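On a Debian-based system like Proxmox VE (assuming initramfs-tools is used; adjust accordingly for dracut or others), the whole sequence could look roughly like this:

# vgscan
# pvs
# update-initramfs -u -k all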

Reboot and enjoy your fixed system :-)


linux
lvm
multipath
proxmox