Multipath on Linux and EMC CLARiiON
Hi,
today I want to post here my experience configuring multipath on CentOS 5.5 against an EMC CLARiiON box.
Introduction
Native multipathing on GNU/Linux is made up of several components, kernel modules and userspace tools, that let you manage multiple SAN paths to an enterprise storage array. This tutorial explains how to configure CentOS 5.5 x86_64 to connect to an EMC CLARiiON CX4-480 box using multipath.
Architecture
Kernel components:
- hardware handler (manages failover/failback on particular hardware)
- generic device-mapper modules (dm_*)
- multipath module (dm_multipath)
Userspace components:
- multipath utility (manages paths)
- multipathd daemon (monitors paths and applies failback rules)
Configuration
Multipath configuration is done exclusively via the /etc/multipath.conf configuration file. For a comprehensive review of this file please see http://sources.redhat.com/lvm2/wiki/MultipathUsageGuide. The next paragraphs assume you are at least a little familiar with the structure of the multipath configuration file.
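As a reminder of that overall structure, a minimal /etc/multipath.conf skeleton looks like this (section names only; the comments mark where the stanzas built in the next paragraphs go):

```
# /etc/multipath.conf - overall structure (sketch)
defaults {
        # global settings
}
blacklist {
        # devices multipath must ignore
}
devices {
        device {
                # per-storage-array settings
        }
}
multipaths {
        multipath {
                # optional per-LUN settings (alias, wwid)
        }
}
```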
CentOS 5.5
CentOS 5.5 (aka RHEL 5.x) blacklists all devices by default. This means that the multipath command will report no multipath devices connected.
In order to activate device scanning for multipath you should comment out the default blacklist section at the top of the config file:
# Blacklist all devices by default. Remove this to enable multipathing
# on the default devices.
#blacklist {
#        devnode "*"
#}
Then add a new blacklist section (I want multipath to scan only a few devices):
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^cciss!c[0-9]d[0-9]*"
}
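If you want to double-check what these devnode regular expressions actually match, a quick sketch is to run some device names through grep with the same patterns (the device names below are just examples):

```shell
#!/bin/sh
# Sketch: test the blacklist devnode patterns against sample device names.
# These are the same regular expressions used in /etc/multipath.conf above.
is_blacklisted() {
    echo "$1" | grep -Eq '^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*' && return 0
    echo "$1" | grep -Eq '^hd[a-z]' && return 0
    echo "$1" | grep -Eq '^cciss!c[0-9]d[0-9]*' && return 0
    return 1
}

for dev in sda sdb hda ram0 loop1 dm-2 'cciss!c0d0'; do
    if is_blacklisted "$dev"; then
        echo "$dev: blacklisted"
    else
        echo "$dev: kept"
    fi
done
```

With the list above only sda and sdb stay visible to multipath; everything else is blacklisted.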
Next we add a dedicated configuration stanza for EMC storage:
devices {
        # EMC CLARiiON specific configuration
        device {
                vendor                  "DGC"
                product                 "*"
                product_blacklist       "LUNZ"
                prio_callout            "/sbin/mpath_prio_emc /dev/%n"
                path_grouping_policy    group_by_prio
                features                "1 queue_if_no_path"
                failback                immediate
                hardware_handler        "1 alua"
        }
}
Please note that without this stanza, multipath will use the default settings from
/usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.defaults
Finally, activate the multipathd daemon using the following commands:
# chkconfig multipathd on
# service multipathd start
It is also recommended to recreate the initrd:
# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
Check multipath command output:
# multipath -ll
mpath2 (36006016047c022002a3cc7a948afde11) dm-9 DGC,RAID 5
[size=80G][features=1 queue_if_no_path][hwhandler=1 alua][rw]
\_ round-robin 0 [prio=100][enabled]
 \_ 1:0:1:1 sdac 65:192 [active][ready]
 \_ 0:0:0:1 sdb  8:16   [active][ready]
\_ round-robin 0 [prio=20][enabled]
 \_ 0:0:1:1 sdk  8:160  [active][ready]
 \_ 1:0:0:1 sdt  65:48  [active][ready]
mpath1 (36006016047c022000cb4a4bd48afde11) dm-8 DGC,RAID 5
[size=30G][features=1 queue_if_no_path][hwhandler=1 alua][rw]
\_ round-robin 0 [prio=100][enabled]
 \_ 1:0:1:0 sdab 65:176 [active][ready]
 \_ 0:0:0:0 sda  8:0    [active][ready]
\_ round-robin 0 [prio=20][enabled]
 \_ 0:0:1:0 sdj  8:144  [active][ready]
 \_ 1:0:0:0 sds  65:32  [active][ready]
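With many LUNs it can be handy to count the usable paths per device instead of reading the output by eye. A small sketch (here parsing a saved copy of output like the above through a here-document; on a live system you would pipe multipath -ll straight into awk):

```shell
#!/bin/sh
# Sketch: count the paths of each multipath device from `multipath -ll`
# output. The sample output is embedded in a here-document; on a real
# system pipe the command output into awk instead.
paths=$(awk '
    /^mpath/ { name = $1 }                          # device header line
    /[0-9]+:[0-9]+:[0-9]+:[0-9]+ sd/ { n[name]++ }  # host:channel:id:lun path line
    END { for (d in n) print d ": " n[d] " paths" }
' <<'EOF'
mpath2 (36006016047c022002a3cc7a948afde11) dm-9 DGC,RAID 5
[size=80G][features=1 queue_if_no_path][hwhandler=1 alua][rw]
\_ round-robin 0 [prio=100][enabled]
 \_ 1:0:1:1 sdac 65:192 [active][ready]
 \_ 0:0:0:1 sdb 8:16 [active][ready]
\_ round-robin 0 [prio=20][enabled]
 \_ 0:0:1:1 sdk 8:160 [active][ready]
 \_ 1:0:0:1 sdt 65:48 [active][ready]
EOF
)
echo "$paths"   # prints: mpath2: 4 paths
```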
CLARiiON CX4-480
In order to work correctly you must register hosts in this way:
- Manually register host WWN (I don’t like naviagent)
- Register the initiators group as:
- type: Clariion Open
- Failover mode: 4
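A registration along these lines can also be scripted with naviseccli instead of the Navisphere GUI. The sketch below uses a hypothetical WWN and SP hostname, defaults to only printing the command (DRY_RUN=1), and should be checked against the EMC documentation for your array before use:

```shell
#!/bin/sh
# Sketch (hypothetical WWN and SP hostname): register an initiator path
# with failover mode 4 (ALUA) via naviseccli. DRY_RUN defaults to 1 and
# only prints the command; set DRY_RUN=0 to really execute it.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

HBA_WWN="50:01:43:80:11:22:33:44:50:01:43:80:11:22:33:45"  # hypothetical
SP="spa.example.com"                                        # hypothetical

run naviseccli -h "$SP" storagegroup -setpath \
    -hbauid "$HBA_WWN" -sp a -spport 0 \
    -failovermode 4 -arraycommpath 1 -o
```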
Cluster
If you are configuring several nodes in a CentOS/RHEL cluster to access the same storage, you can configure one node and replicate the configuration to the others.
Files to replicate are:
- /etc/multipath.conf (main config file)
- /var/lib/multipath/bindings (maps WWNs to device-mapper devices like mpath1, etc…)
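The replication of these two files can be sketched as a small script; the node names are hypothetical, and DRY_RUN defaults to 1 so that running it only prints the commands:

```shell
#!/bin/sh
# Sketch: push the multipath configuration to the other cluster nodes and
# restart multipathd there. Node names are hypothetical; DRY_RUN defaults
# to 1 and only prints the commands, set DRY_RUN=0 to really copy.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

NODES="node2 node3"  # hypothetical cluster node names
for n in $NODES; do
    run scp /etc/multipath.conf "root@$n:/etc/multipath.conf"
    run scp /var/lib/multipath/bindings "root@$n:/var/lib/multipath/bindings"
    run ssh "root@$n" service multipathd restart
done
```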
Maintenance
The multipathd daemon and the dm_multipath kernel module write path status to syslog: each time a status changes (eg. a path dies) they perform specific tasks and log the event.
To see the real-time status of multipath use the “multipath -ll” command.
The output has to be read as below:
mydev1 (3600a0b800011a1ee0000040646828cc5) dm-1 IBM,1815 FAStT
- mydev1: user defined alias name
- 3600a0b800011a1ee0000040646828cc5: WWID of the device
- dm-1: sysfs name
- IBM: vendor
- 1815 FAStT: product

[size=512M][features=1 queue_if_no_path][hwhandler=1 rdac]
- size=512M: size of the DM device
- features=1 queue_if_no_path: features supported
- hwhandler=1 rdac: hardware handler, if any

Path Group 1:
\_ round-robin 0 [prio=6][active]
- round-robin 0: path selector and repeat count
- prio=6: path group priority
- active: path group state

First path on Path Group 1:
\_ 29:0:0:1 sdf 8:80 [active][ready]
- 29:0:0:1: SCSI information: host, channel, scsi_id and lun
- sdf: Linux device name
- 8:80: major, minor numbers
- active: DM path state
- ready: physical path state

Second path on Path Group 1:
\_ 28:0:1:1 sdl 8:176 [active][ready]
Path Group 2:
\_ round-robin 0 [prio=0][enabled]
\_ 28:0:0:1 sdb 8:16 [active][ghost]
\_ 29:0:1:1 sdq 65:0 [active][ghost]
Useful commands
- multipath -v2: scans devices and reloads device maps (device mapper)
- multipath -v2 -d: as above but runs in “dry run” mode (does not update device maps)
- multipath -F: flushes all WWN <-> mpath<nnn> bindings
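After editing /etc/multipath.conf these commands are typically combined into a refresh cycle; a sketch (DRY_RUN defaults to 1 so that running it only prints the commands):

```shell
#!/bin/sh
# Sketch: typical refresh cycle after changing /etc/multipath.conf.
# DRY_RUN defaults to 1 (print only); set DRY_RUN=0 to execute for real.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

run multipath -F                  # flush all existing multipath maps
run multipath -v2 -d              # dry run: preview the maps to be created
run multipath -v2                 # rebuild the maps for real
run service multipathd restart    # restart the monitoring daemon
```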
Resources
- http://sources.redhat.com/lvm2/wiki/MultipathUsageGuide
- Host Connectivity Guide for Linux (P/N 300-003-865)
NOTE: This configuration seems not to detect newly added LUNs; even after scanning all HBAs and flushing multipath, the only way is (still) rebooting…
NOTE (Sep 03): The EMC Host Connectivity Guide recommends using:
prio_callout "/sbin/mpath_prio_alua /dev/%n"
but this configuration cannot detect newly added LUNs! Please use this line instead (already fixed in this article, thanks to deeb):
prio_callout "/sbin/mpath_prio_emc /dev/%n"
Hi. I’ve just been looking at the bugzilla record relating to this https://bugzilla.redhat.com/show_bug.cgi?id=482737. The recommendation in that record is to use the mpath_prio_emc prio_callout module rather than the mpath_prio_alua. I’ve only just started looking at this, so my info may be out of date I guess (this relates to 5.4 not 5.5).
Yes, it seems so. But the EMC documentation (updated 2010) recommends using mpath_prio_alua… I will run some tests in order to verify which prio_callout module should be used. Thanks.
I tested my multipath installation and you are right. Using mpath_prio_emc it works perfectly! Article updated. Many thanks deeb!
Thanks for the article! I’ve been fighting for quite some time trying to figure out from the EMC documentation how to get the combination of RHEL5.5, BFS and CX-4 to work, and you make it so simple! Lots of kudos!
Thanks, perfect article!
But maybe it is necessary to change, in the article,
hardware_handler “1 alua”
to
hardware_handler “1 emc” ??
hi mike, if you want to benefit from Asymmetric Logical Unit Access (eg. for balancing traffic over 2 HBAs) you should use “1 alua”.
On CentOS/RHEL 6, try replacing
prio_callout “/sbin/mpath_prio_emc /dev/%n”
with
prio emc