* Can anyone help me understand what is going wrong with a dm-multipath config I have?
From: Richard Sharpe @ 2015-11-18 16:55 UTC
To: device-mapper development
Hi folks,
I have a dm-multipath config with multipathd running. Here is the
defaults section of /etc/multipath.conf:
defaults {
user_friendly_names yes
path_checker tur
path_grouping_policy failover
failback immediate
find_multipaths yes
}
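(As a sanity check that the daemon actually loaded these settings --
the running config can differ from the file -- multipathd's interactive
interface can dump the merged configuration, assuming the -k interface
is available in this build:)
--------------------------
# multipathd -k"show config"
--------------------------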
Here is what the particular mpath device I am interested in looks like:
------------------
# multipath -l | head -10
mpathe (1NUTANIX NFS_3486039150627630854115933815080_43e9039f_11ae_4220) dm-3 NUTANIX,VDISK
size=1.0P features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| `- 9:0:0:2 sdal 66:80 active undef running
|-+- policy='round-robin 0' prio=0 status=enabled
| `- 10:0:0:2 sdao 66:128 active undef running
|-+- policy='round-robin 0' prio=0 status=enabled
| `- 12:0:0:2 sdav 66:240 active undef running
`-+- policy='round-robin 0' prio=0 status=enabled
`- 11:0:0:2 sdaw 67:0 active undef running
-------------------
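(Note that multipath -l prints the current topology without consulting
the path checkers; multipath -ll additionally polls the checkers, so it
is usually the more telling view when debugging failover:)
--------------------------
# multipath -ll mpathe
--------------------------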
Here are the iSCSI connection details (target IP addresses):
--------------------
# for f in /sys/class/iscsi_session/session*; do
>     echo "$(ls $f/device | tr '\n' ' ' | cut -d' ' -f4)" \
>          "$(cat $f/device/connection*/iscsi_connection/connection*/address)"
> done
target12:0:0 10.4.80.39
target6:0:0 10.4.80.40
target7:0:0 10.4.80.41
target8:0:0 10.4.80.42
target9:0:0 10.4.80.40
target10:0:0 10.4.80.42
target11:0:0 10.4.80.41
-----------------------
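(Roughly the same session-to-portal mapping, along with which sd
devices hang off each session, can be pulled from iscsiadm:)
--------------------------
# iscsiadm -m session -P 3 | grep -E 'Target:|Current Portal:|Attached scsi disk'
--------------------------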
Then I took down the target at target9:0:0 (10.4.80.40), and I see the
expected behavior from iSCSI; i.e., the affected devices show their
state as transport-offline.
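(That state is also visible directly in sysfs; for the path that went
away it looks like this:)
--------------------------
# cat /sys/block/sdal/device/state
transport-offline
--------------------------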
However, there should still be three working paths to the device, so I
ran sg_inq on the mpathe device:
--------------------------
# sg_inq /dev/mapper/mpathe
Both SCSI INQUIRY and fetching ATA information failed on /dev/mapper/mpathe
--------------------------
Can't reach the LUN.
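(To see how multipathd itself classifies each path at this point, its
interactive interface can list them, again assuming -k is available:)
--------------------------
# multipathd -k"show paths"
--------------------------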
So, then I looked at each of the devices under that mpathe device:
----------------------------
# sg_inq /dev/sdal
sg_inq: error opening file: /dev/sdal: No such device or address
# sg_inq /dev/sdao
standard INQUIRY:
PQual=0 Device_type=0 RMB=0 LU_CONG=0 version=0x05 [SPC-3]
[AERC=0] [TrmTsk=0] NormACA=0 HiSUP=1 Resp_data_format=2
SCCS=0 ACC=0 TPGS=1 3PC=0 Protect=0 [BQue=0]
EncServ=0 MultiP=0 [MChngr=0] [ACKREQQ=0] Addr16=0
[RelAdr=0] WBus16=0 Sync=0 [Linked=0] [TranDis=0] CmdQue=1
[SPI: Clocking=0x0 QAS=0 IUS=0]
length=64 (0x40) Peripheral device type: disk
Vendor identification: NUTANIX
Product identification: VDISK
Product revision level: 0
Unit serial number:
NFS_3486039150627630854115933815080_43e9039f_11ae_4220_915d_27cd8ab18b3c
# sg_inq /dev/sdav
standard INQUIRY:
PQual=0 Device_type=0 RMB=0 LU_CONG=0 version=0x05 [SPC-3]
[AERC=0] [TrmTsk=0] NormACA=0 HiSUP=1 Resp_data_format=2
SCCS=0 ACC=0 TPGS=1 3PC=0 Protect=0 [BQue=0]
EncServ=0 MultiP=0 [MChngr=0] [ACKREQQ=0] Addr16=0
[RelAdr=0] WBus16=0 Sync=0 [Linked=0] [TranDis=0] CmdQue=1
[SPI: Clocking=0x0 QAS=0 IUS=0]
length=64 (0x40) Peripheral device type: disk
Vendor identification: NUTANIX
Product identification: VDISK
Product revision level: 0
Unit serial number:
NFS_3486039150627630854115933815080_43e9039f_11ae_4220_915d_27cd8ab18b3c
# sg_inq /dev/sdaw
standard INQUIRY:
PQual=0 Device_type=0 RMB=0 LU_CONG=0 version=0x05 [SPC-3]
[AERC=0] [TrmTsk=0] NormACA=0 HiSUP=1 Resp_data_format=2
SCCS=0 ACC=0 TPGS=1 3PC=0 Protect=0 [BQue=0]
EncServ=0 MultiP=0 [MChngr=0] [ACKREQQ=0] Addr16=0
[RelAdr=0] WBus16=0 Sync=0 [Linked=0] [TranDis=0] CmdQue=1
[SPI: Clocking=0x0 QAS=0 IUS=0]
length=64 (0x40) Peripheral device type: disk
Vendor identification: NUTANIX
Product identification: VDISK
Product revision level: 0
Unit serial number:
NFS_3486039150627630854115933815080_43e9039f_11ae_4220_915d_27cd8ab18b3c
------------------------------------
As expected, the first device cannot be reached but the others can.
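(multipathd normally logs path-state transitions to syslog, so grepping
there should show whether it ever noticed the path go down; the log
location below is distro-dependent:)
--------------------------
# grep multipathd /var/log/messages | tail -20
--------------------------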
So, my questions are:
Why does dm-multipath not switch to using the alternate paths?
Do I have a configuration problem? Have others seen this issue?
Do I need to provide special configuration for these weird SCSI
devices we are using?
--
Regards,
Richard Sharpe
(How can one dispel sorrow? Only with Du Kang's wine. -- Cao Cao)
* Re: Can anyone help me understand what is going wrong with a dm-multipath config I have?
From: Mauricio Faria de Oliveira @ 2016-01-25 13:23 UTC
To: dm-devel
On 11/18/2015 02:55 PM, Richard Sharpe wrote:
> Why does dm-multipath not switch to using the alternate paths?
I seem to recall one case where the iSCSI-related timeouts had not
been lowered for multipathing, so failover took more time than
expected (falling back to the default of 120 seconds, IIRC).
This is documented in the open-iscsi README, section 8 ("Advanced
Configuration"), which includes topics for multipath configurations.
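As a rough sketch, the knob in question is replacement_timeout; the
value 15 below is purely illustrative, not a recommendation for this
particular setup, and <targetname>/<portal> are placeholders:
--------------------------
# /etc/iscsi/iscsid.conf -- applies to sessions logged in after the change
node.session.timeo.replacement_timeout = 15   # default is 120

# For an existing node record, the same setting via iscsiadm:
# iscsiadm -m node -T <targetname> -p <portal> -o update \
#       -n node.session.timeo.replacement_timeout -v 15
--------------------------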
Additional references:
[1]
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/iscsi-replacements_timeout.html
[2]
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/iscsi-modifying-link-loss-behavior-dmmultipath.html
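For what it's worth, the timeout currently in effect on each session
can also be read back from sysfs, which makes it easy to confirm the
change took:
--------------------------
# cat /sys/class/iscsi_session/session*/recovery_tmo
--------------------------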
Hope this helps,
--
Mauricio Faria de Oliveira
IBM Linux Technology Center