Linux LVM users
* [linux-lvm] SAN Storage/mirror/lvm
@ 2004-07-23  8:52 Thomas Meller
  2004-07-23 11:04 ` Franc Carter
  0 siblings, 1 reply; 4+ messages in thread
From: Thomas Meller @ 2004-07-23  8:52 UTC (permalink / raw)
  To: linux-lvm

Maybe everything is all right with what I am doing - maybe not.

I have heard rumours that my setup may cause trouble.
Does anybody know anything about it?

I am trying to set up a cluster on a SAN with shared disks, mirrored across 2 storage controllers.

Using QLogic HBAs, I have set
ql2xfailover=1
to use the multipathing feature.
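
As a sketch, on a 2.4-era system this is usually passed via the module
configuration (the module name qla2300 is an assumption here - substitute
qla2200/qla2xxx for your HBA):

    # /etc/modules.conf sketch - module name is an assumption, check lsmod
    options qla2300 ql2xfailover=1

Afterwards the initrd typically has to be rebuilt (e.g. with mkinitrd on
Red Hat-style systems) so the option takes effect at boot.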

We have 2 separate fabrics to connect to the storage and 2 HBAs in each machine.

Disk layout:

A           B   (controllers)
sda        sdf
sdb        sdg
sdc mirror sdh -+
sdd mirror sdi  |
sde mirror sdj -+
                |
                +-- volume group

Also, we do not have much write throughput to the disks.
I use reiserfs because it can be resized without remounting.
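
As a sketch, an online grow then looks roughly like this (the volume name
and size are made-up examples):

    # grow the LV, then grow reiserfs on the still-mounted filesystem
    lvextend -L +1G /dev/vg00/data
    resize_reiserfs /dev/vg00/data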

Austin Gonyou has written something about write cache mirroring - what is that?
Necessary?
Howto?
I use LVM1.

Am I in danger?

Thanks

Thomas

-- 
Thomas Meller
mailto: thomas.meller@t-systems.ch
mailto: thomas.meller@gmx.net
----
...Our continuing mission: To seek out knowledge of C, to explore strange
UNIX commands, and to boldly code where no man page has ever been seen


* Re: [linux-lvm] SAN Storage/mirror/lvm
  2004-07-23  8:52 Thomas Meller
@ 2004-07-23 11:04 ` Franc Carter
  0 siblings, 0 replies; 4+ messages in thread
From: Franc Carter @ 2004-07-23 11:04 UTC (permalink / raw)
  To: thomas.meller, LVM general discussion and development


I set up a similar system on an HDS-9570, but with only one
card in each machine - 'assuming' that the failover in the
driver would cope if a controller on the HDS failed (as
opposed to a QLogic failure).

This worked fine if I disconnected a path in the fabric, but
when we took a controller offline the failover failed.

Since then I have used md multipath, which has tested fine.
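
For the record, that setup is roughly the following (device names are
examples; one path is active, the other stands by for failover):

    # build an md multipath device over both paths to the same LUN
    mdadm --create /dev/md0 --level=multipath --raid-devices=2 \
        /dev/sde /dev/sdg
    # then put LVM on /dev/md0 instead of on the raw sd* paths
    pvcreate /dev/md0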

On Fri, Jul 23, 2004 at 10:52:45AM +0200, Thomas Meller wrote:
> Maybe everything is all right with what I am doing - maybe not.
> 
> I have heard rumours that my setup may cause trouble.
> Does anybody know anything about it?
> 
> I am trying to set up a cluster on a SAN with shared disks, mirrored across 2 storage controllers.
> 
> Using QLogic HBAs, I have set
> ql2xfailover=1
> to use the multipathing feature.
> 
> We have 2 separate fabrics to connect to the storage and 2 HBAs in each machine.
> 
> Disk layout:
> 
> A           B   (controllers)
> sda        sdf
> sdb        sdg
> sdc mirror sdh -+
> sdd mirror sdi  |
> sde mirror sdj -+
>                 |
>                 +-- volume group
> 
> Also, we do not have much write throughput to the disks.
> I use reiserfs because it can be resized without remounting.
> 
> Austin Gonyou has written something about write cache mirroring - what is that?
> Necessary?
> Howto?
> I use LVM1
> 
> Am I in danger?
> 
> Thanks
> 
> Thomas
> 
> -- 
> Thomas Meller
> mailto: thomas.meller@t-systems.ch
> mailto: thomas.meller@gmx.net
> ----
> ...Our continuing mission: To seek out knowledge of C, to explore strange
> UNIX commands, and to boldly code where no man page has ever been seen

-- 
Franc Carter     Ph:61-2-9236-9127      Fax: 61-2-9321-5988
Systems Manager, SIRCA Ltd              http://www.sirca.org.au/



* Re: [linux-lvm] SAN Storage/mirror/lvm
@ 2004-08-09 17:19 Cott Lang
  2004-08-12 14:08 ` Thomas Meller
  0 siblings, 1 reply; 4+ messages in thread
From: Cott Lang @ 2004-08-09 17:19 UTC (permalink / raw)
  To: linux-lvm

>I set up a similar system on an HDS-9570, but with only one
>card in each machine - 'assuming' that the failover in the
>driver would cope if a controller on the HDS failed (as
>opposed to a QLogic failure).
>
>This worked fine if I disconnected a path in the fabric, but
>when we took a controller offline the failover failed.
>
>Since then I have used md multipath, which has tested fine.

I have a similar issue, two storage processors, two HBAs, and
redundant paths between them.

On the host, I see 4 SCSI devices, two valid and two invalid - I'm
guessing this is because only one storage processor exports a
particular LUN, so half the paths are invalid until a storage
processor fails over.

sdd - SP A Path 1
sde - SP B Path 1
sdf - SP A Path 2
sdg - SP B Path 2

I have a LUN currently on SP B, so I can use md multipath configured 
with /dev/sde and /dev/sdg, and that seems to work.

However, two things:

1) How do I set this up to handle a failover to SP A?
2) Multipath only seems to work in failover mode; is there a way to
load-balance across the two paths?

Anyone have any hints?  :)
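
At least the single-path failover can be exercised by hand, along these
lines (device names as in the list above; a sketch, not a full answer):

    # inspect the multipath set, then simulate losing the SP B path
    cat /proc/mdstat
    mdadm --manage /dev/md0 --fail /dev/sde
    mdadm --manage /dev/md0 --remove /dev/sde
    # re-add the path once it is back
    mdadm --manage /dev/md0 --add /dev/sde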


* Re: [linux-lvm] SAN Storage/mirror/lvm
  2004-08-09 17:19 [linux-lvm] SAN Storage/mirror/lvm Cott Lang
@ 2004-08-12 14:08 ` Thomas Meller
  0 siblings, 0 replies; 4+ messages in thread
From: Thomas Meller @ 2004-08-12 14:08 UTC (permalink / raw)
  To: LVM general discussion and development

Cott Lang wrote:

> On the host, I see 4 SCSI devices, two valid and two invalid - I'm
> guessing this is because only one storage processor exports a
> particular LUN, so half the paths are invalid until a storage
> processor fails over.
> 
> sdd - SP A Path 1
> sde - SP B Path 1
> sdf - SP A Path 2
> sdg - SP B Path 2
> 
> I have a LUN currently on SP B, so I can use md multipath configured
> with /dev/sde and /dev/sdg, and that seems to work.
> 
> However, two things:
> 
> 1) How do I set this up to handle a failover to SP A?
> 2) Multipath only seems to work in failover mode; is there a way to
> load-balance across the two paths?
> 
> Anyone have any hints?  :)

Hello Cott,

your questions and the phenomena you see are likely to be specific to your hardware. I tried
Emulex and QLogic HBAs and found very different behaviour.

Multipathing is handled differently by storage systems than by dumb disks. Load balancing
over several physical paths is therefore a complicated task. Most vendors do not really
support it, and some systems suffer when a host switches paths. Unless you use
supported (read: expensive) software, you should keep your hands off.

Even failover has a 'mechanical' quality: you can hear the hinges squeak when it
happens. I have seen my clusters go down after a storage controller's HBA was taken out.

A failover that switches the physical path, i.e. switches over to the other HBA, takes at
least 45 seconds on a (tuned) QLogic. Meanwhile the whole machine freezes; the driver
seems to disable interrupts or something.

Have fun finding your solution.

Thomas

