From: Michael Clark <michael@metaparadigm.com>
To: Steven Dake <sdake@mvista.com>
Cc: Brian Jackson <brian-kernel-list@mdrx.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: md on shared storage
Date: Wed, 13 Nov 2002 11:00:22 +0800 [thread overview]
Message-ID: <3DD1C046.3010603@metaparadigm.com> (raw)
In-Reply-To: <3DD1A899.8080800@mvista.com>
On 11/13/02 09:19, Steven Dake wrote:
> Brian,
>
> The RAID driver does indeed work with shared storage, if you don't have
> RAID autostart set as the partition type. If you do, each host will try
> to rebuild the RAID array resulting in really bad magic.
>
> I posted patches to solve this problem long ago to this list and
> linux-raid, but Neil Brown (md maintainer) rejected them saying that
> access to a raid volume should be controlled by user space, not by the
> kernel. Everyone is entitled to their opinions I guess. :)
>
> The patch worked by locking RAID volumes to either a FibreChannel host
> WWN (qlogic only) or scsi host id. This ensured that if a raid volume
> was started, it could ONLY be started on the host that created it. This
> worked for the autostart path as well as the start path via IOCTL.
>
> I also modified mdadm so that surviving nodes can take over RAID
> arrays from failed nodes.
>
> I'm extending this type of support into LVM volume groups as we speak.
> If you would like to see the patch when I'm done mail me and I'll send
> it out. This only applies to 2.4.19.
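The autostart pitfall described above can also be avoided purely from user space. As a minimal sketch (the device paths and UUID below are hypothetical placeholders, not taken from this thread): keep the partition type as 0x83 (plain Linux) rather than 0xfd (Linux raid autodetect), so no kernel ever autostarts the array at boot, and let exactly one node assemble it via its mdadm configuration:

```
# /etc/mdadm.conf on the single node allowed to run the array
# (hypothetical devices and UUID - substitute your own)
DEVICE /dev/sda1 /dev/sdb1
ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```

A standby node simply omits (or comments out) the ARRAY line; on takeover it assembles the array explicitly with `mdadm --assemble /dev/md0`, which is the user-space-controlled access model the md maintainer preferred.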
I'm interested in finding out what magic is required to get a stable
setup with the qlogic drivers and LVM. I have tested many kernel
combinations - vendor kernels, stock, -aa - and a variety of different
qlogic drivers, including the one with the alleged stack hog fixes, and
they all oops when using LVM (it can take up to 10 days of production
load to trigger). I removed LVM 45 days ago and now have 45 days of
uptime on these boxes.

I'm currently building a test setup to try to exercise this problem,
as all my other boxes with qlogic cards are in production and can't be
played with. I really miss having volume management, and a SAN setup
is where you need it the most.
~mc
Thread overview: 14+ messages
2002-11-13 0:25 md on shared storage Brian Jackson
2002-11-13 1:19 ` Steven Dake
2002-11-13 3:00 ` Michael Clark [this message]
2002-11-13 3:15 ` Brian Jackson
2002-11-13 13:22 ` Michael Clark
2002-11-13 2:51 ` Michael Clark
2002-11-13 11:46 ` Lars Marowsky-Bree
2002-11-13 17:17 ` Steven Dake
2002-11-13 17:25 ` Joel Becker
2002-11-13 17:56 ` Steven Dake
2002-11-13 17:24 ` Joel Becker
2002-11-13 15:35 ` Eric Weigle
2002-11-13 18:33 ` Brian Jackson
-- strict thread matches above, loose matches on Subject: below --
2002-11-13 19:25 Eric Weigle