From: Stan Hoeppner <stan@hardwarefreak.com>
To: Anton Ekermans <antone@true.co.za>, linux-raid@vger.kernel.org
Subject: Re: md with shared disks
Date: Mon, 10 Nov 2014 16:14:13 -0600
Message-ID: <546138B5.7020101@hardwarefreak.com>
In-Reply-To: <545F2630.8090307@true.co.za>

On 11/09/2014 02:30 AM, Anton Ekermans wrote:
> Good day raiders,
> I have a question on md that I cannot find an up-to-date answer to.
> We use a SuperMicro server with 16 shared disks on a shared backplane
> between two motherboards, running up-to-date CentOS 7.
> If I create an array on one node, the other node can detect it. I put
> GFS2 on top of the array so both systems can share the filesystem, but I
> want to know if md raid is safe to use this way with possibly 2
> active/active nodes changing the metadata at the same time. I've
> disabled the raid-check cron job on one node so they don't both resync
> the drives weekly, but I suspect there's a lot more to it than that.
> 
> If it's not possible, then alternatively some advice on a strategy for
> a large active/active shared disk/filesystem would also be welcome.

It's not possible to do what you describe, as md is not cluster aware.
It will break, badly.  What most people do in such cases is create two
md arrays, one controlled by each host, mirror them with DRBD, and then
put OCFS2/GFS2 atop DRBD.  You lose half your capacity doing this, but
it's the only way to do it and keep all disks active.  Of course you
lose half your bandwidth as well.  This is a high availability solution,
not a high performance one.
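
Roughly, the stack looks like this.  Untested sketch -- the device
names, hostnames, addresses and cluster name below are placeholders
you'd adjust for your enclosure and cluster setup:

  # Node A builds an array from its 8 disks, node B from the other 8
  nodeA# mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[a-h]
  nodeB# mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[i-p]

  # /etc/drbd.d/r0.res, identical on both nodes (DRBD 8.4 syntax)
  resource r0 {
      device    /dev/drbd0;
      disk      /dev/md0;
      meta-disk internal;
      net {
          protocol C;                # synchronous replication
          allow-two-primaries yes;   # required for active/active
      }
      on nodeA { address 10.0.0.1:7789; }
      on nodeB { address 10.0.0.2:7789; }
  }

  # Bring up DRBD on both nodes, force the initial sync from one side,
  # then make a GFS2 filesystem with one journal per node
  drbdadm create-md r0 && drbdadm up r0
  nodeA# drbdadm primary --force r0
  mkfs.gfs2 -p lock_dlm -t mycluster:shared0 -j 2 /dev/drbd0

Dual-primary DRBD also needs proper fencing in your Pacemaker/corosync
configuration, otherwise a split brain will eat the filesystem.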

You bought this hardware to do something.  And that something wasn't
simply making two hosts in one box use all the disks in the box.  What
is the workload you plan to run on this hardware?  The workload dictates
the needed hardware architecture, not the other way around.  If you want
high availability, this hardware will work using the stack architecture
above, and work well.  If you need high performance shared filesystem
access between both nodes, you need an external SAS/FC RAID array and a
cluster FS.  In either case you're using a cluster FS, which means high
file throughput but low metadata throughput.
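
Whichever way you go, it's worth mounting the cluster FS so that plain
reads don't generate needless metadata and lock traffic between the
nodes.  For GFS2 that usually just means noatime/nodiratime -- the
fstab line below is only an example, with placeholder device and mount
point:

  # /etc/fstab -- example only; device and mount point are placeholders
  /dev/drbd0  /srv/shared  gfs2  noatime,nodiratime  0 0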

If it's high performance you need, one option is to submit patches to
make md cluster aware.  Another is the LSI clustering RAID controller
kit for internal drives.  I don't know anything about it other than that
it is available and apparently works with RHEL and SUSE.  It seems
suitable for what you describe as your need.

http://www.lsi.com/products/shared-das/pages/syncro-cs-9271-8i.aspx#tab/tab2


Cheers,
Stan

