linux-raid.vger.kernel.org archive mirror
From: Anton Ekermans <antone@true.co.za>
To: Stan Hoeppner <stan@hardwarefreak.com>, linux-raid@vger.kernel.org
Subject: Re: md with shared disks
Date: Thu, 13 Nov 2014 15:14:16 +0200	[thread overview]
Message-ID: <5464AEA8.3010106@true.co.za> (raw)
In-Reply-To: <546138B5.7020101@hardwarefreak.com>

Thank you very much for your clear response.
The purpose of this hardware is primarily to host ample VM storage for 
the two nodes themselves and three other i7 PCs/servers.
We hoped to achieve HA as active/active, with both nodes sharing the 
same disks and the non-cluster (i7) servers having multipath to these 
two nodes. This is advertised as HA active/active in storage software 
such as Nexenta using RSF-1. However, upon closer inspection, their 
active/active means each node serves some of the data and the other can 
take over. So for me it is, in essence, "active/passive + passive/active" 
and not truly "active/active". We will try to configure it this way to get 
quasi active/active for the best performance with a kind of high 
availability. It seems the shared disks are not the problem; combining 
them in a cluster is.
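To make the multipath part concrete, here is a rough client-side sketch of 
what I have in mind. iSCSI as the transport and all the addresses and IQNs 
below are only assumptions on my part; we have not settled how the storage 
will actually be exported to the i7 machines.

  # Client-side sketch only: iSCSI as transport, and every address/IQN
  # below is a placeholder -- the export method is not decided yet.

  # Log the client in to the same target through both storage nodes:
  iscsiadm -m discovery -t sendtargets -p 10.0.0.1
  iscsiadm -m discovery -t sendtargets -p 10.0.0.2
  iscsiadm -m node -T iqn.2014-11.za.co.true:vmstore -p 10.0.0.1 --login
  iscsiadm -m node -T iqn.2014-11.za.co.true:vmstore -p 10.0.0.2 --login

  # dm-multipath then merges the two resulting block devices into one map,
  # so the client keeps running if either node goes away:
  multipath -ll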

Thank you again

Best regards

Anton Ekermans

> It's not possible to do what you mention as md is not cluster aware.  It
> will break, badly.  What most people do in such cases is create two md
> arrays, one controlled by each host, and mirror them with DRBD, then put
> OCFS/GFS atop DRBD.  You lose half your capacity doing this, but it's
> the only way to do it and have all disks active.  Of course you lose
> half your bandwidth as well.  This is a high availability solution, not
> high performance.
>
> You bought this hardware to do something.  And that something wasn't
> simply making two hosts in one box use all the disks in the box.  What
> is the workload you plan to run on this hardware?  The workload dictates
> the needed hardware architecture, not the other way around.  If you want
> high availability this hardware will work using the stack architecture
> above, and work well.  If you need high performance shared filesystem
> access between both nodes you need an external SAS/FC RAID array and a
> cluster FS.  In either case you're using a cluster FS which means high
> file throughput but low metadata throughput.
>
> If it's high performance you need, an option is to submit patches to
> make md cluster aware.  Another is the LSI clustering RAID controller
> kit for internal drives.  Don't know anything about it other than it is
> available and apparently works with RHEL and SUSE.  Seems suitable for
> what you express as your need.
>
> http://www.lsi.com/products/shared-das/pages/syncro-cs-9271-8i.aspx#tab/tab2
>
>
> Cheers,
> Stan

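For reference, a rough sketch of the stack described above (one md array 
per node, mirrored with DRBD, cluster filesystem on top). All device names, 
hostnames and addresses below are placeholders, and the cluster-stack 
(o2cb/pacemaker) and fencing configuration that a dual-primary cluster FS 
needs is left out:

  # On each node, build a local md array from that node's own disks
  # (RAID level and member devices are placeholders):
  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[abcd]

  # /etc/drbd.d/r0.res on both nodes -- mirror the two arrays over the network:
  #   resource r0 {
  #     device    /dev/drbd0;
  #     disk      /dev/md0;
  #     meta-disk internal;
  #     net { allow-two-primaries yes; }   # needed for a dual-primary cluster FS
  #     on nodeA { address 10.0.0.1:7789; }
  #     on nodeB { address 10.0.0.2:7789; }
  #   }

  drbdadm create-md r0 && drbdadm up r0   # run on both nodes
  drbdadm primary r0                      # on both nodes once the initial sync is done

  # Cluster filesystem on the replicated device, mounted on both nodes:
  mkfs.ocfs2 -L vmstore /dev/drbd0
  mount -t ocfs2 /dev/drbd0 /srv/vmstore

As noted above, each node's array ends up holding a full copy of the data, 
so capacity and write bandwidth are halved in exchange for the availability.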


Thread overview: 7+ messages
2014-11-09  8:30 md with shared disks Anton Ekermans
2014-11-10 16:40 ` Ethan Wilson
2014-11-10 22:14 ` Stan Hoeppner
2014-11-13 13:14   ` Anton Ekermans [this message]
2014-11-13 20:56     ` Stan Hoeppner
2014-11-13 22:53       ` Ethan Wilson
2014-11-14  0:07         ` Stan Hoeppner
