From: Miles Fidelman <mfidelman@meetinghouse.net>
To: "linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>
Subject: possibly silly question (raid failover)
Date: Mon, 31 Oct 2011 20:38:16 -0400
Message-ID: <4EAF3F78.5060900@meetinghouse.net>

Hi Folks,

I've been exploring various ways to build a "poor man's high 
availability cluster."  Currently I'm running two nodes, with RAID on 
each box, DRBD across the boxes, and Xen virtual machines on top of 
that.
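
For the curious, the current two-node setup is essentially DRBD layered 
over md, something along these lines (hostnames, addresses, and device 
names here are made up for illustration, not my actual config):

    resource r0 {
        protocol C;
        on node1 {
            device    /dev/drbd0;
            disk      /dev/md0;           # md RAID over the local drives
            address   192.168.1.1:7788;
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd0;
            disk      /dev/md0;
            address   192.168.1.2:7788;
            meta-disk internal;
        }
    }

The Xen domUs then sit on top of /dev/drbd0.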

I now have two brand new servers - for a total of four nodes - each 
with four large drives and four gigE ports.

Between the configuration of the systems and rack-space limitations, 
I'm trying to use each server for both storage and processing, and I've 
been looking at various options for building a cluster file system 
across all 16 drives - one that supports VM migration/failover across 
all four nodes, and that's resistant to single-drive failures, to 
losing an entire server (and its 4 drives), and maybe even to losing 
two servers (8 drives).

The approach that looks most interesting is Sheepdog - but it's both 
tied to KVM rather than Xen, and a bit immature.

But it led me to wonder if something like this might make sense:
- export each drive using AoE
- run md RAID 10 across all 16 drives on one node
- export the resulting md device using AoE
- if the node running the md device fails, use pacemaker/crm to 
auto-start md on another node, re-assembling and re-exporting the array
- resulting in a 16-drive RAID 10 array that's accessible from all 
nodes (rough sketch below)
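
Concretely, something like the sketch below is what I have in mind. 
(The interface names, shelf/slot numbering, and the choice of the 
ocf:heartbeat:Raid1 resource agent are guesses on my part, not a 
tested configuration.)

    # On each storage node, export its four drives with vblade
    # (shelf number = node number, slot number = drive number):
    vbladed 0 0 eth1 /dev/sda
    vbladed 0 1 eth1 /dev/sdb
    vbladed 0 2 eth1 /dev/sdc
    vbladed 0 3 eth1 /dev/sdd

    # On whichever node assembles the array:
    modprobe aoe
    mdadm --create /dev/md0 --level=10 --raid-devices=16 \
        /dev/etherd/e{0..3}.{0..3}

    # Re-export the assembled array so every node can reach it:
    vbladed 4 0 eth1 /dev/md0

    # Let pacemaker move the array if the assembling node dies,
    # e.g. with the ocf:heartbeat:Raid1 agent:
    crm configure primitive p_md0 ocf:heartbeat:Raid1 \
        params raidconf="/etc/mdadm/mdadm.conf" raiddev="/dev/md0" \
        op monitor interval=30s

The failover piece would also need to re-run the vbladed export for 
/dev/md0 on the new node (presumably as a second pacemaker resource 
grouped with p_md0).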

Or is this just silly and/or wrongheaded?

Miles Fidelman

-- 
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra




Thread overview: 27+ messages
2011-11-01  0:38 Miles Fidelman [this message]
2011-11-01  9:14 ` possibly silly question (raid failover) David Brown
2011-11-01 13:05   ` Miles Fidelman
2011-11-01 13:37     ` John Robinson
2011-11-01 14:36       ` David Brown
2011-11-01 20:13         ` Miles Fidelman
2011-11-01 21:20           ` Robin Hill
2011-11-01 21:32             ` Miles Fidelman
2011-11-01 21:50               ` Robin Hill
2011-11-01 22:35                 ` Miles Fidelman
2011-11-01 22:00               ` David Brown
2011-11-01 22:58                 ` Miles Fidelman
2011-11-02 10:36                   ` David Brown
2011-11-01 22:15           ` keld
2011-11-01 22:25             ` NeilBrown
2011-11-01 22:38               ` Miles Fidelman
2011-11-02  1:40                 ` keld
2011-11-02  1:37               ` keld
2011-11-02  1:48                 ` NeilBrown
2011-11-02  7:02                   ` keld
2011-11-02  9:20                     ` Jonathan Tripathy
2011-11-02 11:27                     ` David Brown
2011-11-01  9:26 ` Johannes Truschnigg
2011-11-01 13:02   ` Miles Fidelman
2011-11-01 13:33     ` John Robinson
2011-11-02  6:41 ` Stan Hoeppner
2011-11-02 13:17   ` Miles Fidelman
