Subject: possibly silly question (raid failover)
From: Miles Fidelman @ 2011-11-01  0:38 UTC (permalink / raw)
  To: linux-raid@vger.kernel.org

Hi Folks,

I've been exploring various ways to build a "poor man's high
availability cluster."  Currently I'm running two nodes: md RAID on
each box, DRBD replicating between the boxes, and Xen virtual machines
running on top of that.
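
For reference, the DRBD piece is nothing fancy - roughly the following
resource definition (reconstructed from memory; the hostnames and
addresses are made up, it's just meant to show the stacking of md
underneath DRBD, with the Xen VMs living on the DRBD device):

    resource r0 {
        protocol C;                  # synchronous replication between the boxes
        on node-a {
            device    /dev/drbd0;    # the device the Xen VMs actually use
            disk      /dev/md0;      # local md RAID array underneath
            address   10.0.0.1:7789;
            meta-disk internal;
        }
        on node-b {
            device    /dev/drbd0;
            disk      /dev/md0;
            address   10.0.0.2:7789;
            meta-disk internal;
        }
    }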

I now have two brand new servers - for a total of four nodes - each
with four large drives and four gigE ports.

Between the configuration of the systems and rack space limitations,
I'm trying to use each server for both storage and processing - and
I've been looking at various options for building a cluster file system
across all 16 drives that supports VM migration/failover across all
four nodes, and that's resistant to single-drive failures, to losing an
entire server (and its 4 drives), and maybe even to losing two servers
(8 drives).

The approach that looks most interesting is Sheepdog - but it's tied to
KVM rather than Xen, and it's still a bit immature.

But it led me to wonder if something like this might make sense:
- export each drive over AoE
- run md RAID 10 across all 16 drives on one node
- export the resulting md device over AoE as well
- if the node running the md device fails, use pacemaker/crm to
re-assemble and re-export the array on another node
- resulting in a 16-drive RAID 10 array that's accessible from all
nodes (rough sketch below)
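
To make that concrete (untested - the device names, shelf/slot numbers,
NIC choices, and pacemaker agent parameters below are just guesses to
illustrate the idea), I'm picturing something like:

    # on server A (AoE shelf 0); servers B, C, D would use shelves 1, 2, 3
    modprobe aoe
    vblade 0 0 eth1 /dev/sda &
    vblade 0 1 eth1 /dev/sdb &
    vblade 0 2 eth1 /dev/sdc &
    vblade 0 3 eth1 /dev/sdd &

    # on whichever node runs the array: assemble RAID 10 from the AoE
    # devices, ordering them so each near-2 mirror pair spans two
    # different servers
    mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=16 \
        /dev/etherd/e0.0 /dev/etherd/e1.0 /dev/etherd/e2.0 /dev/etherd/e3.0 \
        /dev/etherd/e0.1 /dev/etherd/e1.1 /dev/etherd/e2.1 /dev/etherd/e3.1 \
        /dev/etherd/e0.2 /dev/etherd/e1.2 /dev/etherd/e2.2 /dev/etherd/e3.2 \
        /dev/etherd/e0.3 /dev/etherd/e1.3 /dev/etherd/e2.3 /dev/etherd/e3.3

    # re-export the assembled array on a shelf number of its own so
    # every node can reach it
    vblade 9 0 eth0 /dev/md0 &

    # let pacemaker move the assemble-and-export to another node on
    # failure, e.g. with the Raid1 OCF agent (which, despite the name,
    # just drives mdadm and should cope with a RAID 10 array too)
    crm configure primitive p_md0 ocf:heartbeat:Raid1 \
        params raidconf="/etc/mdadm/mdadm.conf" raiddev="/dev/md0" \
        op monitor interval="30s"

The device ordering on the mdadm line is meant to make each near-2
mirror pair span two different servers, so losing a whole box only
degrades mirrors rather than killing them - though losing two boxes
could still take out both halves of a pair.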

Or is this just silly and/or wrongheaded?

Miles Fidelman

-- 
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra



