* possibly silly configuration question
@ 2012-12-27  4:16 Miles Fidelman
  2012-12-27  4:43 ` Adam Goryachev
  2012-12-27 21:11 ` Roy Sigurd Karlsbakk
  0 siblings, 2 replies; 6+ messages in thread
From: Miles Fidelman @ 2012-12-27  4:16 UTC (permalink / raw)
  To: linux-raid@vger.kernel.org

Hi Folks,

I find myself having four servers, each with 4 large disks, that I'm 
trying to assemble into a high-availability cluster.  (Note: I've got 4 
gigE ports on each box, 2 set aside for outside access, 2 for inter-node 
clustering)

Now it's easy enough to RAID disks on each server, and/or mirror disks 
pair-wise with DRBD, but DRBD doesn't work as well with >2 servers.

Now, what I really should do is separate storage nodes from compute nodes 
- but I'm limited by rack space and by the chassis configuration of the 
hardware I've got, so I've been thinking through various configurations 
to make use of the resources at hand.

One option is to put all the drives into one large pool managed by 
gluster - but I expect that would result in some serious performance 
hits (and gluster's replicated/distributed mode is fairly new).

It's late at night and a thought occurred to me that is probably 
wrongheaded (or at least silly) - but maybe I'm too tired to see any 
obvious problems.  So I'd welcome 2nd (and 3rd) opinions.

The basic notion:
- export all 16 drives as network block devices via iSCSI or AoE
- build 4 RAID10 volumes - each volume consisting of one drive from each 
server
- run LVM on top of the RAID volumes
- then use NFS or maybe OCFS2 to make volumes available across nodes
- of course md would be running on only one node (for each array), so if 
a node goes down, use pacemaker to startup md on another node, 
reassemble the array, and remount everything
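
Concretely, the stack above might look something like this per array (AoE 
flavour for brevity; all shelf/slot numbers, device names, volume-group 
names, and sizes below are hypothetical, just to illustrate the layering):

```shell
# On each of the four servers: export one local disk over AoE.
# vblade takes shelf, slot, interface, device; shelf number varies per node.
vblade 0 1 eth2 /dev/sdb &

# On whichever node currently runs md for this array: the four exported
# disks appear as /dev/etherd/e<shelf>.<slot>. Build RAID10 across them,
# one member per physical server.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/etherd/e0.1 /dev/etherd/e1.1 \
      /dev/etherd/e2.1 /dev/etherd/e3.1

# LVM on top of the array, then carve out a volume to share via NFS/OCFS2.
pvcreate /dev/md0
vgcreate vg_array0 /dev/md0
lvcreate -L 500G -n lv_data vg_array0

# Failover (pacemaker would script this): on the surviving node,
# reassemble from the same network block devices and reactivate LVM.
mdadm --assemble /dev/md0 \
      /dev/etherd/e0.1 /dev/etherd/e1.1 \
      /dev/etherd/e2.1 /dev/etherd/e3.1
vgchange -ay vg_array0
```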

Does this make sense, or is it totally crazy?

Thanks much,

Miles Fidelman

-- 
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra



Thread overview:
2012-12-27  4:16 possibly silly configuration question Miles Fidelman
2012-12-27  4:43 ` Adam Goryachev
2012-12-27 16:02   ` Miles Fidelman
2012-12-27 16:21     ` Adam Goryachev
2012-12-27 16:44       ` Miles Fidelman
2012-12-27 21:11 ` Roy Sigurd Karlsbakk