Subject: Extendible RAID10
@ 2011-03-30 11:57 David Brown
From: David Brown
Date: 2011-03-30 11:57 UTC
To: linux-raid

RAID10 with the far layout is a very nice RAID level - it gives you
read speed like RAID0, write speed no slower than other RAID1 mirrors,
and of course you have the mirror redundancy.

But it is not extendible - once you have made your layout, you are stuck 
with it.  There is no way (at the moment) to migrate over to larger drives.

As far as I can see, you can grow RAID1 sets onto larger disks, but
you can't grow RAID0 sets.  The mdadm manual pages also seem
inconsistent about whether or not you can grow the size of a RAID4
array.  If it is possible to grow a RAID4, then it should be possible
to use a degraded RAID4 (with a missing parity disk) as a growable
RAID0.
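
For example, I believe mdadm will let you create a RAID4 array with
the parity slot left out entirely (device names here are just
illustrative):

    mdadm --create /dev/md0 --level=4 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 missing

That degraded array stripes data over the three real disks just like
a RAID0 would, but - if growing RAID4 really works - it could later
be reshaped or moved to bigger disks with mdadm --grow.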


I'm planning a new server in the near future, and I think I'll get a
reasonable balance of price, performance, capacity and redundancy using
a 3-drive RAID10,f2 setup (with a small boot partition on each drive,
all three combined into a RAID1, so that grub will work properly).  On
the main md device I then have an LVM physical volume, with logical
volumes for different virtual machines or other data areas.  I've used
such an arrangement before, and been happy with it.
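
For reference, I imagine that setup would look roughly like this
(device names and sizes are illustrative, and I'd use 0.90 metadata
on the boot array so that grub can read it):

    # small boot partition mirrored across all three drives
    mdadm --create /dev/md0 --level=1 --raid-devices=3 \
        --metadata=0.90 /dev/sda1 /dev/sdb1 /dev/sdc1

    # main array: RAID10, far layout with 2 copies, on 3 drives
    mdadm --create /dev/md1 --level=10 --layout=f2 \
        --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2

    # LVM on top of the main array
    pvcreate /dev/md1
    vgcreate vg0 /dev/md1
    lvcreate -L 20G -n vm1 vg0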

But as an alternative solution that is expandable, I am considering 
using LVM to do the striping.  Ignoring the boot partition for 
simplicity, I would partition each disk into two equal parts - sda1, 
sda2, sdb1, sdb2, sdc1 and sdc2.  Then I would form a set of RAID1 
devices - md1 = sda1 + sdb2, md2 = sdb1 + sdc2, md3 = sdc1 + sda2.  I
would make an LVM physical volume on each of these md devices, and put
all those physical volumes into a single volume group.  Whenever I make
a new logical volume, I would specify that it should have three stripes.
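
In commands, that would be roughly the following (untested, and the
device names are again just illustrative):

    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb2
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc2
    mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sda2

    pvcreate /dev/md1 /dev/md2 /dev/md3
    vgcreate vg0 /dev/md1 /dev/md2 /dev/md3

    # stripe each LV over all three mirrors; -I 64 is just an
    # example stripe size in KB
    lvcreate -i 3 -I 64 -L 20G -n vm1 vg0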

If I then want to replace the disks with larger devices, I can add a
new disk, partition it into two larger partitions, add those partitions
to two of the existing RAID1 sets, let them sync, and then fail and
remove the now-redundant drive's partitions.  After three rounds, the
RAID1 sets can be grown to match the new partition sizes, and then the
LVM physical volumes can be grown to match the new RAID sizes.
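
One round of that replacement might look like this, with sdd as the
(hypothetical) new disk standing in for sda:

    # sdd1 replaces sda1 in md1, sdd2 replaces sda2 in md3
    mdadm /dev/md1 --add /dev/sdd1
    mdadm /dev/md3 --add /dev/sdd2
    # wait for the resync to finish (watch /proc/mdstat), then:
    mdadm /dev/md1 --fail /dev/sda1 --remove /dev/sda1
    mdadm /dev/md3 --fail /dev/sda2 --remove /dev/sda2

    # once all three disks have been swapped:
    mdadm --grow /dev/md1 --size=max
    pvresize /dev/md1
    # ...and the same for md2 and md3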


Any opinions?  Have I missed anything here, perhaps some issues that
will make this arrangement slower or less efficient than a normal
RAID10,f2 with LVM on top?


