From: "Keld Jørn Simonsen" <keld@keldix.com>
To: David Brown <david@westcontrol.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: Extendible RAID10
Date: Thu, 31 Mar 2011 19:42:30 +0200
Message-ID: <20110331174230.GA10981@www2.open-std.org>
In-Reply-To: <imv5sm$tuj$1@dough.gmane.org>
On Wed, Mar 30, 2011 at 01:57:36PM +0200, David Brown wrote:
> RAID10 with far layout is a very nice raid level - it gives you read
> speed like RAID0, write speed no slower than other RAID1 mirrors, and of
> course you have the mirror redundancy.
>
> But it is not extendible - once you have made your layout, you are stuck
> with it. There is no way (at the moment) to migrate over to larger drives.
>
> As far as I can see, you can grow RAID1 sets onto larger disks, but you
> can't grow RAID0 sets. There also seems to be some inconsistency in the
> mdadm manual pages as to whether or not you can grow the size of a
> RAID4 array. If it is possible to grow a RAID4, then it should be
> possible to use a degraded RAID4 (with a missing parity disk) as a RAID0.
>
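For illustration, a rough sketch of both ideas with mdadm (untested; the
device names are just placeholders):

  # a "RAID0-like" degraded RAID4, created with the parity disk missing
  mdadm --create /dev/md0 --level=4 --raid-devices=3 \
      /dev/sda1 /dev/sdb1 missing

  # growing a RAID1 after both members have been replaced by larger disks
  mdadm --grow /dev/md1 --size=max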
>
> I'm planning a new server in the near future, and I think I'll get a
> reasonable balance of price, performance, capacity and redundancy using
> a 3-drive RAID10,f2 setup (with a small boot partition on each drive,
> all three as a RAID1, so that grub will work properly). On the main md
> device I then have an LVM physical volume, with logical partitions for
> different virtual machines or other data areas. I've used such an
> arrangement before, and been happy with it.
>
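Roughly, as I understand that layout (a sketch only; partition, volume
group and volume names are assumptions):

  # small boot partitions, mirrored across all three drives for grub
  mdadm --create /dev/md0 --level=1 --raid-devices=3 \
      /dev/sda1 /dev/sdb1 /dev/sdc1

  # main array: three-drive RAID10 with the far-2 layout
  mdadm --create /dev/md1 --level=10 --layout=f2 --raid-devices=3 \
      /dev/sda2 /dev/sdb2 /dev/sdc2

  # LVM on top, one logical volume per virtual machine or data area
  pvcreate /dev/md1
  vgcreate vg0 /dev/md1
  lvcreate -L 20G -n vm1 vg0
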
> But as an alternative solution that is expandable, I am considering
> using LVM to do the striping. Ignoring the boot partition for
> simplicity, I would partition each disk into two equal parts - sda1,
> sda2, sdb1, sdb2, sdc1 and sdc2. Then I would form a set of RAID1
> devices - md1 = sda1 + sdb2, md2 = sdb1 + sdc2, md3 = sdc1 + sda2. I
> would make an lvm physical volume on each of these md devices, and put
> all those physical volumes into a single volume group. Whenever I make
> a new logical volume, I specify that it should have three stripes.
>
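If I read that right, it would look something like this (illustrative
only; the sizes and names are made up):

  # three RAID1 pairs, each mirror spanning two different disks
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb2
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc2
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sda2

  # one physical volume per md device, all in a single volume group
  pvcreate /dev/md1 /dev/md2 /dev/md3
  vgcreate vg0 /dev/md1 /dev/md2 /dev/md3

  # each logical volume striped over all three physical volumes
  lvcreate -L 50G -i 3 -n data vg0
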
> If I then want to replace the disks with larger devices, it is possible
> to add a new disk, partition it into two larger partitions, add these
> partitions to two of the existing RAID1 sets, sync, then fail and remove
> the now-redundant drive. After three rounds, the RAID1 sets can then be
> grown to match the new partition sizes. Then the lvm physical volumes
> can be grown to match the new raid sizes.
>
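One round of that replacement could look roughly like this (assuming the
new disk appears as /dev/sdd and /dev/sda is being retired; all names are
hypothetical):

  # add the new partition and temporarily run a three-way mirror, so the
  # data is fully synced before the old disk is failed out
  mdadm /dev/md1 --add /dev/sdd1
  mdadm --grow /dev/md1 --raid-devices=3
  # ... wait for the resync to finish ...
  mdadm /dev/md1 --fail /dev/sda1 --remove /dev/sda1
  mdadm --grow /dev/md1 --raid-devices=2
  # and similarly for /dev/md3, with /dev/sdd2 replacing /dev/sda2

  # after all three rounds, grow each RAID1 into the larger partitions,
  # then grow the LVM physical volumes to match
  mdadm --grow /dev/md1 --size=max
  pvresize /dev/md1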
>
> Any opinions? Have I missed anything here, perhaps some issues that
> will make this arrangement slower or less efficient than a normal
> RAID10,f2 with lvm on top?

I am not sure RAID10,f2 works well with LVM; I believe I have seen
reports of problems with that combination.

It should be possible to extend RAID10 arrays with more disks, and I do
not think it would be very difficult. But I do not think Neil has it on
his wish list.
best regards
keld

Thread overview: 3+ messages
2011-03-30 11:57 Extendible RAID10 David Brown
2011-03-31 17:42 ` Keld Jørn Simonsen [this message]
2011-03-31 19:10 ` David Brown