linux-raid.vger.kernel.org archive mirror
From: NeilBrown <neilb@suse.de>
To: martin f krafft <madduck@madduck.net>
Cc: linux-raid mailing list <linux-raid@vger.kernel.org>
Subject: Re: Extending a 4×3Tb RAID10
Date: Wed, 10 Jul 2013 07:24:39 +1000	[thread overview]
Message-ID: <20130710072439.02e4288e@notabene.brown> (raw)
In-Reply-To: <20130709182222.GA3952@fishbowl>


On Tue, 9 Jul 2013 20:22:22 +0200 martin f krafft <madduck@madduck.net> wrote:

> Hello,
> 
> We have a RAID10 across 4 3TB drives (metadata version 1.2,
> 2 offset-copies, bitmaps), with LVM on top. We are running out of
> space, but we don't really want to invest in 4 new 4TB drives right
> now.
> 
> Is it possible to replace only two of the 3TB drives with 4TB drives
> and get an extra terabyte into the array somehow?
> 
> Anything I tried so far on a test system didn't work. I can add the
> new devices to the RAID, but if I try to grow the array to the new
> size, I get:
> 
>   mdadm: component size of /dev/md2 unchanged at X
> 
> Do I have to fail two drives, create a new RAID10 on the replacements,
> add a new LVM PV on it, and pvmove the data over, all the while hoping
> that none of the four disks dies? Even though the two failed drives
> would serve as a backup, it is a one-shot backup, and that seems too risky.
> 
> That said, it isn't even possible to have a RAID10 across 2x2 pairs
> of disks with different sizes, is it? Why not? I'd really rather
> avoid pulling two RAID1s together with LVM, although I guess that is
> essentially the same as RAID10.
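For reference, the sequence that runs into the error above looks roughly like this (device names are illustrative; /dev/md2 is the existing RAID10):

```shell
# Swap in the two 4TB drives one at a time and let the array resync:
mdadm /dev/md2 --fail /dev/sda1 --remove /dev/sda1
mdadm /dev/md2 --add /dev/sde1      # new 4TB drive; wait for resync
# ...repeat for the second 3TB drive...

# Growing the array afterwards fails, because RAID10 limits every
# member's contribution to the size of the smallest device:
mdadm --grow /dev/md2 --size=max
# mdadm: component size of /dev/md2 unchanged at X
```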

Hi Martin.

No, you cannot have any array-with-redundancy that uses different amounts of
space from different devices.  All devices must contribute equally to the
array (this applies to RAID1 and higher levels, not to RAID0 or Linear).

This is because the definition of a "spare" device would become more
complicated.  You would need two different-sized spares, or have to decide
whether it is OK to use a "big" spare to replace a "small" device, or you
might end up with an array that has both a spare and a failed device yet
cannot rebuild because the spare isn't big enough.

It really is much easier to just say "no".

To get the sort of flexibility you want you would need to have two separate
RAID1s which are combined with LVM.
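A sketch of that layout, assuming the 3TB pair is sda/sdb and the 4TB pair is sdc/sdd (device names and the VG name are illustrative):

```shell
# One RAID1 per same-sized pair of drives
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # 3TB pair
mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1  # 4TB pair

# Combine both arrays into one volume group; LVM does not care
# that the two PVs differ in size.
pvcreate /dev/md3 /dev/md4
vgcreate vg_data /dev/md3 /dev/md4   # ~3TB + ~4TB of redundant space
```

Later, a 3TB drive can be replaced by a bigger one within its own RAID1 and that array grown independently, which is the flexibility a single RAID10 cannot offer.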

NeilBrown


Thread overview: 2+ messages
2013-07-09 18:22 Extending a 4×3Tb RAID10 martin f krafft
2013-07-09 21:24 ` NeilBrown [this message]
