From: Alex Lilley <alex@redwax.co.uk>
To: Twigathy <twigathy@gmail.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: Raid 5 --grow to fewer, larger drives
Date: Mon, 24 Nov 2008 18:28:16 +0000 [thread overview]
Message-ID: <492AF240.9020608@redwax.co.uk> (raw)
In-Reply-To: <1f0f1a960811240621j619701aboc32c2da9765cd7a1@mail.gmail.com>
That is an interesting workaround, T, though it has its own issues: quite
apart from the hassle, the data has to be taken off-line for the duration
of the copy over the network.
I would be interested to know why growing to fewer, larger disks should be
any more difficult or risky than simply growing to more disks of the same
size. The usual pattern is that when you need more space you add another
disk of the same size, and lots of disks will eventually bite you on the
bum, so being able to consolidate in the way I set out would be a real
bonus, and nice and straightforward (or maybe not!?)
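To put rough numbers on that consolidation (a quick sketch only; the helper
is illustrative and the GB figures are the ones from the scenario quoted
below):

```python
# RAID5 usable capacity is (disks - 1) * disk_size, since one disk's worth
# of space goes to parity. Sizes are the GB figures from the scenario below.
def raid5_usable_gb(disks: int, disk_gb: int) -> int:
    return (disks - 1) * disk_gb

old = raid5_usable_gb(4, 120)  # 4 x 120GB -> 360GB usable
new = raid5_usable_gb(3, 250)  # 3 x 250GB -> 500GB usable
print(old, new)  # fewer spindles, yet more space
```

So the three-drive array actually ends up larger, on top of the power and
failure-rate benefits.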
Regarding the risk factor during any array resync, I should hope we all
have our precious data backed up anyway!
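For what it's worth, the sequence I would hope for is sketched below. It
assumes a future mdadm that can reduce the number of data disks (it could
not at the time of writing); the device names are hypothetical, and the
commands are only printed here, never executed.

```shell
#!/bin/sh
# Hypothetical reshape sequence for an mdadm new enough to shrink the
# device count; /dev/md0 and /dev/sdd1 are assumed names. Printed only --
# do not run anything like this against a live array without a backup.
cmds="mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1
mdadm --grow /dev/md0 --array-size=234375000
mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-backup
mdadm --grow /dev/md0 --size=max"
# Step 2 shrinks the apparent array size to two data disks' worth
# (240GB expressed in KiB) so that step 3's reshape to 3 devices is legal;
# step 4 then claims the extra space on the 250GB drives.
printf '%s\n' "$cmds"
```

The exact size argument, and whether --array-size accepts a unit suffix,
depend on the mdadm version, so check mdadm(8) before attempting this.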
Thanks for your £0.02 though :-)
Alex
Twigathy wrote:
> In that situation, I think I'd be happier to hook all the disks up (If
> not on the same machine, on two machines on the same [gigE] network,
> make a new raid5 on the new set of disks, rsync stuff to the new array
> and then retire the old array (Swapping in the array you just
> created)... messing around resizing disks for this job just sounds
> like mess and risk to me!
>
> Just my £0.02 ;-)
>
> T
>
> 2008/11/24 Alex Lilley <alex@redwax.co.uk>:
>
>> Scenario: RAID5 with 4 x 120GB drives.
>>
>> Aim: RAID5 with 3 x 250GB drives.
>>
>> 3 of the 120GB disks have been replaced with 3 x 250GB, and the array has
>> been rebuilt/resync'd at its original size. Can we remove the remaining
>> 120GB drive and reshape the array over the remaining 3 drives, using all
>> the new space?
>>
>> I guessed --grow --raid-devices=3 --size-max would work, but it returns
>> "can change at most one of size, raiddisks, bitmap and layout".
>>
>> I then used --fail and --remove on the remaining 120GB drive and tried
>> --grow --raid-devices=3, but received "Cannot reduce number of data disks
>> (yet)".
>>
>> I am therefore slightly stumped!
>>
>> Is this actually possible, or is it something that is planned? As the
>> size of disk drives multiplies and the desire to keep tabs on power usage
>> increases, it is likely that we will want to reduce the number of disks,
>> quite apart from the increased risk of failure introduced by having a
>> greater number of drives.
>>
>> Regards
>>
>> Alex
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>
>>
Thread overview: 3+ messages
2008-11-24 12:56 Raid 5 --grow to fewer, larger drives Alex Lilley
2008-11-24 14:21 ` Twigathy
2008-11-24 18:28 ` Alex Lilley [this message]