From: Paul Davidson <Paul.Davidson@anu.edu.au>
To: Neil Brown <neilb@suse.de>
Cc: Bill Davidsen <davidsen@tmr.com>, linux-raid@vger.kernel.org
Subject: Re: Is shrinking raid5 possible?
Date: Fri, 23 Jun 2006 12:17:19 +1000
Message-ID: <449B4F2F.40707@anu.edu.au>
In-Reply-To: <17563.17224.85968.572754@cse.unsw.edu.au>
Neil Brown wrote:
> In short, reducing a raid5 to a particular size isn't something that
> really makes sense to me. Reducing the amount of each device that is
> used does - though I would much more expect people to want to increase
> that size.
>
> If Paul really has a reason to reduce the array to a particular size
> then fine. I'm mildly curious, but it's his business and I'm happy
> for mdadm to support it, though indirectly. But I strongly suspect
> that most people who want to resize their array will be thinking in
> terms of the amount of each device that is used, so that is how mdadm
> works.
By way of explanation, I was doing exactly what you thought - I had a
single ext3 filesystem on a raid5 device, and wanted to split it into
two filesystems. I'm not using LVM since it appears to affect read
performance quite severely. I guess there may be other ways of doing
it, but shrinking the existing array seemed the most straightforward.
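(In case it's useful to anyone, the shrink itself went something like
this -- /dev/md0, /mnt/data and the sizes are just placeholders -- with
the filesystem shrunk before the array:

    umount /mnt/data                  # ext3 can only be shrunk offline
    e2fsck -f /dev/md0                # resize2fs wants a clean check first
    resize2fs /dev/md0 <new-fs-size>  # make the fs smaller than the new array
    mdadm --grow /dev/md0 --size=<per-device-size-in-KiB>
    mount /dev/md0 /mnt/data

leaving a decent margin so the filesystem definitely fits inside the
shrunken array.)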
Incidentally, it wasn't clear to me what to do after shrinking the
raid5 device. My initial try at taking it offline and repartitioning
all the disks at once didn't work - I think because the superblocks
became 'lost'. I eventually realized I should go through a
fail-remove-repartition-add-recover cycle for each disk in turn with
the array online. It took a long time but worked in the end.
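In case it helps anyone else, each pass of that cycle was essentially
the following, with /dev/sdX and /dev/sdX1 standing in for the disk
and its member partition:

    mdadm /dev/md0 --fail /dev/sdX1    # kick the member out of the array
    mdadm /dev/md0 --remove /dev/sdX1
    fdisk /dev/sdX                     # shrink the md partition, create the new one
    mdadm /dev/md0 --add /dev/sdX1     # re-add it and let it rebuild
    cat /proc/mdstat                   # wait for recovery to complete
                                       #   before moving on to the next disk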
Would repartitioning them all at once have worked if I had chosen to
have the superblocks at the beginning of the partitions (v1.1 or 1.2
superblocks)?
As for Bill's comment about the mdadm interface, what probably would
have helped me the most is if the man page had had "from each drive"
in bold, flashing, and preferably two-foot-tall letters :-).
A more practical suggestion: if the error message had not been
"No space left on device"
but something like
"Maximum space available on each device is xxxxxxxx"
then I would have quickly realized my mistake. As Neil points out,
you have to 'do the math' anyway when partitioning.
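For example, on a hypothetical four-drive raid5,

    mdadm --grow /dev/md0 --size=104857600

asks for 104857600 KiB (100 GiB) from each drive, giving a usable
array of (4 - 1) x 100 GiB = 300 GiB. An error message that quoted
the per-device figure would make that relationship immediately
obvious.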
Cheers,
Paul