From: Neil Brown <neilb@suse.de>
To: "Lars Täuber" <taeuber@bbaw.de>
Cc: Anton Altaparmakov <aia21@cam.ac.uk>, linux-raid@vger.kernel.org
Subject: Re: How to grow RAID1 mirror on top of LVM?
Date: Fri, 2 May 2008 13:14:23 +1000
Message-ID: <18458.34575.416075.798546@notabene.brown>
In-Reply-To: message from Lars Täuber on Monday April 28

On Monday April 28, taeuber@bbaw.de wrote:
> Hello Neil,
>
> Neil Brown <neilb@suse.de> wrote:
> > On Thursday March 13, aia21@cam.ac.uk wrote:
> > >
> > > Is there a better way to do this? I am hoping someone will tell me to
> > > use option blah to utility foo that will do this for me without having
> > > to break the mirror twice and resync each time. (-;
> >
> > Sorry, but no. This mode of operation was never envisaged for md.
> > I would always put the md/raid1 devices below the LVM.
>
> could you briefly explain what in the design prevents growing a raid1 on top of a grown lvm?

By default, the metadata for an md array is stored near the end of
each device.  If you make the device larger, you lose the metadata.
This could be addressed for on-line resizing by having some protocol
whereby the LVM layer tells whoever is using it that it is about to
become larger, so that the metadata could be updated and moved first,
but that is probably more hassle than it is worth.
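
As a quick check (the device name below is purely hypothetical),
"mdadm --examine" reports which metadata version a member device
carries: a "Version :" line of 0.90 means end-of-device metadata,
while 1.1 or 1.2 means start-of-device:

   mdadm --examine /dev/vg0/lv0 | grep Version
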
If you use version 1.1 or 1.2 metadata, the metadata is stored at the
start of the device, so it doesn't get lost.  However the metadata
records the amount of usable space on the device, so when you make
the device bigger you need to update that number, and there is
currently no way to do so for an active array.

You can stop the array and then re-assemble it with

   --update=devicesize

This will update the field in the metadata which records the size of
each device.  You will then be able to grow the array to make use of
all the space.
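
For example, the whole sequence might look like this (a sketch only,
assuming 1.x metadata; /dev/md0 and the vg0 LV names are purely
hypothetical):

   # grow the underlying LVs first
   lvextend -L +50G /dev/vg0/lv0
   lvextend -L +50G /dev/vg0/lv1

   # stop the array and re-assemble so that the recorded device
   # size is updated from the (now larger) devices
   mdadm --stop /dev/md0
   mdadm --assemble /dev/md0 --update=devicesize \
       /dev/vg0/lv0 /dev/vg0/lv1

   # grow the array into the newly available space
   mdadm --grow /dev/md0 --size=max

Note that any filesystem on /dev/md0 still has to be resized
separately afterwards (e.g. with resize2fs for ext2/ext3).
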
It might not be too hard to make it possible to tell md that devices
have grown ... maybe one day :-)
NeilBrown
> We wanted to do the same:
>
>
> multiple RAID1 (AoE targets)
> on top of
> multiple LVMs
> on top of
> gigantic RAID6
>
>
> Thanks
> Lars
> --
> Informationstechnologie
> Berlin-Brandenburgische Akademie der Wissenschaften
> Jägerstrasse 22-23 10117 Berlin
> Tel.: +49 30 20370-352 http://www.bbaw.de
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html