* How to grow RAID1 mirror on top of LVM?
From: Anton Altaparmakov @ 2008-03-13 11:21 UTC
To: linux-raid
Hi,
We have Software RAID1 (MD) mirrors on top of LVM volumes so that we
can grow the LVM volumes if we need one of the mirrors to grow and
then use mdadm --grow to grow the mirror followed by using xfs_growfs
to grow the xfs file system mounted on the mirror. Thus we can extend
the size of the mirrored file system without having to unmount it
which is great. This is what should happen in an ideal world anyway.
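Concretely, the intended online-grow sequence is just the following
(a sketch; the size and mount point are illustrative, device names as
used below):
lvextend -L +100G /dev/vg/volume2    # grow the underlying LV(s)
mdadm --grow /dev/md3 --size max     # grow the mirror into the new space
xfs_growfs /mnt/md3                  # grow xfs while it is mounted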
In reality it does not work. )-: This is perhaps because mdadm does
not realize the underlying devices have grown (using lvextend), so
running:
mdadm --grow /dev/md3 --size max
does nothing.
And if I do:
mdadm /dev/md3 --fail /dev/vg/volume2
mdadm /dev/md3 --remove /dev/vg/volume2
mdadm /dev/md3 --add --write-mostly /dev/vg/volume2
This fails because MD complains that no superblocks are found on the
array, or something like that. )-: Perhaps it has noticed that the
devices are bigger and now cannot find the MD superblock, as it is no
longer at the end of the device, not sure.
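(A quick way to confirm this, I assume, is mdadm --examine, which
looks for the superblock at a fixed offset near the end of the device:
mdadm --examine /dev/vg/volume2
If it reports that it cannot find a superblock after the lvextend,
that would be consistent with the old superblock now sitting in the
middle of the grown volume.)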
The only working way I have found to do what we want is to --fail and
--remove one device, then grow it with lvextend, then add it back to
the mirror. BUT this causes a full resync, because MD does not detect
that the device was part of the mirror before, even though no resync
is actually needed. Again, I assume this is because the MD superblock
is now in the middle of the volume instead of at the end.
I then have to wait for the resync to finish and then I have to remove
the other mirrored device with --fail & --remove, then lvextend that,
and then I can add it back to the mirror, which AGAIN triggers a full
resync as before.
I then have to wait for the resync to finish and then I finally can
grow the mirror using:
mdadm --grow /dev/md3 --size max
And it works this time.
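Spelled out, the whole working sequence looks like this (a sketch;
the size is illustrative, and the second mirror member is handled the
same way):
mdadm /dev/md3 --fail /dev/vg/volume2
mdadm /dev/md3 --remove /dev/vg/volume2
lvextend -L +100G /dev/vg/volume2
mdadm /dev/md3 --add --write-mostly /dev/vg/volume2
# wait for the full resync, then repeat the
# --fail/--remove/lvextend/--add cycle for the other member
mdadm --grow /dev/md3 --size max
xfs_growfs /mnt/md3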
As a small shortcut, I can run the mdadm --grow once the first resync
is complete (see above) and the mirror has been broken for the second
time; the md3 device is then grown with just one member device. Then
I add the extended other mirror member to the array with --add, and
that does a full resync on the already grown mirror.
So growing a RAID mirror takes ages because of the resync times when
you have large arrays!
Is there a better way to do this? I am hoping someone will tell me to
use option blah to utility foo that will do this for me without having
to break the mirror twice and resync each time. (-;
If not, please consider this a feature request for mdadm. (-: It
should have an option to detect that the underlying device has grown
and thus write a new superblock (or move the old one or whatever) at
the end of the newly grown device instead of complaining that it does
not exist. Or something! As it is, it is incredibly time consuming
and inefficient. )-:
Best regards,
Anton
--
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer, http://www.linux-ntfs.org/
* Re: How to grow RAID1 mirror on top of LVM?
From: Neil Brown @ 2008-03-25 5:36 UTC
To: Anton Altaparmakov; +Cc: linux-raid
On Thursday March 13, aia21@cam.ac.uk wrote:
>
> Is there a better way to do this? I am hoping someone will tell me to
> use option blah to utility foo that will do this for me without having
> to break the mirror twice and resync each time. (-;
Sorry, but no. This mode of operation was never envisaged for md.
I would always put the md/raid1 devices below the LVM.
Every time you buy two drives, combine them into a RAID1, and add the
/dev/mdX as a PV for LVM. Then grow your LVM devices whenever you
like.
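Roughly (a sketch; device and VG names are illustrative):
mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
pvcreate /dev/md4
vgextend vg /dev/md4
lvextend -L +200G /dev/vg/somevolume   # then grow the fs on top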
>
> If not, please consider this a feature request for mdadm. (-: It
> should have an option to detect that the underlying device has grown
> and thus write a new superblock (or move the old one or whatever) at
> the end of the newly grown device instead of complaining that it does
> not exist. Or something! As it is, it is incredibly time consuming
> and inefficient. )-:
I'll keep it in mind (Which is to say: I will save this in my 'mdadm'
mailbox, and have a look through that mailbox next time I'm working on
improvements to mdadm).
NeilBrown
* Re: How to grow RAID1 mirror on top of LVM?
From: Anton Altaparmakov @ 2008-03-25 8:00 UTC
To: Neil Brown; +Cc: linux-raid
Hi Neil,
Thanks for your reply!
On 25 Mar 2008, at 05:36, Neil Brown wrote:
> On Thursday March 13, aia21@cam.ac.uk wrote:
>>
>> Is there a better way to do this? I am hoping someone will tell me
>> to
>> use option blah to utility foo that will do this for me without
>> having
>> to break the mirror twice and resync each time. (-;
>
> Sorry, but no. This mode of operation was never envisaged for md.
> I would always put the md/raid1 devices below the LVM.
We do that already. (-: And then we put MD on top again. What we
have is two computers, each with six 500GB disks, set up as MD RAID-10
+ 1 hot spare on the six disks. We then run LVM on the MD RAID-10 and
create two logical volumes on each box. On each box, one of the LVs
is fed into an MD RAID-1 mirror and the other LV is exported via iSCSI
(iscsitarget) to the other box. The iSCSI drive is then imported by
the other box using open-iscsi and the resulting device is fed into
the other half of the MD RAID-1 mirror on that box.
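As a picture, on each box:
six 500GB disks -> MD RAID-10 + 1 hot spare -> LVM
    LV 1 -> local half of this box's MD RAID-1 mirror
    LV 2 -> iscsitarget -> (network) -> open-iscsi on the other box
            -> remote half of the other box's MD RAID-1 mirror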
So yes, I am sure you never envisaged this mode of operation! But it
is a way to give us synchronous data replication across two sites
(there are about 3 miles between the two boxes) with free software.
In combination with my custom scripting and heartbeat2, it allows us
to fail the storage (the MD RAID-1 devices are xfs-formatted and NFS
exported) over from one site to the other at the flip of a switch, or
in case of one machine blowing up, etc.
> Every time you buy two drives, combine them into a RAID1, and add the
> /dev/mdX as a PV for LVM. Then grow your LVM devices whenever you
> like.
>
>> If not, please consider this a feature request for mdadm. (-: It
>> should have an option to detect that the underlying device has grown
>> and thus write a new superblock (or move the old one or whatever) at
>> the end of the newly grown device instead of complaining that it does
>> not exist. Or something! As it is, it is incredibly time consuming
>> and inefficient. )-:
>
> I'll keep it in mind (Which is to say: I will save this in my 'mdadm'
> mailbox, and have a look through that mailbox next time I'm working on
> improvements to mdadm).
Great, thanks!
Best regards,
Anton
--
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge, CB2 3QH, UK
Linux NTFS maintainer, http://www.linux-ntfs.org/
* Re: How to grow RAID1 mirror on top of LVM?
From: Lars Täuber @ 2008-04-28 14:25 UTC
To: Neil Brown; +Cc: Anton Altaparmakov, linux-raid
Hello Neil,
Neil Brown <neilb@suse.de> wrote:
> On Thursday March 13, aia21@cam.ac.uk wrote:
> >
> > Is there a better way to do this? I am hoping someone will tell me to
> > use option blah to utility foo that will do this for me without having
> > to break the mirror twice and resync each time. (-;
>
> Sorry, but no. This mode of operation was never envisaged for md.
> I would always put the md/raid1 devices below the LVM.
could you explain in a few words what in the design prevents us from growing a raid1 on a grown lvm?
We wanted to do the same:
multiple RAID1 (aoe targets)
on top of
multiple LVMs
on top of
gigantic RAID6
Thanks
Lars
--
Informationstechnologie
Berlin-Brandenburgische Akademie der Wissenschaften
Jägerstrasse 22-23 10117 Berlin
Tel.: +49 30 20370-352 http://www.bbaw.de
* RE: How to grow RAID1 mirror on top of LVM?
From: David Lethe @ 2008-04-28 15:09 UTC
To: Lars Täuber, Neil Brown; +Cc: Anton Altaparmakov, linux-raid
-----Original Message-----
From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Lars Täuber
Sent: Monday, April 28, 2008 9:26 AM
To: Neil Brown
Cc: Anton Altaparmakov; linux-raid@vger.kernel.org
Subject: Re: How to grow RAID1 mirror on top of LVM?
Hello Neil,
Neil Brown <neilb@suse.de> wrote:
> On Thursday March 13, aia21@cam.ac.uk wrote:
> >
> > Is there a better way to do this? I am hoping someone will tell me to
> > use option blah to utility foo that will do this for me without having
> > to break the mirror twice and resync each time. (-;
>
> Sorry, but no. This mode of operation was never envisaged for md.
> I would always put the md/raid1 devices below the LVM.
could you explain in a few words what in the design prevents us from growing a raid1 on a grown lvm?
We wanted to do the same:
multiple RAID1 (aoe targets)
on top of
multiple LVMs
on top of
gigantic RAID6
Thanks
Lars
-----------------
Lars:
Even if you *could* do this, your design would still be an awful idea. You'd effectively have a random I/O filesystem. Any write on a RAID1 disk would generate writes on all of the other disks. Even with a small number of RAID1 targets and default journaling, I wouldn't be surprised if a single application write translated into at least a dozen physical disk writes.
What if (when) both the RAID1 and RAID6 became corrupted? How would you proceed? How would you propose the md engine deal with this? What if you lose a disk on the RAID6 while expanding the LVM volume and have a bad sector on the remaining disks, or fsck shows filesystem corruption while you are expanding an md volume?
Maybe you are starting to get the point. What is preventing the md layer from doing what you want is that a heck of a lot of code would have to be written, and the filesystem and md layers would have to be tightly coupled into a single entity. This would have to be done not so much to facilitate what you want ... but to maintain and safely recover data integrity when things go bad. The layered architecture of Linux has benefits from the developers' perspective, because they don't have to worry about such things. The downside is that if you want a robust storage pool that gives you such flexibility, then you need to use external RAID or go with another O/S.
* Re: How to grow RAID1 mirror on top of LVM?
From: Lars Täuber @ 2008-04-29 7:55 UTC
To: David Lethe; +Cc: Neil Brown, Anton Altaparmakov, linux-raid
Hello David!
"David Lethe" <david@santools.com> wrote:
> Lars:
> Even if you *could* do this, your design would still be an awful idea. You'd effectively have a random I/O filesystem. Any write on a RAID1 disk would generate writes on all of the other disks. Even with a small number of RAID1 targets and default journaling, I wouldn't be surprised if a single application write translated into at least a dozen physical disk writes.
>
> What if (when) both the RAID1 and RAID6 became corrupted? How would you proceed? How would you propose the md engine deal with this? What if you lose a disk on the RAID6 while expanding the LVM volume and have a bad sector on the remaining disks, or fsck shows filesystem corruption while you are expanding an md volume?
>
> Maybe you are starting to get the point.
No, sorry, I really don't understand. But maybe that is because I didn't describe in enough detail what our setup looks like.
We have two huge RAID6 systems here, connected through a 10G Ethernet switch. They are equal in size.
On these RAID6 arrays we create LVs of equal size and want them to be the two halves of a RAID1:
 RAID6        RAID6
monosan       duosan
    LV          LV       (multiple of)
     \          /         10GE
        RAID1
          |               10GE
         aoe
          |               1GE
        server            (multiple of)
> What is preventing the md layer from doing what you want is that a heck of a lot of code would have to be written,
But could you tell me what in the current "design" prevents us from resizing the LVs and the RAID1 on top?
If I understand it correctly, the RAID information is written at the end of each accessible partition/LV.
What is different about an extended LV compared to a partition that is replaced with a bigger one?
[...]
Thanks
Lars
--
Informationstechnologie
Berlin-Brandenburgische Akademie der Wissenschaften
Jägerstrasse 22-23 10117 Berlin
Tel.: +49 30 20370-352 http://www.bbaw.de
* RE: How to grow RAID1 mirror on top of LVM?
From: David Lethe @ 2008-04-29 13:21 UTC
To: Lars Täuber; +Cc: Neil Brown, Anton Altaparmakov, linux-raid
-----Original Message-----
From: Lars Täuber [mailto:taeuber@bbaw.de]
Sent: Tuesday, April 29, 2008 2:56 AM
To: David Lethe
Cc: Neil Brown; Anton Altaparmakov; linux-raid@vger.kernel.org
Subject: Re: How to grow RAID1 mirror on top of LVM?
Hello David!
"David Lethe" <david@santools.com> wrote:
> Lars:
> Even if you *could* do this, your design would still be an awful idea. You'd effectively have a random I/O filesystem. Any write on a RAID1 disk would generate writes on all of the other disks. Even with a small number of RAID1 targets and default journaling, I wouldn't be surprised if a single application write translated into at least a dozen physical disk writes.
>
> What if (when) both the RAID1 and RAID6 became corrupted? How would you proceed? How would you propose the md engine deal with this? What if you lose a disk on the RAID6 while expanding the LVM volume and have a bad sector on the remaining disks, or fsck shows filesystem corruption while you are expanding an md volume?
>
> Maybe you are starting to get the point.
No, sorry, I really don't understand. But maybe that is because I didn't describe in enough detail what our setup looks like.
We have two huge RAID6 systems here, connected through a 10G Ethernet switch. They are equal in size.
On these RAID6 arrays we create LVs of equal size and want them to be the two halves of a RAID1:
 RAID6        RAID6
monosan       duosan
    LV          LV       (multiple of)
     \          /         10GE
        RAID1
          |               10GE
         aoe
          |               1GE
        server            (multiple of)
> What is preventing the md layer from doing what you want is that a heck of a lot of code would have to be written,
But could you tell me what in the current "design" prevents us from resizing the LVs and the RAID1 on top?
If I understand it correctly, the RAID information is written at the end of each accessible partition/LV.
What is different about an extended LV compared to a partition that is replaced with a bigger one?
[...]
Thanks
Lars
==================================
The drawing was helpful, as I interpreted your architecture differently. I thought you had something like
        RAID6
        /   \
       /     \
   RAID1     RAID1
Still, you could get into a great deal of inefficiency. What are the block/chunk sizes in RAID6, RAID1, and your file system, and how is the journaling set up?
How many disks are in the RAID1, and what is the average I/O size that your application generates?
Let's say you do a 4KB write in the file system. Factor in all of the above and see how many I/Os get generated on each disk. If you are write-intensive, then consider how much less I/O will be generated if you reduced the sizes of the 2 RAID6 arrays by 2 disks each, added a RAID1 on each system, and used the RAID1 for the journal. The journal is write-intensive, and RAID1 is the best architecture for a journal. Also you can set correct and matching block sizes for the journaling. RAID6 has a huge penalty on writes, so the more writes you can move off the RAID6 md, the better.
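For example, if the filesystem on top were XFS, an external log on a
dedicated RAID1 would look roughly like this (device names are
illustrative):
mkfs.xfs -l logdev=/dev/md_journal,size=64m /dev/md_data   # external log on the RAID1
mount -o logdev=/dev/md_journal /dev/md_data /mnt/data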
As for the question about resizing: I was under the impression you had built it differently, so forget that response. There is nothing in your architecture that will prevent you from resizing.
David
* Re: How to grow RAID1 mirror on top of LVM?
From: Neil Brown @ 2008-05-02 3:14 UTC
To: Lars Täuber; +Cc: Anton Altaparmakov, linux-raid
On Monday April 28, taeuber@bbaw.de wrote:
> Hello Neil,
>
> Neil Brown <neilb@suse.de> wrote:
> > On Thursday March 13, aia21@cam.ac.uk wrote:
> > >
> > > Is there a better way to do this? I am hoping someone will tell me to
> > > use option blah to utility foo that will do this for me without having
> > > to break the mirror twice and resync each time. (-;
> >
> > Sorry, but no. This mode of operation was never envisaged for md.
> > I would always put the md/raid1 devices below the LVM.
>
> could you explain in a few words what in the design prevents us from growing a raid1 on a grown lvm?
By default, the metadata for an md array is stored near the end of
each device. If you make the device larger, you lose the metadata.
This could be addressed for on-line resizing by having some protocol
whereby the LVM layer tells whoever is using it that it is about to
become larger, so that the metadata can be updated and moved, but that
is probably more hassle than it is worth.
If you use version 1.1 or 1.2 metadata, the metadata is stored at the
start of the device, so it doesn't get lost. However the metadata has
recorded in it the amount of usable space on the device. When you
make the device bigger you would need to update this number.
There is currently no way to update this for an active array.
You can stop the array, and then re-assemble it with
--update=devicesize
this will update the field in the metadata which records the size of
each device. You will then be able to grow the array to make use of
all the space.
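Roughly, using the md3 array from the original report (the member
device names are illustrative):
mdadm --stop /dev/md3
mdadm --assemble /dev/md3 --update=devicesize /dev/vg/volume1 /dev/vg/volume2
mdadm --grow /dev/md3 --size max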
It might not be too hard to make it possible to tell md that devices
have grown.... maybe one day :-)
NeilBrown
> We wanted to do the same:
>
>
> multiple RAID1 (aoe targets)
> on top of
> multiple LVMs
> on top of
> gigantic RAID6
>
>
> Thanks
> Lars
> --
> Informationstechnologie
> Berlin-Brandenburgische Akademie der Wissenschaften
> Jägerstrasse 22-23 10117 Berlin
> Tel.: +49 30 20370-352 http://www.bbaw.de
* Re: How to grow RAID1 mirror on top of LVM?
From: Lars Täuber @ 2008-05-02 7:23 UTC
To: Neil Brown; +Cc: Anton Altaparmakov, linux-raid
Hello Neil,
Neil Brown <neilb@suse.de> wrote:
> If you use version 1.1 or 1.2 metadata, the metadata is stored at the
> start of the device, so it doesn't get lost. However the metadata has
> recorded in it the amount of usable space on the device. When you
> make the device bigger you would need to update this number.
> There is currently no way to update this for an active array.
>
> You can stop the array, and then re-assemble it with
> --update=devicesize
>
> this will update the field in the metadata which records the size of
> each device. You will then be able to grow the array to make use of
> all the space.
this is just what I needed to know. Stopping the array for this short time is not an issue.
Many thanks!
Lars
--
Informationstechnologie
Berlin-Brandenburgische Akademie der Wissenschaften
Jägerstrasse 22-23 10117 Berlin
Tel.: +49 30 20370-352 http://www.bbaw.de
* Re: How to grow RAID1 mirror on top of LVM?
From: Russ Hammer @ 2008-05-02 15:06 UTC
To: Neil Brown; +Cc: linux-raid
On Fri, 2008-05-02 at 13:14 +1000, Neil Brown wrote:
> On Monday April 28, taeuber@bbaw.de wrote:
> > Hello Neil,
> >
> > Neil Brown <neilb@suse.de> wrote:
> > > On Thursday March 13, aia21@cam.ac.uk wrote:
> > > >
> > > > Is there a better way to do this? I am hoping someone will tell me to
> > > > use option blah to utility foo that will do this for me without having
> > > > to break the mirror twice and resync each time. (-;
> > >
> > > Sorry, but no. This mode of operation was never envisaged for md.
> > > I would always put the md/raid1 devices below the LVM.
> >
> > could you explain in a few words what in the design prevents us from growing a raid1 on a grown lvm?
>
> By default, the metadata for an md array is stored near the end of
> each device. If you make the device larger, you lose the metadata.
> This could be addressed for on-line resizing by having some protocol
> whereby the LVM layer tells whoever is using it that it is about to
> become larger, so that the metadata can be updated and moved, but that
> is probably more hassle than it is worth.
>
> If you use version 1.1 or 1.2 metadata, the metadata is stored at the
> start of the device, so it doesn't get lost. However the metadata has
> recorded in it the amount of usable space on the device. When you
> make the device bigger you would need to update this number.
> There is currently no way to update this for an active array.
>
> You can stop the array, and then re-assemble it with
> --update=devicesize
>
> this will update the field in the metadata which records the size of
> each device. You will then be able to grow the array to make use of
> all the space.
>
> It might not be too hard to make it possible to tell md that devices
> have grown.... maybe one day :-)
>
> NeilBrown
I'm concerned: I'm currently planning to swap two 250GB drives for
750GB drives in a RAID1 array using 2.6.9 RHEL 4.6 (mdadm-1.12.0).
My plan basically was:
# remove one small disk
/sbin/mdadm /dev/md0 --fail /dev/sdb1
/sbin/mdadm /dev/md0 --remove /dev/sdb1
# shutdown and swap in large disk
# (with larger partition for the RAID1 component)
# add large drive into array
/sbin/mdadm /dev/md0 --add /dev/sdb1
# Allow the array to resync
# remove the remaining small drive
/sbin/mdadm /dev/md0 --fail /dev/sda1
/sbin/mdadm /dev/md0 --remove /dev/sda1
# Grow the array
/sbin/mdadm -G /dev/md0 -z max
# shutdown and swap in second large disk
# (with larger partition for the RAID1 component)
# add in the second large drive
/sbin/mdadm /dev/md0 --add /dev/sda1
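# (between steps I plan to watch the resync progress with
#  "cat /proc/mdstat" or "mdadm --detail /dev/md0")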
My concern (based on this discussion) is that this will fail because I
am changing the size of the partition underlying the RAID1 array, much
like the LVM discussion above, while using a version 0.90 superblock.
Do I have a legitimate concern??
Thanks,
Russ Hammer
* Re: How to grow RAID1 mirror on top of LVM?
From: Neil Brown @ 2008-05-04 11:20 UTC
To: Russ Hammer; +Cc: linux-raid
On Friday May 2, russ@perneus.com wrote:
>
> My plan basically was:
>
> # remove one small disk
> /sbin/mdadm /dev/md0 --fail /dev/sdb1
> /sbin/mdadm /dev/md0 --remove /dev/sdb1
>
> # shutdown and swap in large disk
> # (with larger partition for the RAID1 component)
> # add large drive into array
> /sbin/mdadm /dev/md0 --add /dev/sdb1
>
> # Allow the array to resync
> # remove the remaining small drive
> /sbin/mdadm /dev/md0 --fail /dev/sda1
> /sbin/mdadm /dev/md0 --remove /dev/sda1
>
> # Grow the array
> /sbin/mdadm -G /dev/md0 -z max
>
> # shutdown and swap in second large disk
> # (with larger partition for the RAID1 component)
> # add in the second large drive
> /sbin/mdadm /dev/md0 --add /dev/sda1
>
> My concern (based on this discussion) is that this will fail because I
> am changing the size of the partition underlying the RAID1 array, much
> > like the LVM discussion above, while using a version 0.90 superblock.
>
> Do I have a legitimate concern??
Your recipe is perfect. You need not be concerned.
You are *not* changing the size of the partition underlying the RAID1
array. When you change the size of a partition, it is not part of the
array at that time. It has no superblock on it, so there is no room
to be confused.
If you were to make a small partition on the new sdb, add that to the
array, then change the partition table to make sdb1 bigger, that would
cause confusion.
However as you are making sdb1 nice and big before including it in the
array, everything should work perfectly.
The only possible problem area in your recipe is that if the drives
are not actually identical (not all 750GB drives have the same number
of blocks), the array you get with the -G command might be too big to
fit in the partition you create on the new sda. However if the drives
do have exactly the same number of sectors, this will not be a problem.
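To be safe you can compare the raw partition sizes up front, e.g.:
blockdev --getsize64 /dev/sda1
blockdev --getsize64 /dev/sdb1
and make sure the second new partition is at least as large as the
first.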
NeilBrown
* Re: How to grow RAID1 mirror on top of LVM?
From: Lars Täuber @ 2008-05-05 7:10 UTC
To: Neil Brown; +Cc: Russ Hammer, linux-raid
Hi there!
Neil Brown <neilb@suse.de> wrote:
> The only possible problem area in your recipe is that if the drives
> are not actually identical (not all 750GB drives have the same number
> of blocks), the array you get with the -G command might be too big to
> fit in the partition you create on the new sda. However if the drives
> do have exactly the same number of sectors, this will not be a problem.
I think this problem does not arise if you grow the array only after including the second bigger disk, because then all the involved partitions and their sizes are known.
Am I right?
Greetings
Lars
--
Informationstechnologie
Berlin-Brandenburgische Akademie der Wissenschaften
Jägerstrasse 22-23 10117 Berlin
Tel.: +49 30 20370-352 http://www.bbaw.de
* Re: How to grow RAID1 mirror on top of LVM?
From: Russ Hammer @ 2008-05-06 12:34 UTC
To: Neil Brown; +Cc: linux-raid
On Sun, 2008-05-04 at 21:20 +1000, Neil Brown wrote:
> On Friday May 2, russ@perneus.com wrote:
> >
> > My plan basically was:
> >
> > # remove one small disk
> > /sbin/mdadm /dev/md0 --fail /dev/sdb1
> > /sbin/mdadm /dev/md0 --remove /dev/sdb1
> >
> > # shutdown and swap in large disk
> > # (with larger partition for the RAID1 component)
> > # add large drive into array
> > /sbin/mdadm /dev/md0 --add /dev/sdb1
> >
> > # Allow the array to resync
> > # remove the remaining small drive
> > /sbin/mdadm /dev/md0 --fail /dev/sda1
> > /sbin/mdadm /dev/md0 --remove /dev/sda1
> >
> > # Grow the array
> > /sbin/mdadm -G /dev/md0 -z max
> >
> > # shutdown and swap in second large disk
> > # (with larger partition for the RAID1 component)
> > # add in the second large drive
> > /sbin/mdadm /dev/md0 --add /dev/sda1
> >
> > My concern (based on this discussion) is that this will fail because I
> > am changing the size of the partition underlying the RAID1 array, much
> > like the LVM discussion above, while using a version 0.90 superblock.
> >
> > Do I have a legitimate concern??
>
> Your recipe is perfect. You need not be concerned.
>
> You are *not* changing the size of the partition underlying the RAID1
> array. When you change the size of a partition, it is not part of the
> array at that time. It has no superblock on it, so there is no room
> to be confused.
>
> If you were to make a small partition on the new sdb, add that to the
> array, then change the partition table to make sdb1 bigger, that would
> cause confusion.
> However as you are making sdb1 nice and big before including it in the
> array, everything should work perfectly.
>
> The only possible problem area in your recipe is that if the drives
> are not actually identical (not all 750GB drives have the same number
> of blocks), the array you get with the -G command might be too big to
> fit in the partition you create on the new sda. However if the drives
> do have exactly the same number of sectors, this will not be a problem.
>
> NeilBrown
Thanks, I see the distinction now.
Russ