linux-raid.vger.kernel.org archive mirror
* RAID 5 Grow
@ 2005-07-15 19:50 Forrest Taylor
  2005-07-15 20:16 ` Simon Valiquette
  2005-07-16  4:54 ` Neil Brown
  0 siblings, 2 replies; 7+ messages in thread
From: Forrest Taylor @ 2005-07-15 19:50 UTC (permalink / raw)
  To: Linux RAID

I am doing some RAID scenarios on a single disc (testing purposes) on
RHEL4.  I have some partitions as follows:

/dev/hda5  100M
/dev/hda6  200M
/dev/hda7  200M
/dev/hda8  200M
/dev/hda9  200M

I create a RAID 5 set with /dev/hda{5,6,7,8}.  I fail/remove /dev/hda5
and add /dev/hda9, at which point I can grow the RAID.  Running:

mdadm -G /dev/md0 -z max

will increase the RAID size; however, it sets off an infinite resync.  I
have tested with mdadm-1.6.0-2, and with mdadm-1.12.0-1 rebuilt from the
source rpm.
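
For reference, the full sequence was roughly the following (long option
forms shown for clarity; exact spellings may differ between mdadm releases):

mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/hda5 /dev/hda6 /dev/hda7 /dev/hda8
mdadm /dev/md0 --fail /dev/hda5 --remove /dev/hda5   # drop the 100M member
mdadm /dev/md0 --add /dev/hda9                       # add the 200M member
# wait for the rebuild onto hda9 to finish (watch /proc/mdstat), then:
mdadm --grow /dev/md0 --size=max                     # same as -G /dev/md0 -z max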

Here is a snippet of the messages:

Jul 13 09:33:36 station8 kernel: md: syncing RAID array md0
Jul 13 09:33:36 station8 kernel: md: minimum _guaranteed_ reconstruction
speed: 1000 KB/sec/disc.
Jul 13 09:33:36 station8 kernel: md: using maximum available idle IO
bandwidth (but not more than 200000 KB/sec) for reconstruction.
Jul 13 09:33:36 station8 kernel: md: using 128k window, over a total of
104320 blocks.
Jul 13 09:33:36 station8 kernel: md: resuming recovery of md0 from
checkpoint.
Jul 13 09:33:36 station8 kernel: md: md0: sync done.
Jul 13 09:33:36 station8 kernel: RAID5 conf printout:
Jul 13 09:33:36 station8 kernel:  --- rd:4 wd:4 fd:0
Jul 13 09:33:36 station8 kernel:  disk 0, o:1, dev:hda6
Jul 13 09:33:36 station8 kernel:  disk 1, o:1, dev:hda7
Jul 13 09:33:36 station8 kernel:  disk 2, o:1, dev:hda8
Jul 13 09:33:36 station8 kernel:  disk 3, o:1, dev:hda9

These messages print out many times per second, and I have to reboot the
system to get them to stop filling /var/log/messages.  Once the system
reboots and resyncs, I don't see any further problems.  Any idea where the
problem might be?

Thanks,

Forrest


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: RAID 5 Grow
  2005-07-15 19:50 Forrest Taylor
@ 2005-07-15 20:16 ` Simon Valiquette
  2005-07-15 20:23   ` Forrest Taylor
  2005-07-15 20:39   ` Forrest Taylor
  2005-07-16  4:54 ` Neil Brown
  1 sibling, 2 replies; 7+ messages in thread
From: Simon Valiquette @ 2005-07-15 20:16 UTC (permalink / raw)
  To: linux-raid

Forrest Taylor wrote:
> I am doing some RAID scenarios on a single disc (testing purposes) on
> RHEL4.  I have some partitions as follows:
> 
> /dev/hda5  100M
> /dev/hda6  200M
> /dev/hda7  200M
> /dev/hda8  200M
> /dev/hda9  200M
> 
> I create a RAID 5 set with /dev/hda{5,6,7,8}.  I fail/remove /dev/hda5
> and add /dev/hda9, at which point I can grow the RAID.  Running:
> 
> mdadm -G /dev/md0 -z max
> 

   Have you copied the data from hda5 to hda9 with dd?  If I remember
correctly, growing a RAID 5 that way is done by creating the missing
stripes at the end of each disk.  If the array is already in degraded
mode, I am not sure whether mdadm is able to recover from that.

Simon

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: RAID 5 Grow
  2005-07-15 20:16 ` Simon Valiquette
@ 2005-07-15 20:23   ` Forrest Taylor
  2005-07-15 20:39   ` Forrest Taylor
  1 sibling, 0 replies; 7+ messages in thread
From: Forrest Taylor @ 2005-07-15 20:23 UTC (permalink / raw)
  To: Simon Valiquette; +Cc: Linux RAID

On Fri, 2005-07-15 at 14:16, Simon Valiquette wrote:
> Forrest Taylor wrote:
> > I am doing some RAID scenarios on a single disc (testing purposes) on
> > RHEL4.  I have some partitions as follows:
> > 
> > /dev/hda5  100M
> > /dev/hda6  200M
> > /dev/hda7  200M
> > /dev/hda8  200M
> > /dev/hda9  200M
> > 
> > I create a RAID 5 set with /dev/hda{5,6,7,8}.  I fail/remove /dev/hda5
> > and add /dev/hda9, at which point I can grow the RAID.  Running:
> > 
> > mdadm -G /dev/md0 -z max
> > 
> 
>    Have you copied the data from hda5 to hda9 with dd?  If I remember
> correctly, growing a RAID 5 that way is done by creating the missing
> stripes at the end of each disk.  If the array is already in degraded
> mode, I am not sure whether mdadm is able to recover from that.

I pulled hda5 and added hda9, then I waited for it to resync.  It was
not in degraded mode when I tried to grow the RAID.  I did not copy any
data from hda5 to hda9.  In fact, I had it mounted, and I checked the
integrity of the data at each step.  I did not lose any data, nor did I
lose the RAID.  I don't think that I tried the grow with /dev/md0
unmounted, so it may be possible that having it mounted caused some
trouble.
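
For what it's worth, confirming that state amounts to something like this
(exact output varies by kernel and mdadm version):

cat /proc/mdstat         # resync/recovery progress; [UUUU] once all members are up
mdadm --detail /dev/md0  # State and Failed Devices show whether the array is degraded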

Thanks,

Forrest

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: RAID 5 Grow
  2005-07-15 20:16 ` Simon Valiquette
  2005-07-15 20:23   ` Forrest Taylor
@ 2005-07-15 20:39   ` Forrest Taylor
  1 sibling, 0 replies; 7+ messages in thread
From: Forrest Taylor @ 2005-07-15 20:39 UTC (permalink / raw)
  To: Linux RAID

I am not certain that I made myself clear, so let me expand on what I was
doing.  I really wanted to grow the RAID after having replaced each of
the RAID devices with a larger partition.  I actually started with
100M partitions.  I failed one of the smaller partitions and added in
one of the larger ones.  After all of the smaller partitions had been
replaced by larger ones, I wanted to grow the RAID so that the RAID
devices would be 200M instead of 100M, thus giving me double the space.
Hopefully, that helps to clarify things.

Forrest


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: RAID 5 Grow
  2005-07-15 19:50 Forrest Taylor
  2005-07-15 20:16 ` Simon Valiquette
@ 2005-07-16  4:54 ` Neil Brown
  1 sibling, 0 replies; 7+ messages in thread
From: Neil Brown @ 2005-07-16  4:54 UTC (permalink / raw)
  To: Forrest Taylor; +Cc: Linux RAID

On Friday July 15, ftaylor@redhat.com wrote:
> I am doing some RAID scenarios on a single disc (testing purposes) on
> RHEL4.  I have some partitions as follows:
> 
> /dev/hda5  100M
> /dev/hda6  200M
> /dev/hda7  200M
> /dev/hda8  200M
> /dev/hda9  200M
> 
> I create a RAID 5 set with /dev/hda{5,6,7,8}.  I fail/remove /dev/hda5
> and add /dev/hda9, at which point I can grow the RAID.  Running:
> 
> mdadm -G /dev/md0 -z max
> 
> will increase the RAID size; however, it sets off an infinite resync.  I
> have tested with mdadm-1.6.0-2, and with mdadm-1.12.0-1 rebuilt from the
> source rpm.

What kernel were you running?  There were problems with looping
resyncs, but I think they have been fixed.

I just repeated your experiment on 2.6.13-rc1-mm1 and didn't get an
infinite loop, but it took less than 1 second to sync the second 200M, whereas
the first 100M took 7 seconds, so I'm a bit worried...

Ahh, found the problem.  The following patch is required to make it
resync properly, and with it the experiment does what is expected.
Thanks for helping me find that.

NeilBrown

Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au>

### Diffstat output
 ./drivers/md/md.c |    1 +
 1 files changed, 1 insertion(+)

diff ./drivers/md/md.c~current~ ./drivers/md/md.c
--- ./drivers/md/md.c~current~	2005-07-16 14:41:19.000000000 +1000
+++ ./drivers/md/md.c	2005-07-16 14:41:21.000000000 +1000
@@ -2580,6 +2580,7 @@ static int update_array_info(mddev_t *md
 			if (avail < ((sector_t)info->size << 1))
 				return -ENOSPC;
 		}
+		mddev->resync_max_sectors =  (sector_t)info->size *2; /*default */
 		rv = mddev->pers->resize(mddev, (sector_t)info->size *2);
 		if (!rv) {
 			struct block_device *bdev;

^ permalink raw reply	[flat|nested] 7+ messages in thread

* RAID 5 Grow
@ 2007-06-23  4:16 Richard Scobie
  2007-06-25 16:39 ` Bill Davidsen
  0 siblings, 1 reply; 7+ messages in thread
From: Richard Scobie @ 2007-06-23  4:16 UTC (permalink / raw)
  To: Linux RAID Mailing List

I will soon be adding another drive of the same size to an existing
3-drive RAID 5 array.

The machine is running Fedora Core 6 with kernel 2.6.20-1.2952.fc6 and 
mdadm 2.5.4, both of which are the latest available Fedora packages.

Is anyone aware of any obvious bugs in either of these that will 
jeopardise this resize?

I can compile and install later versions if required, but I'd rather 
leave the system "standard" if I can.
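
For reference, the resize I have in mind amounts to something like this
(the new device name is an example only; the filesystem step depends on
what is on the array):

mdadm /dev/md0 --add /dev/sdd1             # the new drive (example name only)
mdadm --grow /dev/md0 --raid-devices=4     # reshape the array onto 4 devices
cat /proc/mdstat                           # watch the reshape progress
# once the reshape finishes, grow the filesystem, e.g. resize2fs /dev/md0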

Thanks.

Richard

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: RAID 5 Grow
  2007-06-23  4:16 RAID 5 Grow Richard Scobie
@ 2007-06-25 16:39 ` Bill Davidsen
  0 siblings, 0 replies; 7+ messages in thread
From: Bill Davidsen @ 2007-06-25 16:39 UTC (permalink / raw)
  To: Richard Scobie; +Cc: Linux RAID Mailing List

Richard Scobie wrote:
> I will soon be adding another drive of the same size to an existing
> 3-drive RAID 5 array.
>
> The machine is running Fedora Core 6 with kernel 2.6.20-1.2952.fc6 and 
> mdadm 2.5.4, both of which are the latest available Fedora packages.
>
> Is anyone aware of any obvious bugs in either of these that will 
> jeopardise this resize?
>
> I can compile and install later versions if required, but I'd rather 
> leave the system "standard" if I can.

I did a grow on RAID5 using an earlier FC6 setup, so I don't think you 
will have a problem.

-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979


^ permalink raw reply	[flat|nested] 7+ messages in thread
