* migrating raid-1 to different drive geometry ?
@ 2005-01-24 14:59 rfu
  2005-01-24 22:47 ` Neil Brown
From: rfu @ 2005-01-24 14:59 UTC (permalink / raw)
  To: linux-raid

Hi,

I'm running Fedora Core 1 on a server with 2 identical
60 GB ATA drives. These drives are organized in a RAID1
with 4 partitions.

I now want to replace both drives with two different
(but also identical) ATA drives with about 80 to 120 GB
capacity each.

How can the existing RAID setup be moved to the new disks
without data loss?

I guess it must be something like this:

1) physically remove first old drive
2) physically add first new drive
3) re-create partitions on new drive
4) run raidhotadd for each partition
5) wait until all partitions synced
6) repeat with second drive

The big question is: since the drive geometry will definitely be different
between the old 60GB and the new 80GB drive(s), how do the new partitions
have to be created on the new drive?
- Do they have to have exactly the same number of blocks?
- May they be bigger?

Are there better strategies for such a migration?

Thanks in advance.

rainer. 



* Re: migrating raid-1 to different drive geometry ?
  2005-01-24 14:59 migrating raid-1 to different drive geometry ? rfu
@ 2005-01-24 22:47 ` Neil Brown
  2005-01-25  0:35   ` Robin Bowes
From: Neil Brown @ 2005-01-24 22:47 UTC (permalink / raw)
  To: rfu; +Cc: linux-raid


On Monday January 24, rfu@kaneda.iguw.tuwien.ac.at wrote:
> How can the existing RAID setup be moved to the new disks
> without data loss?
> 
> I guess it must be something like this:
> 
> 1) physically remove first old drive
> 2) physically add first new drive
> 3) re-create partitions on new drive
> 4) run raidhotadd for each partition
> 5) wait until all partitions synced
> 6) repeat with second drive

Sounds good.
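
Roughly, one pass of that might look like the following.  This is just a
sketch: I'm assuming the surviving old drive is /dev/hda, the new drive
shows up as /dev/hdc, and the four arrays are /dev/md0 to /dev/md3 on
partitions 1 to 4, so adjust for your real layout.

   # copy the partition table from the surviving old drive (same-size
   # partitions are fine; make them larger by hand if you prefer)
   sfdisk -d /dev/hda | sfdisk /dev/hdc

   # add each new partition back into its array
   raidhotadd /dev/md0 /dev/hdc1    # or: mdadm /dev/md0 --add /dev/hdc1
   raidhotadd /dev/md1 /dev/hdc2
   raidhotadd /dev/md2 /dev/hdc3
   raidhotadd /dev/md3 /dev/hdc4

   # wait for the resync to finish before touching the second drive
   cat /proc/mdstat
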
> 
> The big question is: since the drive geometry will definitely be different
> between the old 60GB and the new 80GB drive(s), how do the new partitions
> have to be created on the new drive?
> - Do they have to have exactly the same number of blocks?
No.
> - May they be bigger?
Yes (they cannot be smaller).

However, making the partitions bigger will not make the arrays bigger.

If you are using a recent 2.6 kernel and mdadm 1.8.0, you can grow the
array with
   mdadm --grow /dev/mdX --size=max

You will then need to convince the filesystem in the array to make use
of the extra space.  Many filesystems do support such growth.  Some
even support on-line growth.
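
With ext3, for example, that would be something like this (again only a
sketch; I'm assuming /dev/md3 holds the filesystem and it is normally
mounted on /data):

   umount /data
   e2fsck -f /dev/md3     # resize2fs wants a clean check first
   resize2fs /dev/md3     # no size argument = grow to fill the device
   mount /dev/md3 /data

   # or, if your kernel and tools support on-line ext3 growth:
   # ext2online /dev/md3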

NeilBrown


* Re: migrating raid-1 to different drive geometry ?
  2005-01-24 22:47 ` Neil Brown
@ 2005-01-25  0:35   ` Robin Bowes
  2005-01-25  0:53     ` Neil Brown
  2005-01-25  0:54     ` Mike Hardy
From: Robin Bowes @ 2005-01-25  0:35 UTC (permalink / raw)
  To: linux-raid

Neil Brown wrote:
> If you are using a recent 2.6 kernel and mdadm 1.8.0, you can grow the
> array with
>    mdadm --grow /dev/mdX --size=max

Neil,

Is this just for RAID1? Or will it work for RAID5 too?

R.
-- 
http://robinbowes.com



* Re: migrating raid-1 to different drive geometry ?
  2005-01-25  0:35   ` Robin Bowes
@ 2005-01-25  0:53     ` Neil Brown
  2005-01-25  0:54     ` Mike Hardy
From: Neil Brown @ 2005-01-25  0:53 UTC (permalink / raw)
  To: Robin Bowes; +Cc: linux-raid

On Tuesday January 25, robin-lists@robinbowes.com wrote:
> Neil Brown wrote:
> > If you are using a recent 2.6 kernel and mdadm 1.8.0, you can grow the
> > array with
> >    mdadm --grow /dev/mdX --size=max
> 
> Neil,
> 
> Is this just for RAID1? Or will it work for RAID5 too?

 --grow --size=max

should work for raid 1,5,6.

NeilBrown


* Re: migrating raid-1 to different drive geometry ?
  2005-01-25  0:35   ` Robin Bowes
  2005-01-25  0:53     ` Neil Brown
@ 2005-01-25  0:54     ` Mike Hardy
  2005-01-25  8:22       ` Robin Bowes
From: Mike Hardy @ 2005-01-25  0:54 UTC (permalink / raw)
  To: linux-raid


I'd missed the mdadm --grow feature as well, so I checked into it.

It is only capable of increasing size on raid5, not component count. The
specific use case given as an example is that you slowly retire component
drives, replacing each with a larger one. When all components are the
larger size, you can grow the raid5 array to use the full size of the
devices, followed by a filesystem expansion to use the grown array.

That makes sense, given the disk layout of raid5 - it's not hard to add
more stripes at the end of the components, but adding new components
requires each stripe to change significantly.

To grow the component count on raid5 you have to use raidreconf, which
can work, but will toast the array if anything goes bad. I have personally
had it work, and not work, in different instances. The failures were not
necessarily raidreconf's fault; the point is that it is not fault
tolerant. It starts at the first stripe, laying things out the new way,
and if it doesn't finish, and finish correctly, you are left in an
irretrievably inconsistent state.

raid1 can grow components with mdadm --grow though.

Cool trick

-Mike

Robin Bowes wrote:
> Neil Brown wrote:
> 
>> If you are using a recent 2.6 kernel and mdadm 1.8.0, you can grow the
>> array with
>>    mdadm --grow /dev/mdX --size=max
> 
> 
> Neil,
> 
> Is this just for RAID1? Or will it work for RAID5 too?
> 
> R.


* Re: migrating raid-1 to different drive geometry ?
  2005-01-25  0:54     ` Mike Hardy
@ 2005-01-25  8:22       ` Robin Bowes
  2005-01-25 18:13         ` Mike Hardy
From: Robin Bowes @ 2005-01-25  8:22 UTC (permalink / raw)
  To: linux-raid

Mike Hardy wrote:
> 
> I'd missed the mdadm --grow feature as well, so I checked into it.
> 
> It is only capable of increasing size on raid5, not component count. The
> specific use case given as an example is that you slowly retire component
> drives, replacing each with a larger one. When all components are the
> larger size, you can grow the raid5 array to use the full size of the
> devices, followed by a filesystem expansion to use the grown array.
> 
> That makes sense, given the disk layout of raid5 - it's not hard to add
> more stripes at the end of the components, but adding new components
> requires each stripe to change significantly.
> 
> To grow the component count on raid5 you have to use raidreconf, which
> can work, but will toast the array if anything goes bad. I have personally
> had it work, and not work, in different instances. The failures were not
> necessarily raidreconf's fault; the point is that it is not fault
> tolerant. It starts at the first stripe, laying things out the new way,
> and if it doesn't finish, and finish correctly, you are left in an
> irretrievably inconsistent state.
> 

Bah, too bad.

I don't need it yet, but at some stage I'd like to be able to add
another 250GB drive (or drives) to my array and grow the array to use the
additional space in a safe/failsafe way.

Perhaps by the time I come to need it, this might be possible?

R.
-- 
http://robinbowes.com



* Re: migrating raid-1 to different drive geometry ?
  2005-01-25  8:22       ` Robin Bowes
@ 2005-01-25 18:13         ` Mike Hardy
From: Mike Hardy @ 2005-01-25 18:13 UTC (permalink / raw)
  To: Robin Bowes; +Cc: linux-raid



Robin Bowes wrote:
> Mike Hardy wrote:
>> To grow the component count on raid5 you have to use raidreconf, which
>> can work, but will toast the array if anything goes bad. I have personally
>> had it work, and not work, in different instances. The failures were not
>> necessarily raidreconf's fault; the point is that it is not fault
>> tolerant. It starts at the first stripe, laying things out the new way,
>> and if it doesn't finish, and finish correctly, you are left in an
>> irretrievably inconsistent state.
>>
> 
> Bah, too bad.
> 
> I don't need it yet, but at some stage I'd like to be able to add
> another 250GB drive (or drives) to my array and grow the array to use the
> additional space in a safe/failsafe way.
> 
> Perhaps by the time I come to need it, this might be possible?

Well, I want to be clear here, as whoever wrote raidreconf deserves
some respect, and I don't want to appear to be disparaging it.

raidreconf works. I'm not aware of any bugs in it.

Further, if mdadm were to implement the feature of adding components to a
raid5 array, I'm guessing it would look exactly the same as raidreconf,
simply because of the work it has to do (re-configuring each stripe,
moving parity blocks and data blocks around, etc.). It's just the way the
raid5 disk layout is.

So, since raidreconf does work, it's definitely possible now, but you
have to make absolutely, amazingly sure of three things:

1) the component size you add is at least as large as the rest of the 
components (it'll barf at the end if not)
2) the old and new configurations you feed raidreconf are perfect (or 
what happens is undefined)
3) you have absolutely no bad blocks on any component, as it will read
each block on each component and write each block on each component
(that's a tall order these days; if it hits a bad block, what can it do?
A quick read test is sketched below this list)

If any of those things go bad, your array goes bad, but it's not the
algorithm's fault, as far as I can tell. It's constrained by the
problem's requirements. So I'd add:

4) you have a perfect, fresh backup of the array ;-)
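
For point 3, a full read pass over each component before you start will at
least flush out unreadable sectors.  Just a sketch; /dev/hdc1 stands in for
whichever partitions your array actually uses:

   # read-only surface scan; any error here is one raidreconf would hit too
   badblocks -sv /dev/hdc1

   # or simply read everything and watch for I/O errors
   dd if=/dev/hdc1 of=/dev/null bs=1M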

Honestly, I've done it, and it does work; it's just touchy. You can
practice with it on loop devices (check the archives for the raid5 loop
array creator and destructor script I posted a week or so back) if you
want to see it in action.
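
For anyone who wants to experiment without risking real disks, the
loop-device route goes roughly like this.  Purely a sketch: sizes, device
names, and the exact raidreconf options are from memory, so check the man
pages before trusting any of it.

   # four small backing files attached to loop devices
   for i in 0 1 2 3; do
       dd if=/dev/zero of=/tmp/rdisk$i bs=1M count=64
       losetup /dev/loop$i /tmp/rdisk$i
   done

   # build a 3-disk raid5 on the first three and put some test data on it
   mdadm --create /dev/md9 --level=5 --raid-devices=3 \
         /dev/loop0 /dev/loop1 /dev/loop2
   mkfs.ext3 /dev/md9

   # stop the array, describe the old (3-disk) and new (4-disk) layouts in
   # two raidtab files, then let raidreconf re-lay-out the stripes
   mdadm --stop /dev/md9
   raidreconf -o /etc/raidtab.old -n /etc/raidtab.new -m /dev/md9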

-Mike

