* RAID5 -> RAID6 conversion, please help
@ 2011-05-10 23:15 Peter Kovari
  2011-05-10 23:31 ` NeilBrown
  0 siblings, 1 reply; 9+ messages in thread
From: Peter Kovari @ 2011-05-10 23:15 UTC (permalink / raw)
  To: linux-raid

Dear all,

I tried to convert my existing 5-disk RAID5 array to a 6-disk RAID6 array.
This was my existing array:
----------------------------------------------------------------------
/dev/md0:
        Version : 0.90
     Raid Level : raid5
     Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
  Used Dev Size : 1465137152 (1397.26 GiB 1500.30 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent
          State : clean
 Active Devices : 5
Working Devices : 5
         Layout : left-symmetric
     Chunk Size : 512K
         Events : 0.156

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       81        1      active sync   /dev/sdf1
       2       8       33        2      active sync   /dev/sdc1
       3       8       97        3      active sync   /dev/sdg1
       4       8       65        4      active sync   /dev/sde1
----------------------------------------------------------------------

I did the conversion according to "howtos", so:
$ mdadm --add /dev/md0 /dev/sdd1
then:
$ mdadm --grow /dev/md0 --level=6 --raid-devices=6 --backup-file=/mnt/mdadm-raid5-to-raid6.backup

Instead of starting the reshape process, mdadm responded with this:
mdadm: /dev/md0: changed level to 6 (or something like that, I don't remember
the exact words, but it was about changing the level).
mdadm: /dev/md0: Cannot get array details from sysfs

And the array became this:
----------------------------------------------------------------------
/dev/md0:
     Raid Level : raid6
     Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
  Used Dev Size : 1465137152 (1397.26 GiB 1500.30 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent
          State : clean, degraded
 Active Devices : 5
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 1
         Events : 0.170

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       81        1      active sync   /dev/sdf1
       2       8       33        2      active sync   /dev/sdc1
       3       8       97        3      active sync   /dev/sdg1
       4       8       65        4      active sync   /dev/sde1
       5       0        0        5      removed
       6       8       49        -      spare   /dev/sdd1
----------------------------------------------------------------------

At this point I realized that /dev/sdd had previously been a member of another
RAID array in another machine, and although I re-partitioned the disk, I
didn't remove the old superblock. So maybe that was the reason for the mdadm
error. Since the state of /dev/sdd1 was spare, I removed it:

$ mdadm --remove /dev/md0 /dev/sdd1

then cleared the remaining superblock:
$ mdadm --zero-superblock /dev/sdd1
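
(Presumably the result could have been double-checked at this point with the
same device name:
$ mdadm --examine /dev/sdd1
which should report that no md superblock is detected once the zeroing worked.)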

then added it back to the array:
$ mdadm --add /dev/md0 /dev/sdd1

and started the grow process again:
$ mdadm --grow /dev/md0 --level=6 --raid-devices=6 --backup-file=/mnt/mdadm-raid5-to-raid6.backup
mdadm: /dev/md0: no change requested

mdadm reported no change; however, it started rebuilding the array. It's
currently rebuilding:
----------------------------------------------------------------------
/dev/md0:
        Version : 0.90
     Raid Level : raid6
     Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
  Used Dev Size : 1465137152 (1397.26 GiB 1500.30 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent
          State : clean, degraded, recovering
 Active Devices : 5
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric-6
     Chunk Size : 512K
 Rebuild Status : 2% complete
         Events : 0.186

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       81        1      active sync   /dev/sdf1
       2       8       33        2      active sync   /dev/sdc1
       3       8       97        3      active sync   /dev/sdg1
       4       8       65        4      active sync   /dev/sde1
       6       8       49        5      spare rebuilding   /dev/sdd1
----------------------------------------------------------------------
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdd1[6] sde1[4] sdc1[2] sdf1[1] sdg1[3] sdb1[0]
      5860548608 blocks level 6, 512k chunk, algorithm 18 [6/5] [UUUUU_]
      [>....................]  recovery =  2.3% (34438272/1465137152) finish=1074.5min speed=22190K/sec

unused devices: <none>
----------------------------------------------------------------------

mdadm didn't create the backup file, and the process seems too fast to me
for a RAID5->RAID6 conversion.
Please help me to understand what's happening now.

Cheers,
Peter





* Re: RAID5 -> RAID6 conversion, please help
  2011-05-10 23:15 RAID5 -> RAID6 conversion, please help Peter Kovari
@ 2011-05-10 23:31 ` NeilBrown
  2011-05-10 23:39   ` Steven Haigh
  2011-05-11  0:08   ` Peter Kovari
  0 siblings, 2 replies; 9+ messages in thread
From: NeilBrown @ 2011-05-10 23:31 UTC (permalink / raw)
  To: Peter Kovari; +Cc: linux-raid

On Wed, 11 May 2011 01:15:11 +0200 "Peter Kovari" <peter@kovari.priv.hu> wrote:

> [...]
> mdadm didn't create the backup file, and the process seems too fast to me
> for a RAID5->RAID6 conversion.
> Please help me to understand what's happening now.

You have a RAID6 array in a non-standard config where the Q block (the
second parity block) is always on the last device rather than rotated around
the various devices.

The array is simply recovering onto the spare as the 6th drive.

When it finishes you will have a perfectly functional RAID6 array with full
redundancy.  It might perform slightly differently to a standard layout -
I've never performed any measurements to see how differently.

If you want to (after the recovery completes) you could convert to a regular
RAID6 with
  mdadm -G /dev/md0 --layout=normalise   --backup=/some/file/on/a/different/device

but you probably don't have to.
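
(Spelled out with the long options used earlier in the thread, that is roughly:
  mdadm --grow /dev/md0 --layout=normalise --backup-file=/path/on/another/device
where the backup-file path is only a placeholder and should live on a device
that is not part of md0.)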

The old meta on sdd will not have been a problem.

What version of mdadm did you use to try to start the reshape?

NeilBrown

* Re: RAID5 -> RAID6 conversion, please help
  2011-05-10 23:31 ` NeilBrown
@ 2011-05-10 23:39   ` Steven Haigh
  2011-05-11  0:21     ` NeilBrown
  2011-05-11  0:08   ` Peter Kovari
  1 sibling, 1 reply; 9+ messages in thread
From: Steven Haigh @ 2011-05-10 23:39 UTC (permalink / raw)
  To: linux-raid

On 11/05/2011 9:31 AM, NeilBrown wrote:
> When it finishes you will have a perfectly functional RAID6 array with full
> redundancy.  It might perform slightly differently to a standard layout -
> I've never performed any measurements to see how differently.
>
> If you want to (after the recovery completes) you could convert to a regular
> RAID6 with
>    mdadm -G /dev/md0 --layout=normalise   --backup=/some/file/on/a/different/device
>
> but you probably don't have to.
>

This makes me wonder: how can one tell whether the layout is 'normal' or has
the Q blocks all on a single device?

I recently changed my array from RAID5->6. Mine created a backup file
and took just under 40 hours for 4 x 1TB devices. I assume this means
the data was reorganised to the standard RAID6 style? The conversion
ran at about 4-6MB/sec.

Is there any effect of doing a --layout=normalise if the above happened?

-- 
Steven Haigh

Email: netwiz@crc.id.au
Web: http://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299


* RE: RAID5 -> RAID6 conversion, please help
  2011-05-10 23:31 ` NeilBrown
  2011-05-10 23:39   ` Steven Haigh
@ 2011-05-11  0:08   ` Peter Kovari
  1 sibling, 0 replies; 9+ messages in thread
From: Peter Kovari @ 2011-05-11  0:08 UTC (permalink / raw)
  To: linux-raid

> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org
> [mailto:linux-raid-owner@vger.kernel.org] On Behalf Of NeilBrown
> Sent: Wednesday, May 11, 2011 1:32 AM
> To: Peter Kovari
> Cc: linux-raid@vger.kernel.org
> Subject: Re: RAID5 -> RAID6 conversion, please help

> You have a RAID6 array in a non-standard config where the Q block (the
> second parity block) is always on the last device rather than rotated
> around the various devices.

> The array is simply recovering onto the spare as the 6th drive.
> When it finishes you will have a perfectly functional RAID6 array with
> full redundancy.  It might perform slightly differently to a standard
> layout - I've never performed any measurements to see how differently.
> If you want to (after the recovery completes) you could convert to a
> regular RAID6 with
>   mdadm -G /dev/md0 --layout=normalise --backup=/some/file/on/a/different/device
> but you probably don't have to.

Thank you Neil, this explains everything. 

I suppose the layout difference mostly affects write performance, if it
affects anything. Since this is a media server with mostly read operations, I
will probably leave it as it is.

> The old meta on sdd will not have been a problem.
> What version of mdadm did you use to try to start the reshape?

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 10.04.2 LTS
Release:        10.04
Codename:       lucid

$ mdadm --version
mdadm - v3.1.4 - 31st August 2010

$ uname -a
Linux FileStation 2.6.34-020634-generic #020634 SMP Mon May 17 19:27:49 UTC
2010 x86_64 GNU/Linux

Cheers,
Peter





* Re: RAID5 -> RAID6 conversion, please help
  2011-05-10 23:39   ` Steven Haigh
@ 2011-05-11  0:21     ` NeilBrown
  2011-05-11  0:38       ` Dylan Distasio
  0 siblings, 1 reply; 9+ messages in thread
From: NeilBrown @ 2011-05-11  0:21 UTC (permalink / raw)
  To: Steven Haigh; +Cc: linux-raid

On Wed, 11 May 2011 09:39:27 +1000 Steven Haigh <netwiz@crc.id.au> wrote:

> On 11/05/2011 9:31 AM, NeilBrown wrote:
> > When it finishes you will have a perfectly functional RAID6 array with full
> > redundancy.  It might perform slightly differently to a standard layout -
> > I've never performed any measurements to see how differently.
> >
> > If you want to (after the recovery completes) you could convert to a regular
> > RAID6 with
> >    mdadm -G /dev/md0 --layout=normalise   --backup=/some/file/on/a/different/device
> >
> > but you probably don't have to.
> >
> 
> This makes me wonder: how can one tell whether the layout is 'normal' or has
> the Q blocks all on a single device?
> 
> I recently changed my array from RAID5->6. Mine created a backup file
> and took just under 40 hours for 4 x 1TB devices. I assume this means
> the data was reorganised to the standard RAID6 style? The conversion
> ran at about 4-6MB/sec.

Probably.

What is the 'layout' reported by "mdadm -D"?
If it ends -6, then it is a RAID5 layout with the Q block all on the last
disk.
If not, then it is already normalised.
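For example (array name assumed):
  mdadm -D /dev/md0 | grep Layout
prints "Layout : left-symmetric-6" for the RAID5-style layout, or plain
"Layout : left-symmetric" once it has been normalised.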

> 
> Is there any effect of doing a --layout=normalise if the above happened?
> 
Probably not.

NeilBrown



* Re: RAID5 -> RAID6 conversion, please help
  2011-05-11  0:21     ` NeilBrown
@ 2011-05-11  0:38       ` Dylan Distasio
  2011-05-11  0:47         ` NeilBrown
  0 siblings, 1 reply; 9+ messages in thread
From: Dylan Distasio @ 2011-05-11  0:38 UTC (permalink / raw)
  To: linux-raid

Hi Neil-

Just out of curiosity, how does mdadm decide which layout to use on a
reshape from RAID5->6?  I converted two of my RAID5s on different
boxes running the same OS a while ago, and was not aware of the
different possibilities.  When I check now, one of them was converted
with the Q block all on the last disk, and the other appears
normalized.  I'm relatively confident I ran exactly the same command
on both to reshape them within a short time of one another.

Here are the current details of the two arrays:

dylan@terrordome:~$ sudo mdadm -D /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Tue Mar  3 23:41:24 2009
     Raid Level : raid6
     Array Size : 5860559616 (5589.07 GiB 6001.21 GB)
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 0
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue May 10 20:06:42 2011
          State : active
 Active Devices : 8
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric-6
     Chunk Size : 64K

           UUID : 4891e7c1:5d7ec244:a9bd8edb:d35467d0 (local to host terrordome)
         Events : 0.743956

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
       2       8       97        2      active sync   /dev/sdg1
       3       8      113        3      active sync   /dev/sdh1
       4       8       17        4      active sync   /dev/sdb1
       5       8       65        5      active sync   /dev/sde1
       6       8      241        6      active sync   /dev/sdp1
       7      65       17        7      active sync   /dev/sdr1
dylan@terrordome:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 10.04.1 LTS
Release:        10.04
Codename:       lucid


dylan@rapture:~$ sudo mdadm -D /dev/md0

/dev/md0:
        Version : 0.90
  Creation Time : Sat Jun  7 02:54:05 2008
     Raid Level : raid6
     Array Size : 2194342080 (2092.69 GiB 2247.01 GB)
  Used Dev Size : 731447360 (697.56 GiB 749.00 GB)
   Raid Devices : 5
  Total Devices : 5
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue May 10 20:19:13 2011
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 83b4a7df:1d05f5fd:e368bf24:bd0fce41
         Events : 0.723556

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2
       2       8        2        2      active sync   /dev/sda2
       3       8       66        3      active sync   /dev/sde2
       4       8       82        4      active sync   /dev/sdf2

dylan@rapture:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 10.04.1 LTS
Release:        10.04
Codename:       lucid


* Re: RAID5 -> RAID6 conversion, please help
  2011-05-11  0:38       ` Dylan Distasio
@ 2011-05-11  0:47         ` NeilBrown
  2011-05-11  1:04           ` Dylan Distasio
  0 siblings, 1 reply; 9+ messages in thread
From: NeilBrown @ 2011-05-11  0:47 UTC (permalink / raw)
  To: Dylan Distasio; +Cc: linux-raid

On Tue, 10 May 2011 20:38:11 -0400 Dylan Distasio <interzone@gmail.com> wrote:

> Hi Neil-
> 
> Just out of curiosity, how does mdadm decide which layout to use on a
> reshape from RAID5->6?  I converted two of my RAID5s on different
> boxes running the same OS a while ago, and was not aware of the
> different possibilities.  When I check now, one of them was converted
> with the Q block all on the last disk, and the other appears
> normalized.  I'm relatively confident I ran exactly the same command
> on both to reshape them within a short time of one another.

mdadm first converts the RAID5 to RAID6 in an instant atomic operation which
results in the "-6" layout.  It then starts a restriping process which
converts the layout.
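
(Roughly speaking: while the restriping runs,
  cat /proc/mdstat
shows a "reshape" progress line, whereas a plain spare rebuild, as in Peter's
case earlier in the thread, shows a "recovery" line.)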

If you end up with a -6 layout then something went wrong starting the
restriping process.

Maybe you used different versions of mdadm?  There have probably been bugs in
some versions.

NeilBrown




* Re: RAID5 -> RAID6 conversion, please help
  2011-05-11  0:47         ` NeilBrown
@ 2011-05-11  1:04           ` Dylan Distasio
  2011-05-11  3:29             ` NeilBrown
  0 siblings, 1 reply; 9+ messages in thread
From: Dylan Distasio @ 2011-05-11  1:04 UTC (permalink / raw)
  To: linux-raid

It's possible I did use different versions but I thought I had
upgraded both of them right before the reshapes.  Sorry if this is an
elementary question, but does writing the 2nd parity block always to
the last drive instead of rotating it increase the odds of a total
loss of the array since the one specific drive always has the 2nd
parity block?

If so, do you think normalizing would be worth the risk of something
going wrong with that operation?  I'm just trying to get a feel for
how much of a difference this makes.



* Re: RAID5 -> RAID6 conversion, please help
  2011-05-11  1:04           ` Dylan Distasio
@ 2011-05-11  3:29             ` NeilBrown
  0 siblings, 0 replies; 9+ messages in thread
From: NeilBrown @ 2011-05-11  3:29 UTC (permalink / raw)
  To: Dylan Distasio; +Cc: linux-raid

On Tue, 10 May 2011 21:04:29 -0400 Dylan Distasio <interzone@gmail.com> wrote:

> It's possible I did use different versions but I thought I had
> upgraded both of them right before the reshapes.  Sorry if this is an
> elementary question, but does writing the 2nd parity block always to
> the last drive instead of rotating it increase the odds of a total
> loss of the array since the one specific drive always has the 2nd
> parity block?

No.  The only possible impact is a performance impact, and even that would be
hard to quantify.

The reason that parity is rotated is to avoid a 'hot disk'.  Every update has
to write the parity block and if they are all on one disk then every write
will generate a write to that one disk.

Because of the way md implements RAID6, every write involves either a read or
a write to every device, so there is no real saving in rotating parity.

I think (but am open to being corrected) that rotating parity is only
important for RAID6 if the code implements 'subtraction' as well as
'addition' for the Q syndrome (which md doesn't) and if you have at least 5
drives, and you probably wouldn't notice until you get to 7 or more drives.

... so it might make sense to make mdadm default to converting to the -6
layout...
You can request it with "--layout=preserve".
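
(As a sketch, for the original 5-disk array in this thread that would look
something like
  mdadm --grow /dev/md0 --level=6 --raid-devices=6 --layout=preserve
which does the level change but deliberately leaves the Q blocks on the one
added disk, so no restriping pass is needed.)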


> 
> If so, do you think normalizing would be worth the risk of something
> going wrong with that operation?  I'm just trying to get a feel for
> how much of a difference this makes.

Not worth the risk.

NeilBrown



