From: NeilBrown <neilb@suse.de>
To: Peter Kovari <peter@kovari.priv.hu>
Cc: linux-raid@vger.kernel.org
Subject: Re: RAID5 -> RAID6 conversion, please help
Date: Wed, 11 May 2011 09:31:55 +1000 [thread overview]
Message-ID: <20110511093155.5b1a203e@notabene.brown> (raw)
In-Reply-To: <002a01cc0f68$1c851180$558f3480$@priv.hu>
On Wed, 11 May 2011 01:15:11 +0200 "Peter Kovari" <peter@kovari.priv.hu>
wrote:
> Dear all,
>
> I tried to convert my existing 5-disk RAID5 array to a 6-disk RAID6 array.
> This was my existing array:
> ----------------------------------------------------------------------------
> /dev/md0:
> Version : 0.90
> Raid Level : raid5
> Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
> Used Dev Size : 1465137152 (1397.26 GiB 1500.30 GB)
> Raid Devices : 5
> Total Devices : 5
> Persistence : Superblock is persistent
> State : clean
> Active Devices : 5
> Working Devices : 5
> Layout : left-symmetric
> Chunk Size : 512K
> Events : 0.156
>
> Number Major Minor RaidDevice State
> 0 8 17 0 active sync /dev/sdb1
> 1 8 81 1 active sync /dev/sdf1
> 2 8 33 2 active sync /dev/sdc1
> 3 8 97 3 active sync /dev/sdg1
> 4 8 65 4 active sync /dev/sde1
> ----------------------------------------------------------------------------
>
> I did the conversion according to "howtos", so:
> $ mdadm --add /dev/md0 /dev/sdd1
> then:
> $ mdadm --grow /dev/md0 --level=6 --raid-devices=6 --backup-file=/mnt/mdadm-raid5-to-raid6.backup
>
> Instead of starting the reshape process, mdadm responded with this:
> mdadm: /dev/md0: changed level to 6 (or something like that, I don't remember
> the exact words, but it was about changing the level).
> mdadm: /dev/md0: Cannot get array details from sysfs
>
> And the array became this:
> ----------------------------------------------------------------------------
> /dev/md0:
> Raid Level : raid6
> Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
> Used Dev Size : 1465137152 (1397.26 GiB 1500.30 GB)
> Raid Devices : 6
> Total Devices : 6
> Persistence : Superblock is persistent
> State : clean, degraded
> Active Devices : 5
> Working Devices : 6
> Failed Devices : 0
> Spare Devices : 1
> Events : 0.170
>
> Number Major Minor RaidDevice State
> 0 8 17 0 active sync /dev/sdb1
> 1 8 81 1 active sync /dev/sdf1
> 2 8 33 2 active sync /dev/sdc1
> 3 8 97 3 active sync /dev/sdg1
> 4 8 65 4 active sync /dev/sde1
> 5 0 0 5 removed
> 6 8 49 - spare /dev/sdd1
> ----------------------------------------------------------------------------
>
> At this point I realized that /dev/sdd had previously been a member of another
> RAID array in a different machine, and although I re-partitioned the disk, I
> didn't remove the old superblock. So maybe this was the reason for the mdadm
> error. Since the state of /dev/sdd1 was spare, I removed it:
>
> $ mdadm --remove /dev/md0 /dev/sdd1
>
> then cleared the remaining superblock:
> $ mdadm --zero-superblock /dev/sdd1
>
> then added it back to the array:
> $ mdadm --add /dev/md0 /dev/sdd1
>
> and started the grow process again:
> $ mdadm --grow /dev/md0 --level=6 --raid-devices=6 --backup-file=/mnt/mdadm-raid5-to-raid6.backup
> mdadm: /dev/md0: no change requested
>
> Mdadm reported no change; nevertheless, it started rebuilding the array. It's
> currently rebuilding:
> ----------------------------------------------------------------------------
> /dev/md0:
> Version : 0.90
> Raid Level : raid6
> Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
> Used Dev Size : 1465137152 (1397.26 GiB 1500.30 GB)
> Raid Devices : 6
> Total Devices : 6
> Persistence : Superblock is persistent
> State : clean, degraded, recovering
> Active Devices : 5
> Working Devices : 6
> Failed Devices : 0
> Spare Devices : 1
> Layout : left-symmetric-6
> Chunk Size : 512K
> Rebuild Status : 2% complete
> Events : 0.186
>
> Number Major Minor RaidDevice State
> 0 8 17 0 active sync /dev/sdb1
> 1 8 81 1 active sync /dev/sdf1
> 2 8 33 2 active sync /dev/sdc1
> 3 8 97 3 active sync /dev/sdg1
> 4 8 65 4 active sync /dev/sde1
> 6 8 49 5 spare rebuilding /dev/sdd1
> ----------------------------------------------------------------------------
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
> md0 : active raid6 sdd1[6] sde1[4] sdc1[2] sdf1[1] sdg1[3] sdb1[0]
> 5860548608 blocks level 6, 512k chunk, algorithm 18 [6/5] [UUUUU_]
> [>....................] recovery = 2.3% (34438272/1465137152) finish=1074.5min speed=22190K/sec
>
> unused devices: <none>
> ----------------------------------------------------------------------------
>
> Mdadm didn't create the backup file, and the process seems too fast to me
> for a RAID5 -> RAID6 conversion.
> Please help me to understand what's happening now.
You have a RAID6 array in a non-standard configuration where the Q block (the
second parity block) is always on the last device rather than rotated around
the various devices.
The array is simply recovering onto that 6th drive, which is why you see a
rebuild rather than a reshape - no existing data needs to be relocated.
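Recovery progress can be read straight out of /proc/mdstat. A minimal parsing sketch, using the mdstat excerpt quoted above as canned sample input (the extraction helpers here are illustrative, not mdadm features; on a live system you would point MDSTAT at the real /proc/mdstat):

```shell
# Report percent complete and the finish estimate from an mdstat recovery line.
MDSTAT=$(mktemp)
cat > "$MDSTAT" <<'EOF'
md0 : active raid6 sdd1[6] sde1[4] sdc1[2] sdf1[1] sdg1[3] sdb1[0]
      5860548608 blocks level 6, 512k chunk, algorithm 18 [6/5] [UUUUU_]
      [>....................]  recovery =  2.3% (34438272/1465137152) finish=1074.5min speed=22190K/sec
EOF

# Pull out the "recovery = X%" figure and the "finish=" estimate.
percent=$(grep -o 'recovery = *[0-9.]*%' "$MDSTAT" | grep -o '[0-9.]\+')
finish=$(grep -o 'finish=[0-9.]*min' "$MDSTAT" | cut -d= -f2)

echo "recovery: ${percent}% complete, ETA ${finish}"
rm -f "$MDSTAT"
```

With the sample above this prints "recovery: 2.3% complete, ETA 1074.5min".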
When it finishes you will have a perfectly functional RAID6 array with full
redundancy. It might perform slightly differently from a standard layout -
I've never performed any measurements to see how differently.
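One reassuring sanity check: the usable size is unchanged by the conversion, because a 5-drive RAID5 and a 6-drive RAID6 both leave four drives' worth of data capacity. A quick arithmetic check against the figures in the --detail output quoted above (the formula is the standard one for md parity arrays, applied here as an illustration):

```shell
# Usable capacity = (raid_devices - parity_drives) * used_dev_size.
# The Used Dev Size (in 1K blocks) is taken from the --detail output above.
used_dev_size=1465137152

raid5_size=$(( (5 - 1) * used_dev_size ))   # 5 drives, 1 parity drive
raid6_size=$(( (6 - 2) * used_dev_size ))   # 6 drives, 2 parity drives

echo "raid5: $raid5_size  raid6: $raid6_size"   # both are 5860548608
```

Both come out to 5860548608 blocks, matching the Array Size reported before and after the conversion.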
If you want to (after the recovery completes) you could convert to a regular
RAID6 with
mdadm -G /dev/md0 --layout=normalise --backup-file=/some/file/on/a/different/device
but you probably don't have to.
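For reference, mdadm names these parity-last layouts by taking the corresponding RAID5 layout name and appending "-6" (hence left-symmetric-6 in the output above); my reading of the naming convention is that normalising yields the layout with that suffix stripped. A trivial sketch of the correspondence (the helper function is purely illustrative, not an mdadm command):

```shell
# Map a "-6" layout name (RAID5-style parity rotation with Q fixed on the
# last device) to the standard rotated-parity RAID6 layout name.
# This helper is illustrative only; it is not part of mdadm.
normalised_layout() {
    case "$1" in
        *-6) echo "${1%-6}" ;;
        *)   echo "$1" ;;       # already a standard layout name
    esac
}

normalised_layout left-symmetric-6    # prints left-symmetric
```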
The old metadata on sdd will not have been a problem.
What version of mdadm did you use to try to start the reshape?
NeilBrown
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Thread overview: 9+ messages
2011-05-10 23:15 RAID5 -> RAID6 conversion, please help Peter Kovari
2011-05-10 23:31 ` NeilBrown [this message]
2011-05-10 23:39 ` Steven Haigh
2011-05-11 0:21 ` NeilBrown
2011-05-11 0:38 ` Dylan Distasio
2011-05-11 0:47 ` NeilBrown
2011-05-11 1:04 ` Dylan Distasio
2011-05-11 3:29 ` NeilBrown
2011-05-11 0:08 ` Peter Kovari