From: "Kristleifur Daðason" <kristleifur@gmail.com>
To: Neil Brown <neilb@suse.de>
Cc: linux-raid <linux-raid@vger.kernel.org>, Neil Brown <nfbrown@novell.com>
Subject: Re: mdadm 3.1.1: level change won't start
Date: Mon, 21 Dec 2009 23:18:55 +0000
Message-ID: <73e903670912211518u69c26584y6c250e67f5ba06ad@mail.gmail.com>
In-Reply-To: <20091222095756.371c0ac4@notabene.brown>
On Mon, Dec 21, 2009 at 10:57 PM, Neil Brown <neilb@suse.de> wrote:
> On Mon, 21 Dec 2009 03:41:33 +0000
> Kristleifur Daðason <kristleifur@gmail.com> wrote:
>
>> Hi all,
>>
>> I wish to convert my 3-drive RAID-5 array to a 6-drive RAID-6. I'm on
>> Linux 2.6.32.2 and have mdadm version 3.1.1 with the 32-bit-array-size
>> patch from here: http://osdir.com/ml/linux-raid/2009-11/msg00534.html
>>
>> I have three live drives in the array and have added three spares. When I
>> run the grow command, mdadm performs its initial checks and then aborts
>> with "Cannot set device shape", without making any change to the array.
>>
>> Following are some md stats and growth command output:
>>
>> ___
>>
>> $ cat /proc/mdstat
>> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
>> [raid4] [raid10]
>> md_d1 : active raid5 sdd1[6](S) sdc1[5](S) sdb1[4](S) sdf1[1] sde1[0] sdl1[3]
>> 2930078720 blocks super 1.1 level 5, 256k chunk, algorithm 2 [3/3] [UUU]
>> bitmap: 1/350 pages [4KB], 2048KB chunk
>>
>> $ mdadm --detail --scan
>> ARRAY /dev/md/d1 metadata=1.01 spares=3 name=mamma:d1
>> UUID=da547022:042a6f68:d5fe251e:5e89f263
>>
>> $ mdadm --grow /dev/md_d1 --level=6 --raid-devices=6
>> --backup-file=/root/backup.md1_to_r6
>> mdadm: metadata format 1.10 unknown, ignored.
>> mdadm: metadata format 1.10 unknown, ignored.
>> mdadm level of /dev/md_d1 changed to raid6
>> mdadm: Need to backup 1024K of critical section..
>> mdadm: Cannot set device shape for /dev/md_d1
>> mdadm: aborting level change
>> ___
>>
>>
>> Three questions -
>>
>> 1. What does the "metadata format 1.10 unknown" message mean? Note the
>> "super 1.1" vs. "metadata 1.01" vs. "metadata format 1.10" discrepancy
>> between the mdstat, --detail and --grow output.
>
> The "metadata format .. unknown" message means that your /etc/mdadm.conf
> contains something like
>   metadata=1.10
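>
> For example, if the ARRAY line reads
>
>   ARRAY /dev/md/d1 metadata=1.10 UUID=da547022:042a6f68:d5fe251e:5e89f263
>
> then either correct it to the real superblock version (1.1, as
> /proc/mdstat reports) or drop the metadata= tag entirely so mdadm reads
> the version from the superblock.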
>
>>
>> 2. Am I doing something wrong? :)
>
> Not obviously.
>
>>
>> 3. How can I get more info about what is causing the failure to
>> initialize the growth?
>
> Look in the kernel logs. e.g.
> dmesg | tail -20
>
> immediately after the "mdadm --grow" attempt.
>
> I just tried the same thing and it worked for me.
>
> NeilBrown
>
Thank you very much for the reply. You were right: mdadm.conf indeed
contained metadata=1.10. I fixed it, updated the initramfs, and
rebooted.
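Concretely, the fix amounted to something like this (a sketch: it
assumes a Debian-style update-initramfs, other distros use dracut or
mkinitrd, and the config may live at /etc/mdadm/mdadm.conf instead):

$ sudo sed -i 's/metadata=1.10/metadata=1.1/' /etc/mdadm.conf   # or drop the tag
$ sudo update-initramfs -u
$ sudo reboot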
---
mdadm --detail --scan now gives:
$ sudo mdadm --detail --scan
ARRAY /dev/md/d1 metadata=1.01 spares=3 name=mamma:d1
UUID=da547022:042a6f68:d5fe251e:5e89f263
---
I tried the grow command again, and it aborts again. Could it be that
the device sizes are wrong? I thought I had meticulously created
identical partitions on each of the drives (a quick size check is
sketched below the output). The command output is:
$ sudo mdadm --grow /dev/md_d1 --level=6 --raid-devices=6
--backup-file=/root/backup.md1_to_r6
mdadm level of /dev/md_d1 changed to raid6
mdadm: Need to backup 1024K of critical section..
mdadm: Cannot set device shape for /dev/md_d1
mdadm: aborting level change
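In case the sizes really are the culprit, a comparison along these
lines (just a sketch; the device names come from the mdstat output
above) should make any mismatch obvious:

$ for d in /dev/sd[bcdefl]1; do printf '%s: ' "$d"; sudo blockdev --getsize64 "$d"; done
$ sudo mdadm -E /dev/sd[bcdefl]1 | grep -E '/dev/|Dev Size'

blockdev reports each partition's raw size in bytes, and mdadm -E shows
the space each superblock considers usable.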
---
dmesg says:
[ 96.482937] raid5: device sdl1 operational as raid disk 2
[ 96.482940] raid5: device sdf1 operational as raid disk 1
[ 96.482942] raid5: device sde1 operational as raid disk 0
[ 96.483299] raid5: allocated 4282kB for md_d1
[ 96.511577] 2: w=1 pa=0 pr=4 m=2 a=18 r=4 op1=0 op2=0
[ 96.511581] 1: w=2 pa=0 pr=4 m=2 a=18 r=4 op1=0 op2=0
[ 96.511583] 0: w=3 pa=0 pr=4 m=2 a=18 r=4 op1=0 op2=0
[ 96.511585] raid5: raid level 6 set md_d1 active with 3 out of 4
devices, algorithm 18
[ 96.511588] RAID5 conf printout:
[ 96.511589] --- rd:4 wd:3
[ 96.511591] disk 0, o:1, dev:sde1
[ 96.511593] disk 1, o:1, dev:sdf1
[ 96.511595] disk 2, o:1, dev:sdl1
[ 96.671315] raid5: device sdl1 operational as raid disk 2
[ 96.671318] raid5: device sdf1 operational as raid disk 1
[ 96.671320] raid5: device sde1 operational as raid disk 0
[ 96.671642] raid5: allocated 3230kB for md_d1
[ 96.720331] 2: w=1 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
[ 96.720334] 1: w=2 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
[ 96.720336] 0: w=3 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
[ 96.720338] raid5: raid level 5 set md_d1 active with 3 out of 3
devices, algorithm 2
[ 96.720340] RAID5 conf printout:
[ 96.720341] --- rd:3 wd:3
[ 96.720343] disk 0, o:1, dev:sde1
[ 96.720345] disk 1, o:1, dev:sdf1
[ 96.720346] disk 2, o:1, dev:sdl1
[ 100.202834] raid5: device sdl1 operational as raid disk 2
[ 100.202837] raid5: device sdf1 operational as raid disk 1
[ 100.202839] raid5: device sde1 operational as raid disk 0
[ 100.203194] raid5: allocated 4282kB for md_d1
[ 100.241576] 2: w=1 pa=0 pr=4 m=2 a=18 r=4 op1=0 op2=0
[ 100.241579] 1: w=2 pa=0 pr=4 m=2 a=18 r=4 op1=0 op2=0
[ 100.241582] 0: w=3 pa=0 pr=4 m=2 a=18 r=4 op1=0 op2=0
[ 100.241584] raid5: raid level 6 set md_d1 active with 3 out of 4
devices, algorithm 18
[ 100.241586] RAID5 conf printout:
[ 100.241588] --- rd:4 wd:3
[ 100.241590] disk 0, o:1, dev:sde1
[ 100.241592] disk 1, o:1, dev:sdf1
[ 100.241593] disk 2, o:1, dev:sdl1
[ 100.401030] raid5: device sdl1 operational as raid disk 2
[ 100.401033] raid5: device sdf1 operational as raid disk 1
[ 100.401035] raid5: device sde1 operational as raid disk 0
[ 100.401348] raid5: allocated 3230kB for md_d1
[ 100.460458] 2: w=1 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
[ 100.460461] 1: w=2 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
[ 100.460463] 0: w=3 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
[ 100.460466] raid5: raid level 5 set md_d1 active with 3 out of 3
devices, algorithm 2
[ 100.460467] RAID5 conf printout:
[ 100.460468] --- rd:3 wd:3
[ 100.460470] disk 0, o:1, dev:sde1
[ 100.460472] disk 1, o:1, dev:sdf1
[ 100.460474] disk 2, o:1, dev:sdl1