* Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
From: Stefan G. Weichinger @ 2012-06-26 13:57 UTC
To: linux-raid
Greets,
I am currently trying to re-add 2 disks to a RAID6 array.
There were 4 disks in a Fujitsu Q800 NAS; the RAID6 was set up via the WebGUI.
2 disks have been removed from the array, and I am not able to re-add the
old disks or add new disks via the WebGUI.
Support told me to "re-insert the disks; if that doesn't work, rebuild the
array" ... cool. What do I need RAID for, then?
Anyway.
Entered hacking mode ;-) at least by my standards.
SSHed into the box.
To keep it short:
Currently the RAID6 array /dev/md0 is:
md0 : active raid6 sdc3[2] sdd3[3]
3903891200 blocks level 6, 64k chunk, algorithm 2 [4/2] [__UU]
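(Decoded: [4/2] means 4 devices expected but only 2 active, and [__UU]
shows that slots 0 and 1 are the missing ones.)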
So I would like to re-add sda3 and sdb3 ...
I get:
# mdadm /dev/md0 -a /dev/sda3
mdadm: /dev/sda3 not large enough to join array
oops!
But the comparison shows:
[~] # fdisk -l /dev/sda
Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 1 66 530125 83 Linux
/dev/sda2 67 132 530142 83 Linux
/dev/sda3 133 243138 1951945693 83 Linux
/dev/sda4 243139 243200 498012 83 Linux
[~] # fdisk -l /dev/sdc
Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 66 530125 83 Linux
/dev/sdc2 67 132 530142 83 Linux
/dev/sdc3 133 243138 1951945693 83 Linux
/dev/sdc4 243139 243200 498012 83 Linux
-> identical partitions
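For a closer check than fdisk's cylinder-rounded view, the exact byte sizes
could be compared -- a sketch, assuming blockdev is available in the NAS
userland:
# blockdev --getsize64 /dev/sda3
# blockdev --getsize64 /dev/sdc3
If those match, the "not large enough" complaint points at mdadm's size
arithmetic rather than at the partitions themselves.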
---
Could that relate to this issue:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=500309
The NAS seems to run some Ubuntu:
# cat /proc/version
Linux version 2.6.33.2 (root@NasX86-5) (gcc version 4.1.3 20070929
(prerelease) (Ubuntu 4.1.2-16ubuntu2)) #1 SMP Mon Sep 13 04:28:32 CST 2010
and ships an older mdadm:
# mdadm --version
mdadm - v2.6.3 - 20th August 2007
If that is the issue, is there a way to use a newer mdadm binary (magically
transferred to me by mail or URL ;-)) to re-add the disks?
I would really really like to avoid rebuilding that array ...
Thanks in advance, looking forward to your hints, Stefan!
* Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
From: Stefan G. Weichinger @ 2012-06-27 10:17 UTC
To: linux-raid
On 26.06.2012 15:57, Stefan G. Weichinger wrote:
> I would really really like to avoid rebuilding that array ...
>
> Thanks in advance, looking forward to your hints, Stefan!
No one has any thoughts on this?
Correction: thoughts alone wouldn't write a posting ;-)
S
* Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
From: Stefan G. Weichinger @ 2012-06-27 11:34 UTC
To: linux-raid
On 27.06.2012 12:17, Stefan G. Weichinger wrote:
> On 26.06.2012 15:57, Stefan G. Weichinger wrote:
>
>> I would really really like to avoid rebuilding that array ...
>>
>> Thanks in advance, looking forward to your hints, Stefan!
>
> No one has any thoughts on this?
> Correction: thoughts alone wouldn't write a posting ;-)
more info in the attachment (hope it will get through), as requested off-list.
thanks ...
[-- Attachment #2: report (mdadm --examine / --detail output) --]
/dev/sda3:
Magic : a92b4efc
Version : 00.90.00
UUID : 938aa1a7:a216603a:d46d31fa:4faee42a
Creation Time : Thu Aug 11 21:19:43 2011
Raid Level : raid6
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 3903891200 (3723.04 GiB 3997.58 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Fri Nov 4 00:13:42 2011
State : active
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : 143cd05d - correct
Events : 0.248094
Chunk Size : 64K
Number Major Minor RaidDevice State
this 1 8 19 1 active sync /dev/sdb3
0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
/dev/sdb3:
Magic : a92b4efc
Version : 00.90.00
UUID : 938aa1a7:a216603a:d46d31fa:4faee42a
Creation Time : Thu Aug 11 21:19:43 2011
Raid Level : raid6
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 3903891200 (3723.04 GiB 3997.58 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Fri Nov 4 00:13:42 2011
State : active
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Checksum : 143cd04b - correct
Events : 0.248094
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 3 0 active sync /dev/sda3
0 0 8 3 0 active sync /dev/sda3
1 1 8 19 1 active sync /dev/sdb3
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
/dev/sdc3:
Magic : a92b4efc
Version : 00.90.00
UUID : 938aa1a7:a216603a:d46d31fa:4faee42a
Creation Time : Thu Aug 11 21:19:43 2011
Raid Level : raid6
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 3903891200 (3723.04 GiB 3997.58 GB)
Raid Devices : 4
Total Devices : 2
Preferred Minor : 0
Update Time : Wed Jun 27 13:32:30 2012
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 1
Spare Devices : 0
Checksum : 15f80804 - correct
Events : 0.4430040
Chunk Size : 64K
Number Major Minor RaidDevice State
this 2 8 35 2 active sync /dev/sdc3
0 0 0 0 0 removed
1 1 0 0 1 faulty removed
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
/dev/sdd3:
Magic : a92b4efc
Version : 00.90.00
UUID : 938aa1a7:a216603a:d46d31fa:4faee42a
Creation Time : Thu Aug 11 21:19:43 2011
Raid Level : raid6
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Array Size : 3903891200 (3723.04 GiB 3997.58 GB)
Raid Devices : 4
Total Devices : 2
Preferred Minor : 0
Update Time : Wed Jun 27 13:32:30 2012
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 1
Spare Devices : 0
Checksum : 15f80816 - correct
Events : 0.4430040
Chunk Size : 64K
Number Major Minor RaidDevice State
this 3 8 51 3 active sync /dev/sdd3
0 0 0 0 0 removed
1 1 0 0 1 faulty removed
2 2 8 35 2 active sync /dev/sdc3
3 3 8 51 3 active sync /dev/sdd3
/dev/md0:
Version : 00.90.03
Creation Time : Thu Aug 11 21:19:43 2011
Raid Level : raid6
Array Size : 3903891200 (3723.04 GiB 3997.58 GB)
Used Dev Size : 1951945600 (1861.52 GiB 1998.79 GB)
Raid Devices : 4
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Wed Jun 27 13:32:30 2012
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 64K
UUID : 938aa1a7:a216603a:d46d31fa:4faee42a
Events : 0.4430040
Number Major Minor RaidDevice State
0 0 0 0 removed
1 0 0 1 removed
2 8 35 2 active sync /dev/sdc3
3 8 51 3 active sync /dev/sdd3
* Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
From: NeilBrown @ 2012-06-28 6:32 UTC
To: lists; +Cc: linux-raid
On Tue, 26 Jun 2012 15:57:21 +0200 "Stefan G. Weichinger" <lists@xunil.at>
wrote:
>
> I am currently trying to re-add 2 disks to a RAID6 array.
>
> [...]
>
> # mdadm /dev/md0 -a /dev/sda3
> mdadm: /dev/sda3 not large enough to join array
>
> [fdisk output snipped: partition tables identical]
>
> Could that relate to this issue:
>
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=500309
Nope. It was an earlier bug fixed in 2.6.5 by
http://neil.brown.name/git?p=mdadm;a=commitdiff;h=7a3be72fc621b4a7589e923cf065
>
> The NAS seems to run some Ubuntu:
>
> # cat /proc/version
> Linux version 2.6.33.2 (root@NasX86-5) (gcc version 4.1.3 20070929
> (prerelease) (Ubuntu 4.1.2-16ubuntu2)) #1 SMP Mon Sep 13 04:28:32 CST 2010
Does the "X86" in there suggest and x86 processor? What does "uname -a" show?
>
> and ships an older mdadm:
>
> # mdadm --version
> mdadm - v2.6.3 - 20th August 2007
>
>
> If that is the issue, is there a way to use a newer mdadm binary (magically
> transferred to me by mail or URL ;-)) to re-add the disks?
You mean the NAS didn't come with a complete build environment and sources
for all programs? Outrageous.
If you have a machine with the same arch as the NAS, you could
git clone git://neil.brown.name/mdadm -b mdadm-2.6.5
cd mdadm
make mdadm.static CWFLAGS=-Wall
and then use the "mdadm.static" on the NAS.
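Sketched with a placeholder host name and path (and assuming you have scp
access to the box), the remaining steps would be something like:
scp mdadm.static root@nas:/tmp/
/tmp/mdadm.static /dev/md0 -a /dev/sda3
/tmp/mdadm.static /dev/md0 -a /dev/sdb3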
NeilBrown
* Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
From: Stefan G. Weichinger @ 2012-06-28 8:59 UTC
To: linux-raid
On 28.06.2012 08:32, NeilBrown wrote:
> On Tue, 26 Jun 2012 15:57:21 +0200 "Stefan G. Weichinger"
> <lists@xunil.at> wrote:
>> Could that relate to this issue:
>>
>> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=500309
>
> Nope. It was an earlier bug fixed in 2.6.5 by
>
> http://neil.brown.name/git?p=mdadm;a=commitdiff;h=7a3be72fc621b4a7589e923cf065
Ah, I see. So this *could* be solved by using a more recent mdadm,
correct?
>> The NAS seems to run some Ubuntu:
>>
>> # cat /proc/version
>> Linux version 2.6.33.2 (root@NasX86-5) (gcc version 4.1.3 20070929
>> (prerelease) (Ubuntu 4.1.2-16ubuntu2)) #1 SMP Mon Sep 13 04:28:32 CST 2010
>
> Does the "X86" in there suggest and x86 processor? What does
> "uname -a" show?
# uname -a
Linux NASCA0A00 2.6.33.2 #1 SMP Mon Sep 13 04:28:32 CST 2010 i686 unknown
# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 28
model name : Intel(R) Atom(TM) CPU D525 @ 1.80GHz
stepping : 10
cpu MHz : 1795.730
cache size : 512 KB
[...]
> You mean the NAS didn't come with a complete build environment and
> sources for all programs? Outrageous.
*sigh*
> If you have a machine with the same arch as the NAS, you could
>
> git clone git://neil.brown.name/mdadm -b mdadm-2.6.5
> cd mdadm
> make mdadm.static CWFLAGS=-Wall
>
> and then use the "mdadm.static" on the NAS.
OK, thanks. I could boot some Ubuntu on my Atom netbook ... or what?
Would it be enough to match the 64-bit environment, or would I have to
use something with the same kernel ...?
Thanks! Stefan
* Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
From: Stefan G. Weichinger @ 2012-06-28 9:14 UTC
To: linux-raid
On 28.06.2012 10:59, Stefan G. Weichinger wrote:
>> If you have a machine with the same arch as the NAS, you could
>>
>> git clone git://neil.brown.name/mdadm -b mdadm-2.6.5
>> cd mdadm
>> make mdadm.static CWFLAGS=-Wall
>>
>> and then use the "mdadm.static" on the NAS.
>
> OK, thanks. I could boot some Ubuntu on my Atom netbook ... or what?
> Would it be enough to match the 64-bit environment, or would I have to
> use something with the same kernel ...?
Update:
compiled mdadm.static on a 32-bit machine and could re-add the two
partitions to the array! Cool! Thanks, Neil!
~500 mins to wait now.
Do I have to fear read-errors with RAID5 now?
I still don't fully understand whether there are also 2 pieces of
parity information available in a degraded RAID6 array on only 2 disks.
Thanks again so far, hoping the best now ;-)
Stefan
* Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
From: Stefan G. Weichinger @ 2012-06-28 9:23 UTC
To: linux-raid
On 28.06.2012 11:14, Stefan G. Weichinger wrote:
> Do I have to fear read-errors with RAID5 now?
correction "AS with RAID5" ...
* Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
From: NeilBrown @ 2012-06-28 11:22 UTC
To: lists; +Cc: linux-raid
On Thu, 28 Jun 2012 11:14:25 +0200 "Stefan G. Weichinger" <lists@xunil.at>
wrote:
> On 28.06.2012 10:59, Stefan G. Weichinger wrote:
>
> [...]
>
> Update:
>
> compiled mdadm.static on a 32-bit machine and could re-add the two
> partitions to the array! Cool! Thanks, Neil!
>
> ~500 mins to wait now.
>
> Do I have to fear read-errors as with RAID5 now?
If you get a read error, then that block in the new devices cannot be
recovered, so the recovery will abort. But you have nothing to fear except
fear itself :-)
>
> I still don't fully understand whether there are also 2 pieces of
> parity information available in a degraded RAID6 array on only 2 disks.
In a 4-drive RAID6 like yours, each stripe contains 2 data blocks and 2
parity blocks (called 'P' and 'Q').
When two devices are failed/missing, some stripes will have 2 data blocks and
no parity, some will have both parity blocks and no data (but can of course
compute the data blocks from the parity blocks). Some will have one of each.
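A rough sketch for a 4-drive array with sda3 and sdb3 missing (illustrative
only -- the real rotation depends on the layout algorithm):
          sda3  sdb3  sdc3  sdd3
stripe 0   D0    D1    P     Q    <- both data blocks lost, rebuild from P+Q
stripe 1   Q     D0    D1    P    <- one data and one parity block lost
stripe 2   P     Q     D0    D1   <- both parity blocks lost, data intact
stripe 3   D1    P     Q     D0   <- one data and one parity block lost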
Does that answer the question?
NeilBrown
* Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
From: Stefan G. Weichinger @ 2012-06-28 15:56 UTC
To: NeilBrown; +Cc: linux-raid
On 28.06.2012 13:22, NeilBrown wrote:
>> Do I have to fear read-errors as with RAID5 now?
>
> If you get a read error, then that block in the new devices cannot
> be recovered, so the recovery will abort. But you have nothing to
> fear except fear itself :-)
Ah, yes. Not exactly RAID-specific, but I agree ;-) (we have a poem by
Mascha Kaleko in German reflecting this, btw ...)
So if there is one non-readable block on the 2 disks I started with
(the degraded array), the recovery will fail?
As sd[ab]3 were part of the array earlier, would that mean that maybe
they could supply the missing bits, just in case?
>> I still don't fully understand whether there are also 2 pieces of
>> parity information available in a degraded RAID6 array on only
>> 2 disks.
>
> In a 4-drive RAID6 like yours, each stripe contains 2 data blocks
> and 2 parity blocks (called 'P' and 'Q'). When two devices are
> failed/missing, some stripes will have 2 data blocks and no parity,
> some will have both parity blocks and no data (but can of course
> compute the data blocks from the parity blocks). Some will have one
> of each.
>
> Does that answer the question?
Yes, it does.
But ... I still don't fully understand it :-P
What I want to understand and know:
There is this issue with RAID5: resyncing the array after swapping a
failed disk for a new one stresses the old drives, and if there is one
read problem on them the whole array blows up.
As far as I have read, RAID6 protects me against this because of the 2
parity blocks (instead of one), since it is much more unlikely that I
can't read both of them, right?
Does this apply only to an N-1 degraded RAID6, or also to an N-2 degraded
array? As far as I understand, it is correct for both cases.
-
I have faced this RAID5-related problem twice already (breaking the
array ...) and therefore started to use RAID6 for the servers I deploy,
mostly using 4 disks, sometimes 6 or 8.
If this doesn't really protect things better, I should maybe rethink
that.
-
Right now my recovery still needs around 80 minutes to go:
md0 : active raid6 sdb3[4](S) sda3[5] sdc3[2] sdd3[3]
3903891200 blocks level 6, 64k chunk, algorithm 2 [4/2] [__UU]
[================>....] recovery = 83.0%
(1621636224/1951945600) finish=81.5min speed=67477K/sec
I assume it is OK at this stage that sdb3 is marked as a
(S)pare ...
Thanks, greetings, Stefan
* Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
From: Stefan G. Weichinger @ 2012-06-28 18:25 UTC
To: lists; +Cc: NeilBrown, linux-raid
On 28.06.2012 17:56, Stefan G. Weichinger wrote:
> md0 : active raid6 sdb3[4](S) sda3[5] sdc3[2] sdd3[3]
> 3903891200 blocks level 6, 64k chunk, algorithm 2 [4/2] [__UU]
> [================>....] recovery = 83.0%
> (1621636224/1951945600) finish=81.5min speed=67477K/sec
>
> I assume it is OK at this stage that sdb3 is marked as a
> (S)pare ...
It seems so, as now it has entered the next stage:
md0 : active raid6 sdb3[4] sda3[0] sdc3[2] sdd3[3]
3903891200 blocks level 6, 64k chunk, algorithm 2 [4/3] [U_UU]
[=>...................] recovery = 6.2% (122751744/1951945600)
finish=784.6min speed=38854K/sec
Somewhat slower, but no (S)pare there anymore.
What is the logic behind that?
What does it do exactly when it re-adds the first disk, and what in the
second round?
Should I have added sd[ab]3 in one command?
It also seems to me that I already have good redundancy again, correct?
Sorry for all my questions ;-)
I just like to understand things, at least at my user level.
Stefan
* Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
From: NeilBrown @ 2012-06-28 21:36 UTC
To: lists; +Cc: linux-raid
On Thu, 28 Jun 2012 20:25:44 +0200 "Stefan G. Weichinger" <lists@xunil.at>
wrote:
> On 28.06.2012 17:56, Stefan G. Weichinger wrote:
>
> > md0 : active raid6 sdb3[4](S) sda3[5] sdc3[2] sdd3[3]
> > 3903891200 blocks level 6, 64k chunk, algorithm 2 [4/2] [__UU]
> > [================>....] recovery = 83.0%
> > (1621636224/1951945600) finish=81.5min speed=67477K/sec
> >
> > I assume it is OK at this stage that sdb3 is marked as a
> > (S)pare ...
>
> It seems so, as now it has entered the next stage:
>
> md0 : active raid6 sdb3[4] sda3[0] sdc3[2] sdd3[3]
> 3903891200 blocks level 6, 64k chunk, algorithm 2 [4/3] [U_UU]
> [=>...................] recovery = 6.2% (122751744/1951945600)
> finish=784.6min speed=38854K/sec
>
> Somewhat slower, but no (S)pare there anymore.
>
> What is the logic behind that?
As you have guessed, it first recovered one device, then recovered the second
one.
But it looks like there are no read errors on the two good devices, so
fear not.
>
> What does it do exactly when it re-adds the first disk, and what in the
> second round?
>
> Should I have added sd[ab]3 in one command?
Had you done that with a very new mdadm, it would have recovered both at once.
mdadm has to say:
- disable recovery for now
- here is one new spare
- here is another spare
- ok, you can try recovery now
Otherwise, as soon as it gets one spare it will start recovery.
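With an older mdadm, roughly the same effect can be had by hand through
sysfs -- a sketch, assuming sync_action on this 2.6.33 kernel behaves as
on mainline:
# echo frozen > /sys/block/md0/md/sync_action
# mdadm /dev/md0 -a /dev/sda3
# mdadm /dev/md0 -a /dev/sdb3
# echo idle > /sys/block/md0/md/sync_action
Writing "frozen" keeps recovery from starting; writing "idle" lets md see
both new spares and recover them in a single pass.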
>
> It also seems to me that I already have good redundancy again, correct?
Correct. You have single redundancy, and about 10 hours after your email
you'll have double redundancy.
>
> Sorry for all my questions ;-)
> I just like to understand things, at least at my user level.
>
> Stefan
No problem.
NeilBrown
* Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
From: Stefan G. Weichinger @ 2012-06-29 8:18 UTC
To: NeilBrown; +Cc: linux-raid
On 28.06.2012 23:36, NeilBrown wrote:
> On Thu, 28 Jun 2012 20:25:44 +0200 "Stefan G. Weichinger"
> <lists@xunil.at> wrote:
>> What is the logic behind that?
>
> As you have guessed, it first recovered one device, then recovered
> the second one. But it looks like there are no read errors on the
> two good devices, so fear not.
Good to know.
>> What does it do exactly when it re-adds the first disk, and what in
>> the second round?
>>
>> Should I have added sd[ab]3 in one command?
>
> Had you done that with a very new mdadm, it would have recovered
> both at once. mdadm has to say:
> - disable recovery for now
> - here is one new spare
> - here is another spare
> - ok, you can try recovery now
>
> Otherwise, as soon as it gets one spare it will start recovery.
Thanks for the explanation.
>> It also seems to me that I already have good redundancy again,
>> correct?
>
> Correct. You have single redundancy, and about 10 hours after
> your email you'll have double redundancy.
It is still rebuilding; it must have been slowed down by some processes
using the filesystem overnight. But it is still working.
* Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
From: NeilBrown @ 2012-06-28 21:39 UTC
To: lists; +Cc: linux-raid
On Thu, 28 Jun 2012 17:56:39 +0200 "Stefan G. Weichinger" <lists@xunil.at>
wrote:
> On 28.06.2012 13:22, NeilBrown wrote:
>
> [...]
>
> What I want to understand and know:
>
> There is this issue with RAID5: resyncing the array after swapping a
> failed disk for a new one stresses the old drives, and if there is one
> read problem on them the whole array blows up.
>
> As far as I have read, RAID6 protects me against this because of the 2
> parity blocks (instead of one), since it is much more unlikely that I
> can't read both of them, right?
Right.
>
> Does this apply only to an N-1 degraded RAID6, or also to an N-2 degraded
> array? As far as I understand, it is correct for both cases.
Only an N-1 degraded array.
An N-2 degraded RAID6 is much like an N-1 degraded RAID5 and would suffer the
same fate on a read error during recovery.
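As a general aside (standard md practice, not something from this thread):
scrubbing the array regularly finds latent read errors and rewrites the bad
sectors from redundancy while that redundancy still exists, which is the
main defence against exactly this failure mode. Assuming the usual sysfs
interface:
# echo check > /sys/block/md0/md/sync_action
# cat /proc/mdstat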
>
> -
>
> I have faced this RAID5-related problem twice already (breaking the
> array ...) and therefore started to use RAID6 for the servers I deploy,
> mostly using 4 disks, sometimes 6 or 8.
>
> If this doesn't really protect things better, I should maybe rethink
> that.
Your current array had lost 2 drives. If it had been a RAID5 you would be
wishing you had better backups right now. So I think RAID6 really does
provide better protection :-) However, it isn't perfect - it cannot protect
against concurrent failures on 3 drives...
NeilBrown
* Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
From: NeilBrown @ 2012-06-28 9:39 UTC
To: lists; +Cc: linux-raid
On Thu, 28 Jun 2012 10:59:19 +0200 "Stefan G. Weichinger" <lists@xunil.at>
wrote:
> On 28.06.2012 08:32, NeilBrown wrote:
>
> [...]
>
> OK, thanks. I could boot some Ubuntu on my Atom netbook ... or what?
> Would it be enough to match the 64-bit environment, or would I have to
> use something with the same kernel ...?
You just need a 32-bit build environment (not 64-bit). It doesn't matter
what the kernel is.
You might manage it with
make mdadm.static CWFLAGS=-m32 LDFLAGS=-m32
depending on what you have installed.
BTW, I got the command a bit wrong. This works:
git clone git://neil.brown.name/mdadm
cd mdadm
git checkout mdadm-2.6.5
make mdadm.static CWFLAGS=-Wall
I happen to have a 32-bit machine, so I'll send you the resulting binary
in private email (don't want to clog up the list).
NeilBrown
* Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?
From: Stefan G. Weichinger @ 2012-06-28 9:42 UTC
To: linux-raid
On 28.06.2012 11:39, NeilBrown wrote:
> You just need a 32-bit build environment (not 64-bit). It doesn't
> matter what the kernel is. You might manage it with
>
> make mdadm.static CWFLAGS=-m32 LDFLAGS=-m32
>
> depending on what you have installed.
>
> BTW, I got the command a bit wrong. This works:
>
> git clone git://neil.brown.name/mdadm
> cd mdadm
> git checkout mdadm-2.6.5
> make mdadm.static CWFLAGS=-Wall
>
> I happen to have a 32-bit machine, so I'll send you the resulting
> binary in private email (don't want to clog up the list).
Thanks a lot for the offer, but as mentioned in my other mails, I
managed to build and use it already!