* RAID-6 recovery questions
@ 2012-01-27 8:10 Mukund Sivaraman
2012-01-27 12:30 ` Mathias Burén
2012-01-27 17:56 ` Piergiorgio Sartor
0 siblings, 2 replies; 5+ messages in thread
From: Mukund Sivaraman @ 2012-01-27 8:10 UTC (permalink / raw)
To: linux-raid
Hi all
I am interested in building a 3TB * 8 disk RAID-6 array for
personal use. I am looking for info related to md recovery in the case
of failure.
I want to rely on the md RAID-6 array to some extent. It is a large
capacity array, and it is not financially feasible for me to have an
external mirror of it.
It seems that distributions like Fedora have a raid-check script for
periodic patrol read check, which is bound to reduce the risk of
surprise read errors during recovery.
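Such a patrol read boils down to md's sysfs knobs; a minimal Python sketch follows (the array name "md0" and default path are assumptions, and the actual raid-check script is shell, so treat this purely as an illustration of the mechanism):

```python
# Minimal sketch of what a patrol read ("check") amounts to, via md's
# sysfs interface. "md0" and the path below are assumptions; adjust for
# your array. A "check" makes md read every stripe, so latent bad
# sectors surface (and get fixed from redundancy) before any rebuild.

def start_check(md_sysfs="/sys/block/md0/md"):
    # Triggers a background scrub of the whole array.
    with open(md_sysfs + "/sync_action", "w") as f:
        f.write("check")

def mismatch_count(md_sysfs="/sys/block/md0/md"):
    # Number of parity mismatches found by the last check/repair pass.
    with open(md_sysfs + "/mismatch_cnt") as f:
        return int(f.read())
```

The distribution script essentially performs the first write for each configured array and reports when the mismatch count is non-zero.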
In the unlikely case of a 2-disk failure, a RAID-6 array loses all
redundancy, but the array is still available.
When a disk reports errors reading blocks, it is likely that the rest of
the disk is still readable, save for the bad blocks. On large-capacity
disks available today, bad blocks are very common (as SMART output on
year-old disks will show). (Rewriting these bad blocks should make the
disk remap them and make them usable again.)
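The remap-by-rewrite idea can be sketched as follows. rewrite_sector() is a hypothetical helper (the name and the choice of zeros are mine), demonstrated against an ordinary image file, because pointing it at a real /dev/sdX destroys whatever was in that sector:

```python
import os

SECTOR_SIZE = 512  # assumption: 512-byte logical sectors

def rewrite_sector(dev_path, lba, sector_size=SECTOR_SIZE):
    # Writing anything over a pending (unreadable) sector prompts the
    # drive firmware to reallocate it from its spare pool; the old
    # contents are lost, so only do this to sectors that are already
    # unreadable. Here we write zeros.
    with open(dev_path, "rb+") as dev:
        dev.seek(lba * sector_size)
        dev.write(b"\x00" * sector_size)
        dev.flush()
        os.fsync(dev.fileno())
```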
My questions:
1. During recovery after 1-disk failure, what happens if there are
read errors on more than one disk? From what I've understood it seems
that if there is a read error during recovery, the entire disk is marked
as failed. It's very probable that these bad blocks are in different
places on different disks. Is it mathematically possible (RAID-6) to
recover such an array completely (rewriting the bad blocks with data
from other disks)? What options are available to recover from such a
situation?
Figure:
+---+---+---+---+---+---+---+---+
| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | RAID-6 array
+---+---+---+---+---+---+---+---+
^ ^ ^ \---------v-------/
| | | ok
dead | |
| +- partial read errors
+----- partial read errors
{read_error_blocks(3)} intersect {read_error_blocks(2)} = ∅ (empty set).
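On the mathematical side of question 1: as long as the bad blocks on disks 2 and 3 never line up in the same stripe, every stripe has at most two erasures (the dead disk plus at most one bad block), which RAID-6 can reconstruct. A toy sketch of the double-erasure algebra over GF(2^8) follows (one byte per disk, six data disks; this is the textbook P/Q math, not md's actual chunked layout):

```python
# Toy model of RAID-6 double-erasure recovery over GF(2^8), using the
# standard P/Q parity algebra (generator g = 2, polynomial 0x11d).
# One byte per disk; md's real layout differs, but the math is the
# same at each byte position.

def gmul(a, b):
    # Multiply two field elements in GF(2^8) mod x^8+x^4+x^3+x^2+1.
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        hi = a & 0x80
        a = (a << 1) & 0xff
        if hi:
            a ^= 0x1d
        b >>= 1
    return p

def gpow(a, n):
    r = 1
    for _ in range(n):
        r = gmul(r, a)
    return r

def ginv(a):
    # Brute-force multiplicative inverse; fine for a demo.
    return next(c for c in range(1, 256) if gmul(a, c) == 1)

def pq(data):
    # P is plain XOR parity; Q is the Reed-Solomon sum of g^i * D_i.
    P = Q = 0
    for i, d in enumerate(data):
        P ^= d
        Q ^= gmul(gpow(2, i), d)
    return P, Q

def recover_two_data(data, x, y, P, Q):
    # Rebuild data disks x and y (x < y); data[x] and data[y] are
    # treated as lost, only the survivors and P, Q are read.
    Pxy = Qxy = 0
    for i, d in enumerate(data):
        if i not in (x, y):
            Pxy ^= d
            Qxy ^= gmul(gpow(2, i), d)
    denom = ginv(gpow(2, y - x) ^ 1)
    A = gmul(gpow(2, y - x), denom)
    B = gmul(ginv(gpow(2, x)), denom)
    Dx = gmul(A, P ^ Pxy) ^ gmul(B, Q ^ Qxy)
    Dy = (P ^ Pxy) ^ Dx
    return Dx, Dy

# Six data disks; disk 0 is the dead drive, disk 2 has a bad block in
# this stripe (their error sets don't intersect, as in the figure).
stripe = [0x11, 0x22, 0x33, 0x44, 0x55, 0x66]
P, Q = pq(stripe)
assert recover_two_data(stripe, 0, 2, P, Q) == (0x11, 0x33)
```

So the math allows full recovery in this scenario; whether md will do it automatically instead of kicking the disk is the operational question above.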
2. During recovery after 2-disk failure, what happens if there are
read errors? Is it possible to overwrite the bad blocks with zeros (so
they are remapped and don't error anymore) and force them back into the
same array configuration so that most of the filesystem can be recovered
(except for the data in the bad blocks) ?
3. What is the raid6check(8) tool, which checks non-degraded arrays,
useful for? Doesn't "echo check > /sys/block/$dev/md/sync_action" do
the same?
Kind regards,
Mukund
* Re: RAID-6 recovery questions
2012-01-27 8:10 RAID-6 recovery questions Mukund Sivaraman
@ 2012-01-27 12:30 ` Mathias Burén
2012-01-29 13:06 ` Mukund Sivaraman
2012-01-27 17:56 ` Piergiorgio Sartor
1 sibling, 1 reply; 5+ messages in thread
From: Mathias Burén @ 2012-01-27 12:30 UTC (permalink / raw)
To: Mukund Sivaraman; +Cc: linux-raid
On 27 January 2012 08:10, Mukund Sivaraman <muks@banu.com> wrote:
> Hi all
>
> I am interested in building a 3TB * 8 disk RAID-6 array for
> personal use. I am looking for info related to md recovery in the case
> of failure.
>
> I want to rely on the md RAID-6 array to some extent. It is a large
> capacity array, and it is not financially feasible for me to have an
> external mirror of it.
>
> It seems that distributions like Fedora have a raid-check script for
> periodic patrol read check, which is bound to reduce the risk of
> surprise read errors during recovery.
>
> In the unlikely case of a 2-disk failure, a RAID-6 array loses all
> redundancy, but the array is still available.
>
> When a disk reports errors reading blocks, it is likely that the rest of
> the disk is still readable, save for the bad blocks. On large-capacity
> disks available today, bad blocks are very common (as SMART output on
> year-old disks will show). (Rewriting these bad blocks should make the
> disk remap them and make them usable again.)
>
> My questions:
>
> 1. During recovery after 1-disk failure, what happens if there are
> read errors on more than one disk? From what I've understood it seems
> that if there is a read error during recovery, the entire disk is marked
> as failed. It's very probable that these bad blocks are in different
> places on different disks. Is it mathematically possible (RAID-6) to
> recover such an array completely (rewriting the bad blocks with data
> from other disks)? What options are available to recover from such a
> situation?
>
> Figure:
>
> +---+---+---+---+---+---+---+---+
> | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | RAID-6 array
> +---+---+---+---+---+---+---+---+
> ^ ^ ^ \---------v-------/
> | | | ok
> dead | |
> | +- partial read errors
> +----- partial read errors
>
> {read_error_blocks(3)} intersect {read_error_blocks(2)} = ∅ (empty set).
>
>
> 2. During recovery after 2-disk failure, what happens if there are
> read errors? Is it possible to overwrite the bad blocks with zeros (so
> they are remapped and don't error anymore) and force them back into the
> same array configuration so that most of the filesystem can be recovered
> (except for the data in the bad blocks) ?
>
> 3. What is the raid6check(8) tool, which checks non-degraded arrays,
> useful for? Doesn't "echo check > /sys/block/$dev/md/sync_action" do
> the same?
>
>
> Kind regards,
>
> Mukund
Hi,
(this caters to questions 1 & 2)
I run a 7x 2TB array myself, which has had 2 drive failures in the last
~2 years (the age of the array). The first time went smoothly: MD kicked
out the bad drive almost immediately, and when I got the new drive it
rebuilt happily and everything was fine. The other time (see the last
few days on this mailing list) the drive wasn't kicked out straight
away - it kept retrying to read/write the bad sectors, stalling the
entire system. I was unable to sync or do anything, so I had to pull
the power, pull the bad disk, then power the system on again and
perform a RAID6 check (which turned out fine).
Lesson learned: you'll want a drive to get kicked out ASAP if it's
reporting errors. Once it's kicked out, you can test it yourself while
your RAID6 still chugs along, and if you can fix the errors, stick it
back into the array. If not, RMA the drive.
3: I dunno.
Kind regards,
Mathias
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: RAID-6 recovery questions
2012-01-27 8:10 RAID-6 recovery questions Mukund Sivaraman
2012-01-27 12:30 ` Mathias Burén
@ 2012-01-27 17:56 ` Piergiorgio Sartor
2012-01-29 13:07 ` Mukund Sivaraman
1 sibling, 1 reply; 5+ messages in thread
From: Piergiorgio Sartor @ 2012-01-27 17:56 UTC (permalink / raw)
To: Mukund Sivaraman; +Cc: linux-raid
On Fri, Jan 27, 2012 at 01:40:25PM +0530, Mukund Sivaraman wrote:
> Hi all
>
> I am interested in building a 3TB * 8 disk RAID-6 array for
> personal use. I am looking for info related to md recovery in the case
> of failure.
>
> I want to rely on the md RAID-6 array to some extent. It is a large
> capacity array, and it is not financially feasible for me to have an
> external mirror of it.
>
> It seems that distributions like Fedora have a raid-check script for
> periodic patrol read check, which is bound to reduce the risk of
> surprise read errors during recovery.
>
> In the unlikely case of a 2-disk failure, a RAID-6 array loses all
> redundancy, but the array is still available.
>
> When a disk reports errors reading blocks, it is likely that the rest of
> the disk is still readable, save for the bad blocks. On large-capacity
> disks available today, bad blocks are very common (as SMART output on
> year-old disks will show). (Rewriting these bad blocks should make the
> disk remap them and make them usable again.)
>
> My questions:
>
> 1. During recovery after 1-disk failure, what happens if there are
> read errors on more than one disk? From what I've understood it seems
> that if there is a read error during recovery, the entire disk is marked
> as failed. It's very probable that these bad blocks are in different
> places on different disks. Is it mathematically possible (RAID-6) to
> recover such an array completely (rewriting the bad blocks with data
> from other disks)? What options are available to recover from such a
> situation?
>
> Figure:
>
> +---+---+---+---+---+---+---+---+
> | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | RAID-6 array
> +---+---+---+---+---+---+---+---+
> ^ ^ ^ \---------v-------/
> | | | ok
> dead | |
> | +- partial read errors
> +----- partial read errors
>
> {read_error_blocks(3)} intersect {read_error_blocks(2)} = ∅ (empty set).
>
>
> 2. During recovery after 2-disk failure, what happens if there are
> read errors? Is it possible to overwrite the bad blocks with zeros (so
> they are remapped and don't error anymore) and force them back into the
> same array configuration so that most of the filesystem can be recovered
> (except for the data in the bad blocks) ?
>
> 3. What is the raid6check(8) tool, which checks non-degraded arrays,
> useful for? Doesn't "echo check > /sys/block/$dev/md/sync_action" do
> the same?
The check using sysfs will tell you *only* how many
blocks have a mismatch, that is, how many times the parity
is not correct.
"raid6check" will tell you which stripes have mismatches
and, if possible, which HDD is responsible for them.
It can happen that the parity is fine, but one HDD has
problems. This is done by exploiting the RAID6 double
parity and, of course, can work only on non-degraded arrays.
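The disk-identification trick described here can be sketched in a toy model (one byte per disk, known-good P and Q; raid6check works on md's real on-disk layout, so this only shows the algebra): when exactly one data chunk is corrupt, the two parity syndromes S0 and S1 satisfy S1 = g^z * S0, and z names the guilty disk.

```python
# Toy sketch of raid6check's idea: a single corrupt data chunk z shifts
# the P and Q syndromes so that S1 = g^z * S0 in GF(2^8).

def gmul(a, b):
    # GF(2^8) multiply, polynomial 0x11d, generator g = 2.
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        hi = a & 0x80
        a = (a << 1) & 0xff
        if hi:
            a ^= 0x1d
        b >>= 1
    return p

def gpow(a, n):
    r = 1
    for _ in range(n):
        r = gmul(r, a)
    return r

def locate_single_error(data, P, Q):
    # Returns None if the stripe is consistent, the index of the single
    # bad data chunk if one disk is wrong, or -1 if the damage cannot
    # be pinned on one data disk.
    S0, S1 = P, Q
    for i, d in enumerate(data):
        S0 ^= d
        S1 ^= gmul(gpow(2, i), d)
    if S0 == 0 and S1 == 0:
        return None
    for z in range(len(data)):
        if gmul(S0, gpow(2, z)) == S1:
            return z
    return -1

# Known-good stripe and parities, then corrupt chunk 2 and locate it.
good = [0x10, 0x20, 0x30, 0x40]
P = Q = 0
for i, d in enumerate(good):
    P ^= d
    Q ^= gmul(gpow(2, i), d)
bad = list(good)
bad[2] ^= 0x5a
assert locate_single_error(good, P, Q) is None
assert locate_single_error(bad, P, Q) == 2
```

This is also why the trick needs a non-degraded array: both syndromes must be computable from real data before the location falls out.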
Hope this helps,
bye,
pg
>
>
> Kind regards,
>
> Mukund
--
piergiorgio
* Re: RAID-6 recovery questions
2012-01-27 12:30 ` Mathias Burén
@ 2012-01-29 13:06 ` Mukund Sivaraman
0 siblings, 0 replies; 5+ messages in thread
From: Mukund Sivaraman @ 2012-01-29 13:06 UTC (permalink / raw)
To: Mathias Burén; +Cc: linux-raid
Hi Mathias
On Fri, Jan 27, 2012 at 12:30:36PM +0000, Mathias Burén wrote:
[snip]
> Hi,
>
> (this caters to questions 1 & 2)
> I run a 7x 2TB array myself, which has had 2 drive failures in the last
> ~2 years (the age of the array). The first time went smoothly: MD kicked
> out the bad drive almost immediately, and when I got the new drive it
> rebuilt happily and everything was fine. The other time (see the last
> few days on this mailing list) the drive wasn't kicked out straight
> away - it kept retrying to read/write the bad sectors, stalling the
> entire system. I was unable to sync or do anything, so I had to pull
> the power, pull the bad disk, then power the system on again and
> perform a RAID6 check (which turned out fine).
>
> Lesson learned: you'll want a drive to get kicked out ASAP if it's
> reporting errors. Once it's kicked out, you can test it yourself while
> your RAID6 still chugs along, and if you can fix the errors, stick it
> back into the array. If not, RMA the drive.
Thank you for replying (you're the only one who has replied to these
two questions). But it doesn't actually answer the question I have.
The problem is that one disk has failed fully, and during the rebuild,
more than one of the remaining member disks reports read errors that
were not previously known - latent errors (see the figure in the email
which started this topic). This is not an unlikely situation with
RAID-6 and modern "end-user" SATA disks.
How do we then go about fully or partially recovering the data in the
array?
Kind regards,
Mukund
* Re: RAID-6 recovery questions
2012-01-27 17:56 ` Piergiorgio Sartor
@ 2012-01-29 13:07 ` Mukund Sivaraman
0 siblings, 0 replies; 5+ messages in thread
From: Mukund Sivaraman @ 2012-01-29 13:07 UTC (permalink / raw)
To: Piergiorgio Sartor; +Cc: linux-raid
Hi Piergiorgio
On Fri, Jan 27, 2012 at 06:56:18PM +0100, Piergiorgio Sartor wrote:
> The check using sysfs will tell you *only* how many
> blocks have a mismatch, that is, how many times the parity
> is not correct.
>
> "raid6check" will tell you which stripes have mismatches
> and, if possible, which HDD is responsible for them.
> It can happen that the parity is fine, but one HDD has
> problems. This is done by exploiting the RAID6 double
> parity and, of course, can work only on non-degraded arrays.
Thank you for answering. :)
Kind regards,
Mukund