From: David Greaves <david@dgreaves.com>
To: Dan Williams <dan.j.williams@gmail.com>
Cc: Neil Brown <neilb@suse.de>, linux-raid@vger.kernel.org
Subject: Re: sync_action repair not reading all sectors?
Date: Wed, 18 Mar 2009 12:23:02 +0000
Message-ID: <49C0E7A6.3070502@dgreaves.com>
In-Reply-To: <e9c3a7c20903171446w122ef4f5ibb127b9cbec3c636@mail.gmail.com>
Dan Williams wrote:
> On Mon, Mar 16, 2009 at 4:27 AM, David Greaves <david@dgreaves.com> wrote:
>> I have a drive that has bad sectors. Lots of them.
>>
>> smartctl shows:
>> Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
>> # 1  Short offline     Completed: read failure        20%              530          1953520877
>>
>> A simple ddrescue to this part of the disk gets this:
>>
>> Mar 16 10:41:28 elm kernel: [ 8643.123397] sd 3:0:0:0: [sdd] 1953525168 512-byte
>> hardware sectors (1000205 MB)
>> <snip> 51/40:00:f0:5c:70/00:00:74:00:00/e0 Emask 0x9 (media error)
>> Mar 16 10:41:29 elm kernel: [ 8644.190060] ata4.00: status: { DRDY ERR }
>> Mar 16 10:41:29 elm kernel: [ 8644.190099] ata4.00: error: { UNC }
>>
>> and reports 30 or so errors.
>>
>>
>> mdstat tells me:
>> md0 : active raid5 sdd1[0] sdb1[2] sda1[1]
>> 1953519872 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
>>
>> So sdd1 is in there.
>>
>> /dev/sdd1 is the full disk
>>
>
> Are you sure? Maybe I did the following math wrong, but it seems
> there is a chance this bad region is outside the raid array.
> /proc/mdstat says the array is 1953519872 blocks large which is
> 3907039744 sectors. For a three disk raid5 that means we are using
> 1953519872 sectors per disk. The failing sector of 1953520877 is 1005
> sectors outside the array, probably 942 assuming partition 1 starts at
> sector 63??
>
> --
> Dan
Thanks for taking the time to look and for spotting this, Dan.

Well, you are right: the media error is occurring outside the partition.
But equally, yes, the partition does cover the full disk according to cfdisk and fdisk.
I *knew* that I'd allocated the full disk to the partition, and I had checked it at a
cursory level, but not at the sector level :(
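Redoing your arithmetic to convince myself (a quick Python sketch; the figures come
straight from the quoted output above, and the partition start of 63 is, as you
say, an assumption):

    # All figures are 512-byte sectors unless noted.
    array_blocks = 1953519872          # 1 KiB blocks, from /proc/mdstat
    array_sectors = array_blocks * 2   # = 3907039744
    data_disks = 3 - 1                 # a 3-disk RAID5 holds 2 disks' worth of data
    per_disk = array_sectors // data_disks  # = 1953519872 sectors used per member
    bad = 1953520877                   # failing LBA reported by smartctl
    part_start = 63                    # assumed start of partition 1
    print(bad - per_disk)              # 1005
    print(bad - (part_start + per_disk))  # 942, i.e. past the end of md's data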
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      121601   976760001   83  Linux

and from cfdisk's sector-level table:

 # Type     First Sector  Last Sector  Offset      Length  Filesystem Type (ID)  Flag
 1 Primary             0   1953520064      63  1953520065  Linux (83)            None
but kernel.log says:
sd 3:0:0:0: [sdd] 1953525168 512-byte hardware sectors (1000205 MB)
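Putting those numbers side by side (another rough Python check; the partition
bounds are read off the cfdisk table above):

    disk_sectors = 1953525168               # whole disk, from the kernel log
    part_first, part_last = 63, 1953520064  # usable span of /dev/sdd1
    bad = 1953520877                        # failing LBA from smartctl
    print(part_first <= bad <= part_last)   # False: outside the partition
    print(bad - part_last)                  # 813 sectors past the end of sdd1
    print((disk_sectors - 1) - bad)         # 4290 sectors short of the end of the disk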
So I humbly apologise for doubting md :)
Pragmatically, it looks like a genuine disk error, but now that the advance
replacement has arrived I should be OK to recover by stopping the array and doing
a fast ddrescue mirror of this device, rather than a riskier replace/resync.
Shame we can't do that without stopping the array yet ;)
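Roughly what I have in mind (a sketch only; /dev/sde as the replacement drive is
my assumption, so check device names before copying anything):

    mdadm --stop /dev/md0
    ddrescue -f /dev/sdd /dev/sde sdd-rescue.map

The -f flag is needed because the output is a block device, and the mapfile lets
ddrescue resume and retry the bad areas later.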
David
--
"Don't worry, you'll be fine; I saw it work in a cartoon once..."