From: Ric Wheeler <rwheeler@redhat.com>
To: NeilBrown <neilb@suse.de>
Cc: Andrei Tanas <andrei@tanas.ca>, linux-kernel@vger.kernel.org
Subject: Re: MD/RAID: what's wrong with sector 1953519935?
Date: Tue, 25 Aug 2009 21:31:11 -0400
Message-ID: <4A94905F.7050705@redhat.com>
In-Reply-To: <b585ed9f13649050bbc984869d081315.squirrel@neil.brown.name>
On 08/25/2009 09:24 PM, NeilBrown wrote:
> On Wed, August 26, 2009 11:06 am, Ric Wheeler wrote:
>> On 08/25/2009 08:50 PM, NeilBrown wrote:
>
>>> All 1TB drives are exactly the same size.
>>> If you create a single partition (e.g. sdb1) on such a device, and that
>>> partition starts at sector 63 (which is common), and create an md
>>> array using that partition, then the superblock will always be at the
>>> address you quote.
>>> The superblock is probably updated more often than any other block in
>>> the array, so there is probably an increased likelihood of an error
>>> being reported against that sector.
>>>
>>> So it is not just a coincidence.
>>> Whether there is some deeper underlying problem though, I cannot say.
>>> Google only claims 68 matches for that number, which doesn't seem
>>> big enough to be significant.
>>>
>>> NeilBrown
>>>
>>>
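For what it's worth, that placement falls out of the metadata layout:
assuming the 0.90 format (the common default at the time), the
superblock sits at the last 64KiB-aligned offset that leaves at least
64KiB before the end of the device. A sketch of the arithmetic, using
Neil's example partition /dev/sdb1 (adjust device names as needed):

   # v0.90 superblock offset, counted in 512-byte sectors from the
   # start of the partition: (size & ~127) - 128
   SECTORS=$(blockdev --getsz /dev/sdb1)
   START=$(cat /sys/block/sdb/sdb1/start)   # partition start LBA on the disk
   echo $(( START + (SECTORS & ~127) - 128 ))   # absolute LBA of the superblock

Identically sized drives with identically placed partitions therefore
all put the superblock at the same absolute LBA.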
>>
>> Neil,
>>
>> One thing that can happen when we have a hot spot (like the
>> superblock) on high-capacity drives is that the frequent writes
>> degrade the data in adjacent tracks. Some drives have firmware that
>> watches for this and rewrites the adjacent tracks, but it is also a
>> good idea to avoid overly frequent updates.
>
> Yet another detail to worry about.... :-(
it never ends :-)
>
>>
>> Didn't you have a tunable to decrease this update frequency?
>
> /sys/block/mdX/md/safe_mode_delay
> is a time in seconds (Default 0.200) between when the last write to
> the array completes and when the superblock is marked as clean.
> Depending on the actual rate of writes to the array, the superblock
> can be updated as much as twice in this time (once to mark dirty,
> once to mark clean).
>
> Increasing the number can decrease the update frequency of the
> superblock, but the exact effect is very load-dependent.
>
> Obviously a write-intent bitmap, which is rarely more than a few
> sectors, can also see lots of updates, and it is harder to tune
> that (you have to set things up when you create the bitmap).
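mdadm does expose a knob for that, but as you say only at bitmap
creation time. A sketch, with /dev/md0 standing in for the affected
array:

   # add an internal write-intent bitmap whose updates are held back
   # for up to 10 seconds (mdadm's --delay option takes seconds)
   mdadm --grow /dev/md0 --bitmap=internal --delay=10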
>
> NeilBrown
>
We did see problems in practice with adjacent sectors on some drives,
so this one is worth tuning down.
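For example, a minimal sketch (md0 stands in for the affected array;
the value is in seconds and accepts decimal fractions):

   # current delay between the last array write and the superblock
   # being marked clean again (default 0.200 seconds)
   cat /sys/block/md0/md/safe_mode_delay

   # raise it to 5 seconds to cut down on superblock rewrites, at the
   # cost of a longer window where the array stays marked dirty
   echo 5 > /sys/block/md0/md/safe_mode_delay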
I would suggest that Andrei try writing to that offset to clear the IO
error. You can use Mark Lord's hdparm to rewrite a specific sector, or
just do the math (carefully!) and dd over it. If the write succeeds
(without bumping your remapped sectors count), this is a likely match
to this problem.
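A sketch, assuming the failing LBA really is 1953519935 and the disk
is /dev/sdb (double-check both before running any of this; the write
destroys the old contents of that one sector):

   # read the sector first to confirm it is the one reporting errors
   hdparm --read-sector 1953519935 /dev/sdb

   # rewrite just that sector in place; recent hdparm versions demand
   # the extra confirmation flag for destructive sector writes
   hdparm --yes-i-know-what-i-am-doing --write-sector 1953519935 /dev/sdb

   # or the dd route: one zeroed 512-byte sector at that offset
   dd if=/dev/zero of=/dev/sdb bs=512 seek=1953519935 count=1 oflag=direct

   # then check whether the remapped/pending sector counts moved
   smartctl -A /dev/sdb | grep -Ei 'reallocated|pending'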
ric
Thread overview: 70+ messages
2009-08-26 0:32 MD/RAID: what's wrong with sector 1953519935? Andrei Tanas
2009-08-26 0:50 ` NeilBrown
2009-08-26 1:06 ` Ric Wheeler
2009-08-26 1:24 ` NeilBrown
2009-08-26 1:31 ` Ric Wheeler [this message]
2009-08-26 2:22 ` Andrei Tanas
2009-08-26 2:41 ` Ric Wheeler
2009-08-26 3:45 ` Andrei Tanas
2009-08-26 10:34 ` Ric Wheeler
2009-08-26 14:46 ` Andrei Tanas
2009-08-26 14:49 ` Andrei Tanas
2009-08-26 15:39 ` Ric Wheeler
2009-08-26 18:12 ` Andrei Tanas
2009-08-27 0:07 ` Mark Lord
2009-08-27 1:37 ` Andrei Tanas
2009-08-27 2:33 ` Robert Hancock
2009-08-27 21:22 ` MD/RAID time out writing superblock Andrei Tanas
2009-08-27 21:57 ` Ric Wheeler
2009-08-31 8:10 ` Tejun Heo
2009-08-31 12:04 ` Ric Wheeler
2009-08-31 12:20 ` Tejun Heo
2009-09-07 11:44 ` Chris Webb
2009-09-07 11:59 ` Chris Webb
2009-09-09 12:02 ` Chris Webb
2009-09-14 7:41 ` Tejun Heo
2009-09-14 7:44 ` Tejun Heo
2009-09-14 12:48 ` Mark Lord
2009-09-14 13:05 ` Tejun Heo
2009-09-14 14:25 ` Mark Lord
2009-09-16 23:19 ` Chris Webb
2009-09-17 13:29 ` Mark Lord
2009-09-17 13:32 ` Mark Lord
2009-09-17 13:37 ` Chris Webb
2009-09-17 15:35 ` Tejun Heo
2009-09-17 16:16 ` Mark Lord
2009-09-17 16:17 ` Mark Lord
2009-09-18 17:05 ` Chris Webb
2009-09-21 10:26 ` Chris Webb
2009-09-21 19:47 ` Mark Lord
2009-09-22 6:16 ` Robert Hancock
2009-09-20 18:36 ` Robert Hancock
2009-09-14 13:11 ` Henrique de Moraes Holschuh
2009-09-14 13:24 ` Tejun Heo
2009-09-14 14:02 ` Henrique de Moraes Holschuh
2009-09-14 14:34 ` Tejun Heo
2009-09-14 13:14 ` Gabor Gombas
2009-09-07 16:55 ` Allan Wind
2009-09-07 23:26 ` Thomas Fjellstrom
2009-09-14 7:46 ` Tejun Heo
2009-09-14 21:13 ` Thomas Fjellstrom
2009-09-14 22:23 ` Tejun Heo
2009-09-16 22:28 ` Chris Webb
2009-09-16 23:47 ` Tejun Heo
2009-09-17 0:34 ` Neil Brown
2009-09-17 12:00 ` Chris Webb
2009-09-17 11:57 ` Chris Webb
2009-09-17 15:44 ` Tejun Heo
2009-09-18 17:07 ` Chris Webb
2009-09-20 18:46 ` Robert Hancock
2009-09-21 0:02 ` Kyle Moffett
2009-09-17 13:35 ` Mark Lord
2009-09-17 15:47 ` Tejun Heo
2009-08-31 12:21 ` Mark Lord
2009-08-31 23:45 ` Mark Lord
2009-09-01 13:07 ` Andrei Tanas
2009-09-01 13:15 ` Mark Lord
2009-09-01 13:30 ` Tejun Heo
2009-09-01 13:47 ` Ric Wheeler
2009-09-01 14:18 ` Andrei Tanas
2009-09-14 5:30 ` Marc Giger