From: Bill Davidsen <davidsen@tmr.com>
To: Neil Brown <neilb@suse.de>
Cc: Dan Williams <dan.j.williams@intel.com>,
dean gaudet <dean@arctic.org>,
linux-raid@vger.kernel.org
Subject: Re: 2.6.24-rc6 reproducible raid5 hang
Date: Thu, 10 Jan 2008 15:28:30 -0500
Message-ID: <47867FEE.8070704@tmr.com>
In-Reply-To: <18309.38832.518580.235804@notabene.brown>

Neil Brown wrote:
> On Wednesday January 9, dan.j.williams@intel.com wrote:
>
>> On Jan 9, 2008 5:09 PM, Neil Brown <neilb@suse.de> wrote:
>>
>>> On Wednesday January 9, dan.j.williams@intel.com wrote:
>>>
>>> Can you test it please?
>>>
>> This passes my failure case.
>>
>
> Thanks!
>
>
>>> Does it seem reasonable?
>>>
>> What do you think about limiting the number of stripes the submitting
>> thread handles to be equal to what it submitted? If I'm a thread that
>> only submits 1 stripe's worth of work, should I get stuck handling the
>> rest of the cache?
>>
>
> Dunno....
> Someone has to do the work, and leaving it all to raid5d means that it
> all gets done on one CPU.
> I expect that most of the time the queue of ready stripes is empty, so
> make_request will mostly only handle its own stripes anyway.
> The times that it handles other threads' stripes will probably balance
> out with the times that other threads handle this thread's stripes.
>
> So I'm inclined to leave it as "do as much work as is available to be
> done", as that is simplest. But I can probably be talked out of it
> with a convincing argument....
>
How about "it will perform better (defined as faster) under conditions
of unusual i/o activity"? Is that a convincing argument for using your
solution as offered? How about "complexity and maintainability are a
zero-sum problem"?
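
For anyone following along without the driver in front of them, here is a
minimal, self-contained sketch of the trade-off being discussed. This is
not the actual raid5.c code; the names (ready_count, handle_one_stripe,
the two submit_stripes variants) are made up purely for illustration. It
contrasts the submitter draining every ready stripe ("do as much work as
is available") against handling at most as many stripes as it just queued.

/*
 * Illustrative sketch only -- not the md/raid5 implementation.
 * All identifiers here are hypothetical.
 */
#include <stdio.h>

static int ready_count;            /* stripes queued and ready for handling */

static void handle_one_stripe(void)
{
	ready_count--;             /* stand-in for real stripe processing */
}

/* Policy A: drain everything that is ready (simplest, as proposed). */
static void submit_stripes_drain_all(int submitted)
{
	ready_count += submitted;
	while (ready_count > 0)
		handle_one_stripe();
}

/* Policy B: handle no more than we submitted; leave the rest to raid5d. */
static void submit_stripes_bounded(int submitted)
{
	int budget = submitted;

	ready_count += submitted;
	while (ready_count > 0 && budget-- > 0)
		handle_one_stripe();
}

int main(void)
{
	ready_count = 10;              /* backlog left by other submitters */
	submit_stripes_bounded(1);     /* handles 1 stripe, 10 left for raid5d */
	printf("bounded:   %d stripes left for raid5d\n", ready_count);

	ready_count = 10;
	submit_stripes_drain_all(1);   /* handles all 11 on the submitter's CPU */
	printf("drain-all: %d stripes left for raid5d\n", ready_count);
	return 0;
}

Either policy keeps work from piling up on raid5d's single CPU while a
submitter is active; the bounded variant just guarantees that a thread
never does more handling than it asked for.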
--
Bill Davidsen <davidsen@tmr.com>
"Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismarck
Thread overview: 30+ messages
2007-12-27 17:06 2.6.24-rc6 reproducible raid5 hang dean gaudet
2007-12-27 17:39 ` dean gaudet
2007-12-29 16:48 ` dean gaudet
2007-12-29 20:47 ` Dan Williams
2007-12-29 20:58 ` dean gaudet
2007-12-29 21:50 ` Justin Piszcz
2007-12-29 22:11 ` dean gaudet
2007-12-29 22:21 ` dean gaudet
2007-12-29 22:06 ` Dan Williams
2007-12-30 17:58 ` dean gaudet
2008-01-09 18:28 ` Dan Williams
2008-01-10 0:09 ` Neil Brown
2008-01-10 3:07 ` Dan Williams
2008-01-10 3:57 ` Neil Brown
2008-01-10 4:56 ` Dan Williams
2008-01-10 20:28 ` Bill Davidsen [this message]
2008-01-10 7:13 ` dean gaudet
2008-01-10 18:49 ` Dan Williams
2008-01-11 1:46 ` Neil Brown
2008-01-11 2:14 ` dean gaudet
2008-01-10 17:59 ` dean gaudet
2007-12-27 19:52 ` Justin Piszcz
2007-12-28 0:08 ` dean gaudet
-- strict thread matches above, loose matches on Subject: below --
2008-01-23 13:37 Tim Southerwood
2008-01-23 17:43 ` Carlos Carvalho
2008-01-24 20:30 ` Tim Southerwood
2008-01-28 17:29 ` Tim Southerwood
2008-01-29 14:16 ` Carlos Carvalho
2008-01-29 22:58 ` Bill Davidsen
2008-02-14 10:13 ` Burkhard Carstens