From: Neil Brown <neilb@suse.de>
To: Dan Williams <dan.j.williams@gmail.com>
Cc: dean gaudet <dean@arctic.org>, linux-raid@vger.kernel.org
Subject: Re: 2.6.24-rc6 reproducible raid5 hang
Date: Fri, 11 Jan 2008 12:46:34 +1100 [thread overview]
Message-ID: <18310.51834.334017.700075@notabene.brown>
In-Reply-To: message from Dan Williams on Thursday January 10
On Thursday January 10, dan.j.williams@gmail.com wrote:
> On Jan 10, 2008 12:13 AM, dean gaudet <dean@arctic.org> wrote:
> > w.r.t. dan's cfq comments -- i really don't know the details, but does
> > this mean cfq will misattribute the IO to the wrong user/process? or is
> > it just a concern that CPU time will be spent on someone's IO? the latter
> > is fine to me... the former seems sucky because with today's multicore
> > systems CPU time seems cheap compared to IO.
> >
>
> I do not see this affecting the time-slicing feature of cfq, because,
> as Neil says, the work has to get done at some point. If I give up
> some of my slice working on someone else's I/O, chances are the favor
> will be returned in kind, since the code does not discriminate. The
> io-priority capability of cfq does not work as advertised with
> current MD anyway, since the priority is tied to the current thread
> and the thread that actually submits the i/o on a stripe is
> non-deterministic. So I do not see this change making the situation
> any worse. In fact, it may make it a bit better, since there is a
> higher chance of the thread submitting i/o to MD doing its own i/o to
> the backing disks.
>
> Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Thanks.
But I suspect you didn't test it with a bitmap :-)
I ran the mdadm test suite and it hit a problem - easy enough to fix.
I'll look out for any other possible related problems (due to raid5d
running in different processes) and then submit it.
Thanks,
NeilBrown
Thread overview: 30+ messages
2007-12-27 17:06 2.6.24-rc6 reproducible raid5 hang dean gaudet
2007-12-27 17:39 ` dean gaudet
2007-12-29 16:48 ` dean gaudet
2007-12-29 20:47 ` Dan Williams
2007-12-29 20:58 ` dean gaudet
2007-12-29 21:50 ` Justin Piszcz
2007-12-29 22:11 ` dean gaudet
2007-12-29 22:21 ` dean gaudet
2007-12-29 22:06 ` Dan Williams
2007-12-30 17:58 ` dean gaudet
2008-01-09 18:28 ` Dan Williams
2008-01-10 0:09 ` Neil Brown
2008-01-10 3:07 ` Dan Williams
2008-01-10 3:57 ` Neil Brown
2008-01-10 4:56 ` Dan Williams
2008-01-10 20:28 ` Bill Davidsen
2008-01-10 7:13 ` dean gaudet
2008-01-10 18:49 ` Dan Williams
2008-01-11 1:46 ` Neil Brown [this message]
2008-01-11 2:14 ` dean gaudet
2008-01-10 17:59 ` dean gaudet
2007-12-27 19:52 ` Justin Piszcz
2007-12-28 0:08 ` dean gaudet
-- strict thread matches above, loose matches on Subject: below --
2008-01-23 13:37 Tim Southerwood
2008-01-23 17:43 ` Carlos Carvalho
2008-01-24 20:30 ` Tim Southerwood
2008-01-28 17:29 ` Tim Southerwood
2008-01-29 14:16 ` Carlos Carvalho
2008-01-29 22:58 ` Bill Davidsen
2008-02-14 10:13 ` Burkhard Carstens