From: Andrew Morton <akpm@osdl.org>
To: Ingo Molnar <mingo@elte.hu>
Cc: axboe@suse.de, linux-kernel@vger.kernel.org
Subject: Re: [patch] max-sectors-2.6.9-rc1-bk14-A0
Date: Wed, 8 Sep 2004 04:43:28 -0700
Message-ID: <20040908044328.46eec88b.akpm@osdl.org>
In-Reply-To: <20040908104931.GA5523@elte.hu>
Ingo Molnar <mingo@elte.hu> wrote:
>
> * Andrew Morton <akpm@osdl.org> wrote:
>
> > > the attached patch introduces two new /sys/block values:
> > >
> > > /sys/block/*/queue/max_hw_sectors_kb
> > > /sys/block/*/queue/max_sectors_kb
> > >
> > > max_hw_sectors_kb is the maximum that the driver can handle and is
> > > readonly. max_sectors_kb is the current max_sectors value and can be
> > > tuned by root. PAGE_SIZE granularity is enforced.
> > >
> > > It's all locking-safe and all affected layered drivers have been updated
> > > as well. The patch has been in testing for a couple of weeks already as
> > > part of the voluntary-preempt patches and it works just fine - people
> > > use it to reduce IDE IRQ handling latencies.
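The PAGE_SIZE-granularity enforcement described in the quoted patch can be illustrated with a small userspace sketch (the helper name is invented, 4 KiB pages are assumed, and this only mirrors the rounding/clamping behaviour the patch describes, not the kernel code itself):

```python
PAGE_SIZE_KB = 4  # assumes 4 KiB pages (the common x86 case)

def clamp_max_sectors_kb(requested_kb, hw_max_kb):
    """Mirror the described sysfs behaviour: round the requested value
    down to PAGE_SIZE granularity, keep at least one page, and never
    exceed the read-only hardware limit (max_hw_sectors_kb)."""
    kb = (requested_kb // PAGE_SIZE_KB) * PAGE_SIZE_KB
    return max(PAGE_SIZE_KB, min(kb, hw_max_kb))

# A root user would then write the result to
# /sys/block/<dev>/queue/max_sectors_kb.
print(clamp_max_sectors_kb(32, 512))    # -> 32
print(clamp_max_sectors_kb(1000, 512))  # -> 512
```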
> >
> > Could you remind us what the cause of the latency is, and its
> > duration?
> >
> > (Am vaguely surprised that it's an issue at, what, 32 pages? Is
> > something sucky happening?)
>
> yes, we are touching and completing 32 (or 64?) completely cache-cold
> structures: the page and the bio, which are on two separate cachelines a
> pop. We also call into the mempool code for every bio completed. With
> the default max_sectors people reported hardirq latencies up to 1msec or
> more. You can see a trace of a 600+usec latency at:
>
> http://krustophenia.net/testresults.php?dataset=2.6.8-rc4-bk3-O7#/var/www/2.6.8-rc4-bk3-O7/ide_irq_latency_trace.txt
>
> here it's ~8 usecs per page completion - with 64 pages this completion
> activity alone is 512 usecs. So people want to have a way to tune down
> the maximum overhead in hardirq handlers. Users of the VP patches have
> reported good results (== no significant performance impact) with
> max_sectors at 32KB (8 pages) or even 16KB (4 pages).
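The arithmetic quoted above can be checked back-of-envelope, assuming only the ~8 usec/page figure from the cited trace and 4 KiB pages:

```python
usec_per_page = 8    # per-page completion cost seen in the cited trace
default_pages = 64   # pages completed per request at the default max_sectors
print(usec_per_page * default_pages)  # -> 512 usec, the figure quoted above

# The tuned-down settings reported by VP users, assuming 4 KiB pages:
for kb in (32, 16):
    pages = kb // 4
    print(f"{kb} KB -> {pages} pages -> {pages * usec_per_page} usec")
```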
Still sounds a bit odd. How many cachelines can that CPU fetch in 8 usecs?
Several tens at least?
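A rough answer to that question, assuming a fully serialized ~150 ns DRAM miss latency (a plausible 2004-era figure, not taken from the thread):

```python
miss_ns = 150          # assumed latency of one serialized cache miss
window_ns = 8 * 1000   # the ~8 usec per-page completion cost
print(window_ns // miss_ns)  # -> 53 cache-cold lines
```

That is indeed "several tens" of lines, far more than the two cachelines per page mentioned above, which is consistent with the suspicion that something beyond raw cache misses is eating the time.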
Thread overview (8+ messages):
2004-09-08 10:04 [patch] max-sectors-2.6.9-rc1-bk14-A0 Ingo Molnar
2004-09-08 10:09 ` Andrew Morton
2004-09-08 10:49 ` Ingo Molnar
2004-09-08 11:43 ` Andrew Morton [this message]
2004-09-08 12:38 ` Ingo Molnar
2004-09-08 10:17 ` Jens Axboe
2004-09-08 10:54 ` Ingo Molnar
2004-09-08 11:05 ` Jens Axboe