From: Jens Axboe <jens.axboe@oracle.com>
To: Nick Piggin <npiggin@suse.de>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>,
linux-kernel@vger.kernel.org, arjan@linux.intel.com,
mingo@elte.hu, ak@suse.de, James.Bottomley@SteelEye.com,
andrea@suse.de, clameter@sgi.com, akpm@linux-foundation.org,
andrew.vasquez@qlogic.com, willy@linux.intel.com,
Zach Brown <zach.brown@oracle.com>
Subject: Re: [rfc] direct IO submission and completion scalability issues
Date: Mon, 4 Feb 2008 11:33:52 +0100 [thread overview]
Message-ID: <20080204103352.GD15220@kernel.dk> (raw)
In-Reply-To: <20080204103135.GB15210@wotan.suse.de>
On Mon, Feb 04 2008, Nick Piggin wrote:
> On Mon, Feb 04, 2008 at 11:12:44AM +0100, Jens Axboe wrote:
> > On Sun, Feb 03 2008, Nick Piggin wrote:
> > > On Fri, Jul 27, 2007 at 06:21:28PM -0700, Suresh B wrote:
> > >
> > > Hi guys,
> > >
> > > Just had another thought on how we might do this: migrate the
> > > completions out to the submitting CPUs rather than migrating
> > > submission into the completing CPU.
> > >
> > > I've got a basic patch that passes some stress testing. It seems fairly
> > > simple to do at the block layer, and the bulk of the patch involves
> > > introducing a scalable smp_call_function for it.
> > >
> > > Now it could be optimised more by batching up IPIs, optimising the
> > > call function path, or even migrating the completion event at a
> > > different level...
> > >
> > > However, this is a first cut. It actually seems to take slightly
> > > more CPU to process block IO (~0.2%)... but this is on my dual-core
> > > system with a shared llc, which means there are very few cache
> > > benefits to the migration but non-zero overhead. So on multi-socket
> > > systems it will hopefully move into positive territory.
> >
> > That's pretty funny, I did pretty much the exact same thing last week!
>
> Oh nice ;)
>
>
> > The primary difference between yours and mine is that I used a more
> > private interface to signal a softirq raise on another CPU, instead
> > of allocating call data and exposing a generic interface. That puts
> > the locking in blk-core instead: blk_cpu_done becomes a structure
> > with a lock and a list_head rather than just a list_head, and the
> > migration is intercepted at blk_complete_request() time instead of
> > waiting for an already-raised softirq on that CPU.
>
> Yeah I was looking at that... didn't really want to add the spinlock
> overhead to the non-migration case. Anyway, I guess those sorts of
> fine implementation details are going to have to be sorted out by
> results.
As Andi mentions, we can look into making that lockless. For the initial
implementation I didn't really care; I just wanted something to play
with that would nicely let me control both the submit and completion
sides of the affinity issue.
--
Jens Axboe
Thread overview: 27+ messages
2007-07-28 1:21 [rfc] direct IO submission and completion scalability issues Siddha, Suresh B
2007-07-30 18:20 ` Christoph Lameter
2007-07-30 20:35 ` Siddha, Suresh B
2007-07-31 4:19 ` Nick Piggin
2007-07-31 17:14 ` Siddha, Suresh B
2007-08-01 0:41 ` Nick Piggin
2007-08-01 0:55 ` Siddha, Suresh B
2007-08-01 1:24 ` Nick Piggin
2008-02-03 9:52 ` Nick Piggin
2008-02-03 10:53 ` Pekka Enberg
2008-02-03 11:58 ` Nick Piggin
2008-02-04 2:10 ` David Chinner
2008-02-04 4:14 ` Arjan van de Ven
2008-02-04 4:40 ` David Chinner
2008-02-04 10:09 ` Nick Piggin
2008-02-05 0:14 ` David Chinner
2008-02-08 7:50 ` Nick Piggin
2008-02-04 18:21 ` Zach Brown
2008-02-04 20:10 ` Jens Axboe
2008-02-04 21:45 ` Arjan van de Ven
2008-02-05 8:24 ` Jens Axboe
2008-02-04 10:12 ` Jens Axboe
2008-02-04 10:31 ` Nick Piggin
2008-02-04 10:33 ` Jens Axboe [this message]
2008-02-04 22:28 ` James Bottomley
2008-02-04 10:30 ` Andi Kleen
2008-02-04 21:47 ` Siddha, Suresh B