From: Keith Busch <keith.busch@intel.com>
To: Jens Axboe <axboe@kernel.dk>
Cc: Scott Bauer <sbauer@eng.utah.edu>,
linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>
Subject: Re: [PATCH 3/3] block: Polling completion performance optimization
Date: Thu, 21 Dec 2017 16:10:55 -0700
Message-ID: <20171221231054.GA2875@localhost.localdomain>
In-Reply-To: <5d3f06bf-8cb8-048d-30c5-9af611ad9ee7@kernel.dk>

On Thu, Dec 21, 2017 at 03:17:41PM -0700, Jens Axboe wrote:
> On 12/21/17 2:34 PM, Keith Busch wrote:
> > It would be nice, but the driver doesn't know a request's completion
> > is going to be polled.
>
> That's trivially solvable though, since the information is available
> at submission time.
>
> > Even if it did, we don't have a spec defined
> > way to tell the controller not to send an interrupt with this command's
> > completion, which would be negated anyway if any interrupt driven IO
> > is mixed in the same queue. We could possibly create a special queue
> > with interrupts disabled for this purpose if we can pass the HIPRI hint
> > through the request.
>
> There's no way to do it per IO, right. But you can create an sq/cq pair
> without interrupts enabled. This would also allow you to scale better
> with multiple users of polling, a case where we currently don't
> perform as well as spdk, for instance.

Would you be open to having blk-mq provide special hi-pri hardware contexts
for all these requests to come through? Maybe one per NUMA node? If not,
I don't think we have enough unused bits in the NVMe command id to stash
the hctx id to extract the original request.