From: Sagi Grimberg <sagi@grimberg.me>
To: Christoph Hellwig <hch@lst.de>, Keith Busch <keith.busch@intel.com>
Cc: Jens Axboe <axboe@kernel.dk>, Scott Bauer <sbauer@eng.utah.edu>,
linux-nvme@lists.infradead.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 3/3] block: Polling completion performance optimization
Date: Sun, 31 Dec 2017 14:48:01 +0200
Message-ID: <b00a03f2-dbc7-fb4f-393e-1496861dbf28@grimberg.me>
In-Reply-To: <20171229095021.GD24043@lst.de>
On 12/29/2017 11:50 AM, Christoph Hellwig wrote:
> On Thu, Dec 21, 2017 at 02:34:00PM -0700, Keith Busch wrote:
>> It would be nice, but the driver doesn't know a request's completion
>> is going to be polled.
>
> We can trivially set a REQ_POLL bit. In fact my initial patch kit had
> those on the insistence of Jens, but then I removed it because we had
> no users for it.
>
>> Even if it did, we don't have a spec-defined way to tell the
>> controller not to send an interrupt with this command's completion,
>> which would be negated anyway if any interrupt-driven IO is mixed
>> in on the same queue.
>
> Time to add such a flag to the spec then..
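For concreteness, a rough sketch of what the driver side of this could
look like. Both names are hypothetical: REQ_POLL is the bit Christoph
mentions above (mainline later added REQ_HIPRI, since renamed
REQ_POLLED, for the same idea), and NVME_CMD_NO_IRQ stands in for the
spec flag, which does not exist:

static void nvme_setup_polled(struct request *req,
			      struct nvme_command *cmd)
{
	/*
	 * Hypothetical sketch: REQ_POLL and NVME_CMD_NO_IRQ do not
	 * exist; they stand in for the flags discussed above.
	 */
	if (req->cmd_flags & REQ_POLL) {
		/*
		 * The submitter promises to reap this completion via
		 * blk_poll(), so ask the controller not to raise an
		 * interrupt for this command.
		 */
		cmd->common.flags |= NVME_CMD_NO_IRQ;
	}
}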
A flag like that would be very useful. Ideally we could also hook it
into libaio to submit without triggering an interrupt and have
io_getevents poll the underlying bdev (blk_poll), similar to how the
net stack implements low-latency sockets [1].
Having the ability to suppress interrupts and poll for completions
would also be a big win for file servers or targets living in
userspace.
[1] https://lwn.net/Articles/551284/
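As a taste of the userspace side that already exists, here is a
minimal sketch of a polled read using preadv2() with RWF_HIPRI
(Linux 4.6+; glibc 2.26+ for the wrapper) on an O_DIRECT block
device. With a driver that implements polling, such as nvme, the
kernel spins in blk_poll() for the completion instead of sleeping on
the interrupt:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	struct iovec iov;
	void *buf;
	ssize_t ret;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <blockdev>\n", argv[0]);
		return 1;
	}

	/* O_DIRECT so the read goes straight to the device */
	fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* O_DIRECT requires an aligned buffer */
	if (posix_memalign(&buf, 4096, 4096)) {
		close(fd);
		return 1;
	}

	iov.iov_base = buf;
	iov.iov_len = 4096;

	/* RWF_HIPRI: reap the completion by polling (blk_poll) */
	ret = preadv2(fd, &iov, 1, 0, RWF_HIPRI);
	if (ret < 0)
		perror("preadv2");
	else
		printf("read %zd bytes via polled I/O\n", ret);

	free(buf);
	close(fd);
	return 0;
}

Linux 4.13 also added an aio_rw_flags field to struct iocb that
accepts RWF_HIPRI, but having io_getevents() itself drive blk_poll(),
as suggested above, remained the missing piece.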
Thread overview: 34+ messages
2017-12-21 20:46 [PATCH 0/3] Performance enhancements Keith Busch
2017-12-21 20:46 ` [PATCH 1/3] nvme/pci: Start request after doorbell ring Keith Busch
2017-12-21 20:49 ` Jens Axboe
2017-12-21 20:53 ` Jens Axboe
2017-12-21 21:02 ` Keith Busch
2017-12-21 21:01 ` Jens Axboe
2018-01-03 20:21 ` Keith Busch
2018-01-23 0:16 ` Keith Busch
2017-12-25 10:12 ` Sagi Grimberg
2017-12-29 9:44 ` Christoph Hellwig
2017-12-25 10:11 ` Sagi Grimberg
2017-12-26 20:35 ` Keith Busch
2017-12-27 9:02 ` Sagi Grimberg
2017-12-29 9:44 ` Christoph Hellwig
2017-12-21 20:46 ` [PATCH 2/3] nvme/pci: Remove cq_vector check in IO path Keith Busch
2017-12-21 20:54 ` Jens Axboe
2017-12-25 10:10 ` Sagi Grimberg
2017-12-27 21:01 ` Sagi Grimberg
2017-12-29 9:48 ` Christoph Hellwig
2017-12-29 15:39 ` Keith Busch
2017-12-31 12:30 ` Sagi Grimberg
2018-01-02 16:50 ` Keith Busch
2017-12-21 20:46 ` [PATCH 3/3] block: Polling completion performance optimization Keith Busch
2017-12-21 20:56 ` Scott Bauer
2017-12-21 21:00 ` Jens Axboe
2017-12-21 21:34 ` Keith Busch
2017-12-21 22:17 ` Jens Axboe
2017-12-21 23:10 ` Keith Busch
2017-12-22 15:40 ` Jens Axboe
2017-12-29 9:50 ` Christoph Hellwig
2017-12-29 15:51 ` Keith Busch
2017-12-31 12:48 ` Sagi Grimberg [this message]
2017-12-21 20:57 ` Jens Axboe
2017-12-29 9:51 ` Christoph Hellwig