From: Keith Busch <keith.busch@intel.com>
To: Sagi Grimberg <sagi@grimberg.me>
Cc: Christoph Hellwig <hch@lst.de>,
linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
Jens Axboe <axboe@kernel.dk>
Subject: Re: [PATCH 2/3] nvme/pci: Remove cq_vector check in IO path
Date: Tue, 2 Jan 2018 09:50:39 -0700
Message-ID: <20180102165038.GA24386@localhost.localdomain>
In-Reply-To: <5ee695e8-f294-524a-e945-d484cea6df8a@grimberg.me>
On Sun, Dec 31, 2017 at 02:30:09PM +0200, Sagi Grimberg wrote:
> Not sure if stealing bios from requests is a better design. Note that
> we do exactly this in other transport (nvme_[rdma|loop|fc]_is_ready).
Well, we're currently failing requests that could succeed if we were able
to back them out for re-entry. While such scenarios are uncommon, I think
we can handle them better than ending them in failure.
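Roughly, the difference looks like this (a sketch only, not the patch;
queue_is_ready() is a made-up stand-in for whatever state test applies):

	/* Hypothetical ->queue_rq fragment. Ending the request here
	 * completes its bios in error:
	 */
	if (unlikely(!queue_is_ready(nvmeq)))
		return BLK_STS_IOERR;

	/* whereas asking blk-mq to back the request out lets it be
	 * re-dispatched once the queue is usable again:
	 */
	if (unlikely(!queue_is_ready(nvmeq)))
		return BLK_STS_RESOURCE;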
> I think it would be better to stick to a coherent behavior across
> the nvme subsystem. If this condition statement is really something
> that is buying us measurable performance gain, I think we should apply
> it for other transports as well (although in fabrics we're a bit
> different because we have a dedicated connect that enters .queue_rq)
We should be able to remove all the state checks in the IO path because
the handlers that put the controller into an IO-incapable state should be
able to quiesce the queues before transitioning to that state.
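Conceptually the ordering is (a sketch of the idea only, not the actual
reset/teardown code):

	blk_mq_quiesce_queue(q);	/* stop new ->queue_rq calls and wait
					 * for ones already running to return */
	/* ... now transition to the IO-incapable state, disable the
	 * completion queue, free its vector ...
	 */
	blk_mq_unquiesce_queue(q);	/* only once the queue can accept IO again */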
This is not a significant gain compared to the other two patches in this
series, but the sum of all the little things becomes meaningful.
We'd need to remove this check anyway if we're considering exclusively
polled queues since they wouldn't have a CQ vector.
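For reference, the check this patch removes is of roughly this shape
(paraphrased, not the exact hunk): a disabled queue is flagged by a
negative cq_vector, and IO submitted to it is failed.

	/* Paraphrased from the nvme_queue_rq() path; not the exact hunk. */
	if (unlikely(nvmeq->cq_vector < 0))
		return BLK_STS_IOERR;

An exclusively polled queue would legitimately run without a cq_vector, so
the test would no longer be a reliable "queue is disabled" marker.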
Thread overview: 34+ messages
2017-12-21 20:46 [PATCH 0/3] Performance enhancements Keith Busch
2017-12-21 20:46 ` [PATCH 1/3] nvme/pci: Start request after doorbell ring Keith Busch
2017-12-21 20:49 ` Jens Axboe
2017-12-21 20:53 ` Jens Axboe
2017-12-21 21:02 ` Keith Busch
2017-12-21 21:01 ` Jens Axboe
2018-01-03 20:21 ` Keith Busch
2018-01-23 0:16 ` Keith Busch
2017-12-25 10:12 ` Sagi Grimberg
2017-12-29 9:44 ` Christoph Hellwig
2017-12-25 10:11 ` Sagi Grimberg
2017-12-26 20:35 ` Keith Busch
2017-12-27 9:02 ` Sagi Grimberg
2017-12-29 9:44 ` Christoph Hellwig
2017-12-21 20:46 ` [PATCH 2/3] nvme/pci: Remove cq_vector check in IO path Keith Busch
2017-12-21 20:54 ` Jens Axboe
2017-12-25 10:10 ` Sagi Grimberg
2017-12-27 21:01 ` Sagi Grimberg
2017-12-29 9:48 ` Christoph Hellwig
2017-12-29 15:39 ` Keith Busch
2017-12-31 12:30 ` Sagi Grimberg
2018-01-02 16:50 ` Keith Busch [this message]
2017-12-21 20:46 ` [PATCH 3/3] block: Polling completion performance optimization Keith Busch
2017-12-21 20:56 ` Scott Bauer
2017-12-21 21:00 ` Jens Axboe
2017-12-21 21:34 ` Keith Busch
2017-12-21 22:17 ` Jens Axboe
2017-12-21 23:10 ` Keith Busch
2017-12-22 15:40 ` Jens Axboe
2017-12-29 9:50 ` Christoph Hellwig
2017-12-29 15:51 ` Keith Busch
2017-12-31 12:48 ` Sagi Grimberg
2017-12-21 20:57 ` Jens Axboe
2017-12-29 9:51 ` Christoph Hellwig