From: Jens Axboe <axboe@kernel.dk>
To: Anuj gupta <anuj1072538@gmail.com>
Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
linux-nvme@lists.infradead.org
Subject: Re: [PATCHSET v2 0/5] Enable alloc caching and batched freeing for passthrough
Date: Wed, 28 Sep 2022 08:22:20 -0600 [thread overview]
Message-ID: <45fef5c6-8945-a140-a3ce-34bb4b287dc4@kernel.dk> (raw)
In-Reply-To: <CACzX3AumYMDVPwvRYpMi6vvcPTzR0W0bUT1-545HvArpH+7Uwg@mail.gmail.com>
On 9/28/22 7:23 AM, Anuj gupta wrote:
> On Tue, Sep 27, 2022 at 7:14 AM Jens Axboe <axboe@kernel.dk> wrote:
>>
>> Hi,
>>
>> The passthrough IO path currently doesn't do any request allocation
>> batching like we do for normal IO. Wire this up through the usual
>> blk_mq_alloc_request() allocation helper.
>>
>> Similarly, we don't currently support batched completions for
>> passthrough IO. Allow the request->end_io() handler to return back
>> whether or not it retains ownership of the request. By default all
>> handlers are converted to returning RQ_END_IO_NONE, which retains
>> the existing behavior. But with that in place, we can tweak the
>> nvme uring_cmd end_io handler to pass back ownership, and hence enable
>> completion batching for passthrough requests as well.
>>
>> This is good for a 10% improvement for passthrough performance. For
>> a non-drive limited test case, passthrough IO is now more efficient
>> than the regular bdev O_DIRECT path.
>>
>> Changes since v1:
>> - Remove spurious semicolon
>> - Cleanup struct nvme_uring_cmd_pdu handling
>>
>> --
>> Jens Axboe
>>
>>
> With this patch series, I see an improvement of ~12% (2.34 to 2.63 MIOPS)
> with polling enabled and ~4% (1.84 to 1.92 MIOPS) with polling disabled,
> using the t/io_uring utility (from fio) on my setup!

Thanks for your testing! I'll add your Reviewed-by to the series.
--
Jens Axboe
Thread overview: 16+ messages
2022-09-27 1:44 [PATCHSET v2 0/5] Enable alloc caching and batched freeing for passthrough Jens Axboe
2022-09-27 1:44 ` [PATCH 1/5] block: enable batched allocation for blk_mq_alloc_request() Jens Axboe
2022-09-28 13:38 ` Anuj gupta
2022-09-27 1:44 ` [PATCH 2/5] block: change request end_io handler to pass back a return value Jens Axboe
2022-09-27 1:44 ` [PATCH 3/5] block: allow end_io based requests in the completion batch handling Jens Axboe
2022-09-28 13:42 ` Anuj gupta
2022-09-27 1:44 ` [PATCH 4/5] nvme: split out metadata vs non metadata end_io uring_cmd completions Jens Axboe
2022-09-27 7:50 ` Christoph Hellwig
2022-09-28 13:51 ` Anuj gupta
2022-09-28 14:47 ` Sagi Grimberg
2022-09-27 1:44 ` [PATCH 5/5] nvme: enable batched completions of passthrough IO Jens Axboe
2022-09-28 13:55 ` Anuj gupta
2022-09-28 14:47 ` Sagi Grimberg
2022-09-28 13:23 ` [PATCHSET v2 0/5] Enable alloc caching and batched freeing for passthrough Anuj gupta
2022-09-28 14:22 ` Jens Axboe [this message]
2022-09-28 17:05 ` Keith Busch