From: Chaitanya Kulkarni <chaitanyak@nvidia.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Keith Busch <kbusch@kernel.org>, Sagi Grimberg <sagi@grimberg.me>,
Chaitanya Kulkarni <chaitanyak@nvidia.com>,
Kanchan Joshi <joshi.k@samsung.com>,
Leon Romanovsky <leon@kernel.org>,
Nitesh Shetty <nj.shetty@samsung.com>,
Logan Gunthorpe <logang@deltatee.com>,
"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: Re: new DMA API conversion for nvme-pci v3
Date: Thu, 26 Jun 2025 15:33:24 +0000 [thread overview]
Message-ID: <bcdcb5eb-17ed-412f-bf5c-303079798fe2@nvidia.com> (raw)
In-Reply-To: <20250625113531.522027-1-hch@lst.de>
On 6/25/25 04:34, Christoph Hellwig wrote:
> Hi all,
>
> this series converts the nvme-pci driver to the new IOVA-based DMA API
> for the data path.
>
> Changes since v2:
> - fix handling of sgl_threshold=0
>
> Changes since v1:
> - minor cleanups to the block dma mapping helpers
> - fix the metadata SGL supported check for bisectability
> - fix SGL threshold check
> - fix/simplify metadata SGL force checks
I was able to finish testing this series. I don't see any obvious
issues so far; it looks good.

I've also collected multiple sets of performance numbers for different
block sizes using the io_uring and libaio ioengines with fio, and I
don't see any significant performance difference on my setup. I can
share the full log if you want, but I don't want to spam the list.
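For reference, a minimal fio job file along these lines could drive such a comparison (the device path, block size, queue depth, and job count below are illustrative assumptions, not the exact configuration used):

```ini
; nvme-dma-compare.fio -- illustrative sketch, not the actual test config
[global]
filename=/dev/nvme0n1   ; assumed NVMe test device
direct=1                ; bypass the page cache to exercise the driver path
rw=randread
runtime=60
time_based=1
group_reporting=1

[io_uring-4k]
ioengine=io_uring
bs=4k
iodepth=32
numjobs=4

[libaio-4k]
ioengine=libaio
bs=4k
iodepth=32
numjobs=4
```

Repeating the same jobs with other bs= values (e.g. 8k, 64k, 128k) before and after applying the series would give per-block-size numbers for both ioengines.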
Tested-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
-ck
Thread overview: 15+ messages
2025-06-25 11:34 new DMA API conversion for nvme-pci v3 Christoph Hellwig
2025-06-25 11:34 ` [PATCH 1/8] block: don't merge different kinds of P2P transfers in a single bio Christoph Hellwig
2025-06-25 11:34 ` [PATCH 2/8] block: add scatterlist-less DMA mapping helpers Christoph Hellwig
2025-06-25 11:35 ` [PATCH 3/8] nvme-pci: refactor nvme_pci_use_sgls Christoph Hellwig
2025-06-25 16:03 ` Keith Busch
2025-06-25 11:35 ` [PATCH 4/8] nvme-pci: merge the simple PRP and SGL setup into a common helper Christoph Hellwig
2025-06-25 11:35 ` [PATCH 5/8] nvme-pci: remove superfluous arguments Christoph Hellwig
2025-06-25 11:35 ` [PATCH 6/8] nvme-pci: convert the data mapping to blk_rq_dma_map Christoph Hellwig
2025-06-25 11:35 ` [PATCH 7/8] nvme-pci: replace NVME_MAX_KB_SZ with NVME_MAX_BYTE Christoph Hellwig
2025-06-25 11:35 ` [PATCH 8/8] nvme-pci: rework the build time assert for NVME_MAX_NR_DESCRIPTORS Christoph Hellwig
2025-06-25 16:38 ` new DMA API conversion for nvme-pci v3 Jens Axboe
2025-06-27 7:37 ` Daniel Gomez
2025-06-27 14:56 ` Jens Axboe
2025-06-27 17:53 ` Chaitanya Kulkarni
2025-06-26 15:33 ` Chaitanya Kulkarni [this message]