From: Chaitanya Kulkarni <chaitanyak@nvidia.com>
To: "song@kernel.org" <song@kernel.org>,
"yukuai@fnnas.com" <yukuai@fnnas.com>,
"linan122@huawei.com" <linan122@huawei.com>,
"kbusch@kernel.org" <kbusch@kernel.org>,
"axboe@kernel.dk" <axboe@kernel.dk>, "hch@lst.de" <hch@lst.de>
Cc: Chaitanya Kulkarni <chaitanyak@nvidia.com>,
"linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
Kiran Modukuri <kmodukuri@nvidia.com>,
"sagi@grimberg.me" <sagi@grimberg.me>
Subject: Re: [PATCH V3 0/2] md/nvme: Enable PCI P2PDMA support for RAID0 and NVMe Multipath
Date: Tue, 21 Apr 2026 22:32:14 +0000
Message-ID: <3c4d2714-cbd4-40ce-b845-be9a02561d71@nvidia.com>
In-Reply-To: <20260416212633.72650-1-kch@nvidia.com>
On 4/16/26 14:26, Chaitanya Kulkarni wrote:
> Hi,
>
> This patch series extends PCI peer-to-peer DMA (P2PDMA) support to enable
> direct data transfers between PCIe devices through RAID and NVMe multipath
> block layers.
>
> Current Linux kernel P2PDMA infrastructure supports direct peer-to-peer
> transfers, but this support is not propagated through storage stacks
> such as MD RAID and NVMe multipath. This series adds three patches, a
> block-layer prep change plus MD RAID 0/1/10 and NVMe multipath changes,
> to propagate P2PDMA support through the storage stack.
>
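For context, BLK_FEAT_PCI_P2PDMA is a queue_limits feature flag, so a
stacking driver only has to advertise it on the virtual device and let
the limit stacking withdraw it again when a member device cannot accept
peer-to-peer pages. A rough sketch of what the md side could look like,
modelled on the raid0_set_limits() helper in recent kernels and trimmed
to the relevant lines (an illustration of the mechanism, not the actual
patch):

static int raid0_set_limits(struct mddev *mddev)
{
        struct queue_limits lim;
        int err;

        md_init_stacking_limits(&lim);
        /* chunk size / io_min / io_opt setup trimmed for brevity */

        /* Claim P2PDMA support on the RAID device up front ... */
        lim.features |= BLK_FEAT_PCI_P2PDMA;

        /*
         * ... and let the per-rdev limit stacking clear it again if any
         * member device does not advertise the feature.
         */
        err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
        if (err)
                return err;
        return queue_limits_set(mddev->gendisk->queue, &lim);
}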
> All four test scenarios demonstrate that P2PDMA capabilities are correctly
> propagated through both the MD RAID layer (patch 1/2) and NVMe multipath
> layer (patch 2/2). Direct peer-to-peer transfers complete successfully with
> full data integrity verification, confirming that:
>
> 1. RAID devices properly inherit P2PDMA capability from member devices
> 2. NVMe multipath devices correctly expose P2PDMA support
> 3. P2P memory buffers can be used for transfers involving both types
> 4. Data integrity is maintained across all transfer combinations
>
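As a concrete example of item 3 above: the peer-to-peer buffers in such
a test typically come from a P2PDMA provider (for instance an NVMe CMB)
through the pci_alloc_p2pmem() API, and I/O carrying those pages is only
accepted by the RAID/multipath device if its queue advertises
BLK_FEAT_PCI_P2PDMA. A minimal sketch, with a made-up helper name purely
for illustration:

#include <linux/pci-p2pdma.h>

/*
 * Hypothetical helper, not part of the patch set: allocate a buffer
 * from a P2PDMA provider so it can back I/O issued to a RAID or
 * multipath block device that advertises BLK_FEAT_PCI_P2PDMA.
 * Free it again with pci_free_p2pmem(provider, buf, size).
 */
static void *example_alloc_p2p_buf(struct pci_dev *provider, size_t size)
{
        return pci_alloc_p2pmem(provider, size); /* NULL on failure */
}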
> I've added the patch-specific tests and a blktests log at the end as well.
>
> Repo:-
>
> git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux.git
>
> Branch HEAD:-
>
> commit 88a57e15861997dd6fa98154ad087f7831bbead1 (origin/for-next)
> Merge: 81a0a2e4e535 36446de0c30c
> Author: Jens Axboe <axboe@kernel.dk>
> Date: Fri Apr 10 07:02:42 2026 -0600
>
> Merge branch 'for-7.1/block' into for-next
>
> * for-7.1/block:
> ublk: fix tautological comparison warning in ublk_ctrl_reg_buf
> -ck
>
> Changes from V2:-
>
> 1. Unconditionally set BLK_FEAT_PCI_P2PDMA for md and nvme multipath.
>    (Christoph)
> 2. Add a prep patch to disable BLK_FEAT_PCI_P2PDMA in blk_stack_limits().
>    (Christoph)
>
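Regarding item 2 in the V2 changes above: the prep patch presumably
follows the pattern blk_stack_limits() already uses for features such as
BLK_FEAT_POLL and BLK_FEAT_NOWAIT, where the stacked device may claim a
feature but loses it as soon as one bottom device does not advertise it.
A sketch of that idea (not necessarily the exact hunk):

/* In blk_stack_limits(), next to the existing BLK_FEAT_POLL /
 * BLK_FEAT_NOWAIT handling:
 */
        /*
         * P2PDMA only works on the stacked device if every bottom
         * device accepts peer-to-peer pages, so drop the feature as
         * soon as one member does not advertise it.
         */
        if (!(b->features & BLK_FEAT_PCI_P2PDMA))
                t->features &= ~BLK_FEAT_PCI_P2PDMA;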
> Changes from V1:-
> - Update patch 1 to explicitly support MD RAID 0/1/10.
> - Fix signoff chain order for patch 2.
> - Clear BLK_FEAT_PCI_P2PDMA in nvme_mpath_add_disk() when a newly
> added path does not support it, to handle multipath across different
> transports.
> - Add an nvme multipath test log for mixed TCP and PCIe transports.
>
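On the nvme-multipath side, with the V3 approach the head device's queue
limits presumably just claim the feature unconditionally when the
multipath disk is allocated, and the blk_stack_limits() prep patch takes
care of paths (e.g. TCP) that cannot do P2P. A sketch modelled on the
nvme_mpath_alloc_disk() limits setup in current mainline (illustration
only; the exact hunk may differ):

/* In nvme_mpath_alloc_disk(), when building the head node's limits: */
        blk_set_stacking_limits(&lim);
        lim.dma_alignment = 3;
        lim.features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT | BLK_FEAT_POLL |
                        BLK_FEAT_PCI_P2PDMA;    /* new: claim P2PDMA */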
> Chaitanya Kulkarni (1):
> block: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits() for
> non-supporting devices
>
> Kiran Kumar Modukuri (2):
> md: propagate BLK_FEAT_PCI_P2PDMA from member devices to RAID device
> nvme-multipath: enable PCI P2PDMA for multipath devices
>
> block/blk-settings.c | 2 ++
> drivers/md/raid0.c | 1 +
> drivers/md/raid1.c | 1 +
> drivers/md/raid10.c | 1 +
> drivers/nvme/host/multipath.c | 2 +-
> 5 files changed, 6 insertions(+), 1 deletion(-)
If there are no objections, is it possible to merge this series via
linux-block/for-next?
-ck
Thread overview: 15+ messages
2026-04-16 21:26 [PATCH V3 0/2] md/nvme: Enable PCI P2PDMA support for RAID0 and NVMe Multipath Chaitanya Kulkarni
2026-04-16 21:26 ` [PATCH V3 1/3] block: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits() for non-supporting devices Chaitanya Kulkarni
2026-04-17 7:52 ` Christoph Hellwig
2026-04-17 10:11 ` Nitesh Shetty
2026-04-21 22:30 ` Chaitanya Kulkarni
2026-04-22 5:30 ` Nitesh Shetty
2026-04-22 6:20 ` Christoph Hellwig
2026-04-16 21:26 ` [PATCH V3 2/3] md: propagate BLK_FEAT_PCI_P2PDMA from member devices to RAID device Chaitanya Kulkarni
2026-04-17 7:53 ` Christoph Hellwig
2026-04-21 9:18 ` Xiao Ni
2026-04-16 21:26 ` [PATCH V3 3/3] nvme-multipath: enable PCI P2PDMA for multipath devices Chaitanya Kulkarni
2026-04-17 7:53 ` Christoph Hellwig
2026-04-17 10:42 ` Nitesh Shetty
2026-04-21 22:32 ` Chaitanya Kulkarni [this message]
2026-04-22 6:22 ` [PATCH V3 0/2] md/nvme: Enable PCI P2PDMA support for RAID0 and NVMe Multipath hch