From: Chaitanya Kulkarni <kch@nvidia.com>
To: <song@kernel.org>, <yukuai@fnnas.com>, <linan122@huawei.com>,
<kbusch@kernel.org>, <axboe@kernel.dk>, <hch@lst.de>,
<sagi@grimberg.me>
Cc: <linux-block@vger.kernel.org>, <linux-raid@vger.kernel.org>,
<linux-nvme@lists.infradead.org>, <kmodukuri@nvidia.com>,
Pranjal Shrivastava <praan@google.com>, Xiao Ni <xni@redhat.com>,
Chaitanya Kulkarni <kch@nvidia.com>
Subject: [PATCH V4 2/3] md: propagate BLK_FEAT_PCI_P2PDMA from member devices to RAID device
Date: Wed, 13 May 2026 11:51:52 -0700
Message-ID: <20260513185153.95552-3-kch@nvidia.com>
In-Reply-To: <20260513185153.95552-1-kch@nvidia.com>
From: Kiran Kumar Modukuri <kmodukuri@nvidia.com>
MD RAID does not propagate BLK_FEAT_PCI_P2PDMA from member devices to
the RAID device, preventing peer-to-peer DMA through the RAID layer even
when all underlying devices support it.
Enable BLK_FEAT_PCI_P2PDMA unconditionally in raid0, raid1 and raid10
personalities during queue limits setup. blk_stack_limits() clears it
automatically if any member device lacks support, consistent with how
BLK_FEAT_NOWAIT and BLK_FEAT_POLL are handled in the block core.
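
For reference, the blk_stack_limits() change in patch 1/3 follows the same
pattern as the existing NOWAIT/POLL handling: a feature survives stacking
only if every bottom device reports it. A rough sketch of that pattern
(illustrative only, not the exact hunk from patch 1/3):

	/* block/blk-settings.c, blk_stack_limits(): drop a feature from the
	 * top (stacked) limits when a bottom device does not report it */
	if (!(b->features & BLK_FEAT_NOWAIT))
		t->features &= ~BLK_FEAT_NOWAIT;
	if (!(b->features & BLK_FEAT_POLL))
		t->features &= ~BLK_FEAT_POLL;
	if (!(b->features & BLK_FEAT_PCI_P2PDMA))
		t->features &= ~BLK_FEAT_PCI_P2PDMA;
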
Parity RAID personalities (raid4/5/6) are excluded because they require
CPU access to data pages for parity computation, which is incompatible
with P2P mappings.
Tested with RAID0/1/10 arrays containing multiple NVMe devices with
P2PDMA support, confirming that peer-to-peer transfers work correctly
through the RAID layer.
Tested-by: Pranjal Shrivastava <praan@google.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Kiran Kumar Modukuri <kmodukuri@nvidia.com>
Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
drivers/md/raid0.c | 1 +
drivers/md/raid1.c | 1 +
drivers/md/raid10.c | 1 +
3 files changed, 3 insertions(+)
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 5e38a51e349a..2cdaf7495d92 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -392,6 +392,7 @@ static int raid0_set_limits(struct mddev *mddev)
lim.io_opt = lim.io_min * mddev->raid_disks;
lim.chunk_sectors = mddev->chunk_sectors;
lim.features |= BLK_FEAT_ATOMIC_WRITES;
+ lim.features |= BLK_FEAT_PCI_P2PDMA;
err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
if (err)
return err;
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 64d970e2ef50..cc628a1be52c 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3208,6 +3208,7 @@ static int raid1_set_limits(struct mddev *mddev)
lim.max_hw_wzeroes_unmap_sectors = 0;
lim.logical_block_size = mddev->logical_block_size;
lim.features |= BLK_FEAT_ATOMIC_WRITES;
+ lim.features |= BLK_FEAT_PCI_P2PDMA;
err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
if (err)
return err;
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 39085e7dd6d2..f905dc391b74 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3941,6 +3941,7 @@ static int raid10_set_queue_limits(struct mddev *mddev)
lim.chunk_sectors = mddev->chunk_sectors;
lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
lim.features |= BLK_FEAT_ATOMIC_WRITES;
+ lim.features |= BLK_FEAT_PCI_P2PDMA;
err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
if (err)
return err;
--
2.39.5
Thread overview: 5+ messages
2026-05-13 18:51 [PATCH V4 0/3] md/nvme: Enable PCI P2PDMA support for RAID0 and NVMe Multipath Chaitanya Kulkarni
2026-05-13 18:51 ` [PATCH V4 1/3] block: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits() for non-supporting devices Chaitanya Kulkarni
2026-05-13 18:51 ` Chaitanya Kulkarni [this message]
2026-05-13 18:51 ` [PATCH V4 3/3] nvme-multipath: enable PCI P2PDMA for multipath devices Chaitanya Kulkarni
2026-05-15 4:35 ` [PATCH V4 0/3] md/nvme: Enable PCI P2PDMA support for RAID0 and NVMe Multipath Christoph Hellwig