From: Christoph Hellwig <hch@lst.de>
To: Chaitanya Kulkarni <kch@nvidia.com>
Cc: song@kernel.org, yukuai@fnnas.com, linan122@huawei.com,
kbusch@kernel.org, axboe@kernel.dk, hch@lst.de, sagi@grimberg.me,
linux-raid@vger.kernel.org, linux-nvme@lists.infradead.org,
kmodukuri@nvidia.com
Subject: Re: [PATCH V2 1/2] md: propagate BLK_FEAT_PCI_P2PDMA from member devices
Date: Thu, 9 Apr 2026 08:27:48 +0200
Message-ID: <20260409062748.GB7335@lst.de>
In-Reply-To: <20260408072537.46540-2-kch@nvidia.com>

On Wed, Apr 08, 2026 at 12:25:36AM -0700, Chaitanya Kulkarni wrote:
> From: Kiran Kumar Modukuri <kmodukuri@nvidia.com>
>
> MD RAID does not propagate BLK_FEAT_PCI_P2PDMA from member devices to
> the RAID device, preventing peer-to-peer DMA through the RAID layer even
> when all underlying devices support it.
>
> Enable BLK_FEAT_PCI_P2PDMA in the raid0, raid1, and raid10 personalities
> during queue limits setup, and clear it in mddev_stack_rdev_limits()
> during array init and in mddev_stack_new_rdev() during hot-add if any
> member device lacks support. Parity RAID personalities (raid4/5/6) are
> excluded because they need CPU access to data pages for parity
> computation, which is incompatible with P2P mappings.
>
> Tested with RAID0/1/10 arrays containing multiple NVMe devices with P2PDMA
> support, confirming that peer-to-peer transfers work correctly through
> the RAID layer.
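
For reference, the mechanism described above might look roughly like
the following; this is an abbreviated, hypothetical sketch rather than
the actual patch, with the rdev iteration and error handling elided:

        /* personality limits setup, e.g. raid0_set_limits(): opt in */
        lim.features |= BLK_FEAT_PCI_P2PDMA;

        /* mddev_stack_rdev_limits() / mddev_stack_new_rdev(): drop the
         * feature again if any member device does not advertise it */
        if (!(bdev_get_queue(rdev->bdev)->limits.features &
              BLK_FEAT_PCI_P2PDMA))
                lim->features &= ~BLK_FEAT_PCI_P2PDMA;
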
Same thing as for nvme-multipath applies here - just set
BLK_FEAT_PCI_P2PDMA unconditionally at setup time for the personalities
that support it, and then rely on an updated blk_stack_limits to clear
it.
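
Concretely, blk_stack_limits() already clears BLK_FEAT_NOWAIT and
BLK_FEAT_POLL when a bottom device lacks them, so a hypothetical
extension (a sketch only, not the final patch) could be:

        /* block/blk-settings.c: blk_stack_limits(): clear the feature
         * if any bottom device in the stack does not support it */
        if (!(b->features & BLK_FEAT_PCI_P2PDMA))
                t->features &= ~BLK_FEAT_PCI_P2PDMA;

The personalities would then set lim.features |= BLK_FEAT_PCI_P2PDMA
unconditionally and need no per-rdev check of their own.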

Thread overview: 6+ messages
2026-04-08 7:25 [PATCH V2 0/2] md/nvme: Enable PCI P2PDMA support for RAID0 and NVMe Multipath Chaitanya Kulkarni
2026-04-08 7:25 ` [PATCH V2 1/2] md: propagate BLK_FEAT_PCI_P2PDMA from member devices Chaitanya Kulkarni
2026-04-09 6:27 ` Christoph Hellwig [this message]
2026-04-08 7:25 ` [PATCH V2 2/2] nvme-multipath: enable PCI P2PDMA for multipath devices Chaitanya Kulkarni
2026-04-09 6:26 ` Christoph Hellwig
2026-04-14 2:57 ` Chaitanya Kulkarni