From: Chaitanya Kulkarni <chaitanyak@nvidia.com>
To: Christoph Hellwig <hch@lst.de>,
Chaitanya Kulkarni <chaitanyak@nvidia.com>
Cc: "song@kernel.org" <song@kernel.org>,
"yukuai@fnnas.com" <yukuai@fnnas.com>,
"linan122@huawei.com" <linan122@huawei.com>,
"kbusch@kernel.org" <kbusch@kernel.org>,
"axboe@kernel.dk" <axboe@kernel.dk>,
"sagi@grimberg.me" <sagi@grimberg.me>,
"linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
Kiran Modukuri <kmodukuri@nvidia.com>
Subject: Re: [PATCH V2 2/2] nvme-multipath: enable PCI P2PDMA for multipath devices
Date: Tue, 14 Apr 2026 02:57:58 +0000
Message-ID: <0ed18538-695b-4915-bb78-4e154a2a2d9d@nvidia.com>
In-Reply-To: <20260409062659.GA7335@lst.de>
On 4/8/26 23:26, Christoph Hellwig wrote:
> On Wed, Apr 08, 2026 at 12:25:37AM -0700, Chaitanya Kulkarni wrote:
>> From: Kiran Kumar Modukuri <kmodukuri@nvidia.com>
>>
>> NVMe multipath does not expose BLK_FEAT_PCI_P2PDMA on the head disk
>> even when the underlying controller supports it.
>>
>> Set BLK_FEAT_PCI_P2PDMA in nvme_mpath_alloc_disk() when the controller
>> advertises P2PDMA support via ctrl->ops->supports_pci_p2pdma.
>>
>> Since multipath can match paths across different transports (e.g. PCIe
>> and FC), not all paths are guaranteed to support P2PDMA. Clear
>> BLK_FEAT_PCI_P2PDMA from the head disk in nvme_mpath_add_disk() if the
>> newly added path does not support it, ensuring the feature is only
>> advertised when every member supports it.
>>
>> Signed-off-by: Kiran Kumar Modukuri <kmodukuri@nvidia.com>
>> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
>> ---
>> drivers/nvme/host/multipath.c | 18 ++++++++++++++++++
>> 1 file changed, 18 insertions(+)
>>
>> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
>> index ba00f0b72b85..48d920ce803f 100644
>> --- a/drivers/nvme/host/multipath.c
>> +++ b/drivers/nvme/host/multipath.c
>> @@ -737,6 +737,9 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
>> BLK_FEAT_POLL | BLK_FEAT_ATOMIC_WRITES;
>> if (head->ids.csi == NVME_CSI_ZNS)
>> lim.features |= BLK_FEAT_ZONED;
>> + if (ctrl->ops && ctrl->ops->supports_pci_p2pdma &&
>> + ctrl->ops->supports_pci_p2pdma(ctrl))
>> + lim.features |= BLK_FEAT_PCI_P2PDMA;
> I think we can just add this unconditionally here, similar to all the
> other features above the ZNS conditional, as any non-supporting
> controller will clear it later.
>
>>
>> void nvme_mpath_add_disk(struct nvme_ns *ns, __le32 anagrpid)
>> {
>> + struct nvme_ns_head *head = ns->head;
>> +
>> + /*
>> + * Clear BLK_FEAT_PCI_P2PDMA on the head disk if this path does not
>> + * support it. Multipath may span different transports (e.g. PCIe and
>> + * FC), so every member must support P2PDMA for it to be safe on the
>> + * head disk.
>> + */
>> + if (head->disk && !blk_queue_pci_p2pdma(ns->queue)) {
>> + struct queue_limits lim =
>> + queue_limits_start_update(head->disk->queue);
>> + lim.features &= ~BLK_FEAT_PCI_P2PDMA;
>> + queue_limits_commit_update(head->disk->queue, &lim);
>> + }
> And this really should go into the core block code in blk_stack_limits,
> so that BLK_FEAT_PCI_P2PDMA is cleared whenever a non-conforming
> device is added, similar to BLK_FEAT_NOWAIT and BLK_FEAT_POLL.
>
I have a V3 with all the comments addressed, including a prep patch
that moves the clearing above into the block layer.
I'm waiting to get it tested on Kiran's setup so I can generate the
test log.
-ck
Thread overview: 6+ messages
2026-04-08 7:25 [PATCH V2 0/2] md/nvme: Enable PCI P2PDMA support for RAID0 and NVMe Multipath Chaitanya Kulkarni
2026-04-08 7:25 ` [PATCH V2 1/2] md: propagate BLK_FEAT_PCI_P2PDMA from member devices Chaitanya Kulkarni
2026-04-09 6:27 ` Christoph Hellwig
2026-04-08 7:25 ` [PATCH V2 2/2] nvme-multipath: enable PCI P2PDMA for multipath devices Chaitanya Kulkarni
2026-04-09 6:26 ` Christoph Hellwig
2026-04-14 2:57 ` Chaitanya Kulkarni [this message]