public inbox for linux-nvme@lists.infradead.org
From: Chaitanya Kulkarni <chaitanyak@nvidia.com>
To: Nitesh Shetty <nj.shetty@samsung.com>,
	Chaitanya Kulkarni <chaitanyak@nvidia.com>
Cc: "song@kernel.org" <song@kernel.org>,
	"yukuai@fnnas.com" <yukuai@fnnas.com>,
	"linan122@huawei.com" <linan122@huawei.com>,
	"kbusch@kernel.org" <kbusch@kernel.org>,
	"axboe@kernel.dk" <axboe@kernel.dk>, "hch@lst.de" <hch@lst.de>,
	"sagi@grimberg.me" <sagi@grimberg.me>,
	"linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	Kiran Modukuri <kmodukuri@nvidia.com>
Subject: Re: [PATCH V3 1/3] block: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits() for non-supporting devices
Date: Tue, 21 Apr 2026 22:30:07 +0000	[thread overview]
Message-ID: <a2a14d08-cb31-453a-9ddd-71f7dd4d6992@nvidia.com> (raw)
In-Reply-To: <20260417101127.uku7q65akj6gwwh4@green245.gost>

On 4/17/26 03:11, Nitesh Shetty wrote:
> On 16/04/26 02:26PM, Chaitanya Kulkarni wrote:
>> BLK_FEAT_NOWAIT and BLK_FEAT_POLL are cleared in blk_stack_limits()
>> when an underlying device does not support them.  Apply the same
>> treatment to BLK_FEAT_PCI_P2PDMA: stacking drivers set it
>> unconditionally and rely on the core to clear it whenever a
>> non-supporting member device is stacked.
>>
>> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
>> ---
>> block/blk-settings.c | 2 ++
>> 1 file changed, 2 insertions(+)
>>
>> diff --git a/block/blk-settings.c b/block/blk-settings.c
>> index 78c83817b9d3..8274631290db 100644
>> --- a/block/blk-settings.c
>> +++ b/block/blk-settings.c
>> @@ -795,6 +795,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
>>     if (!(b->features & BLK_FEAT_NOWAIT))
>>         t->features &= ~BLK_FEAT_NOWAIT;
>>     if (!(b->features & BLK_FEAT_POLL))
>>         t->features &= ~BLK_FEAT_POLL;
>> +    if (!(b->features & BLK_FEAT_PCI_P2PDMA))
>> +        t->features &= ~BLK_FEAT_PCI_P2PDMA;
>>
>>     t->flags |= (b->flags & BLK_FLAG_MISALIGNED);
> I think you need the patch below [1] as well, so that this can be unset here.
> I also feel it would be better to include Mike, Mikulas, and the dm-devel
> mailing list as well.
>

get_maintainer.pl didn't list Mike/Mikulas or dm-devel.


> Thanks,
> Nitesh
>
> [1]
> diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
> index dc2eff6b739d..0442c1f4c686 100644
> --- a/drivers/md/dm-table.c
> +++ b/drivers/md/dm-table.c
> @@ -590,7 +590,8 @@ int dm_split_args(int *argc, char ***argvp, char *input)
>  static void dm_set_stacking_limits(struct queue_limits *limits)
>  {
>      blk_set_stacking_limits(limits);
> -    limits->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT | BLK_FEAT_POLL;
> +    limits->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT | BLK_FEAT_POLL |
> +        BLK_FEAT_PCI_P2PDMA;
>  }
>
DM P2PDMA support needs a separate DM patch series with explicit target
capability plumbing, enabling the feature only for vetted targets.
Enabling it blindly for all targets is unlikely to be accepted.

This series is limited to md RAID0/1/10 and native NVMe multipath.

I'll send an incremental DM series separately and will CC the DM
maintainers and dm-devel. Thanks for looking into it.

-ck




Thread overview: 15+ messages
2026-04-16 21:26 [PATCH V3 0/2] md/nvme: Enable PCI P2PDMA support for RAID0 and NVMe Multipath Chaitanya Kulkarni
2026-04-16 21:26 ` [PATCH V3 1/3] block: clear BLK_FEAT_PCI_P2PDMA in blk_stack_limits() for non-supporting devices Chaitanya Kulkarni
2026-04-17  7:52   ` Christoph Hellwig
2026-04-17 10:11   ` Nitesh Shetty
2026-04-21 22:30     ` Chaitanya Kulkarni [this message]
2026-04-22  5:30       ` Nitesh Shetty
2026-04-22  6:20     ` Christoph Hellwig
2026-04-16 21:26 ` [PATCH V3 2/3] md: propagate BLK_FEAT_PCI_P2PDMA from member devices to RAID device Chaitanya Kulkarni
2026-04-17  7:53   ` Christoph Hellwig
2026-04-21  9:18   ` Xiao Ni
2026-04-16 21:26 ` [PATCH V3 3/3] nvme-multipath: enable PCI P2PDMA for multipath devices Chaitanya Kulkarni
2026-04-17  7:53   ` Christoph Hellwig
2026-04-17 10:42   ` Nitesh Shetty
2026-04-21 22:32 ` [PATCH V3 0/2] md/nvme: Enable PCI P2PDMA support for RAID0 and NVMe Multipath Chaitanya Kulkarni
2026-04-22  6:22   ` hch

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=a2a14d08-cb31-453a-9ddd-71f7dd4d6992@nvidia.com \
    --to=chaitanyak@nvidia.com \
    --cc=axboe@kernel.dk \
    --cc=hch@lst.de \
    --cc=kbusch@kernel.org \
    --cc=kmodukuri@nvidia.com \
    --cc=linan122@huawei.com \
    --cc=linux-nvme@lists.infradead.org \
    --cc=linux-raid@vger.kernel.org \
    --cc=nj.shetty@samsung.com \
    --cc=sagi@grimberg.me \
    --cc=song@kernel.org \
    --cc=yukuai@fnnas.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox