public inbox for linux-nvme@lists.infradead.org
From: John Garry <john.g.garry@oracle.com>
To: Alan Adamson <alan.adamson@oracle.com>, linux-nvme@lists.infradead.org
Cc: kch@nvidia.com, kbusch@kernel.org, hch@lst.de, sagi@grimberg.me
Subject: Re: [PATCH 1/2] nvme: multipath: atomic queue limits need to be inherited
Date: Thu, 1 May 2025 06:47:00 +0100	[thread overview]
Message-ID: <55366449-8011-42eb-8b20-fdff02c15eda@oracle.com> (raw)
In-Reply-To: <20250430171830.1494033-2-alan.adamson@oracle.com>

On 30/04/2025 18:18, Alan Adamson wrote:

Thanks for doing this.

> When a controller is attached that has the CMIC.MCTRS bit set, it indicates
> the subsystem supports multiple controllers and it is possible a namespace
> can be shared between those multiple controllers in a multipathed
> configuration.
> 
> When a namespace of a CMIC.MCTRS enabled subsystem is allocated, a
> multipath node is created.  The queue limits for this node are inherited
> from the namespace being allocated. When inheriting queue limits, the
> features being inherited need to be specified. The atomic write feature
> (BLK_FEAT_ATOMIC_WRITES) was not specified so the atomic queue limits
> were not inherited by the multipath disk node which resulted in the sysfs
> atomic write attributes being zeroed. The fix is to include
> BLK_FEAT_ATOMIC_WRITES in the list of features to be inherited.

nit: is this really being inherited? It seems to just be explicitly 
enabled.

> 
> Signed-off-by: Alan Adamson <alan.adamson@oracle.com>
> Signed-off-by: John Garry <john.g.garry@oracle.com>

Please add the following Reviewed-by instead of the Signed-off-by:
Reviewed-by: John Garry <john.g.garry@oracle.com>

> ---
>   drivers/nvme/host/multipath.c | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index 250f3da67cc9..cf0ef4745564 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -638,7 +638,8 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
>   
>   	blk_set_stacking_limits(&lim);
>   	lim.dma_alignment = 3;
> -	lim.features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT | BLK_FEAT_POLL;
> +	lim.features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT |
> +		BLK_FEAT_POLL | BLK_FEAT_ATOMIC_WRITES;
>   	if (head->ids.csi == NVME_CSI_ZNS)
>   		lim.features |= BLK_FEAT_ZONED;
>   




Thread overview: 8+ messages
2025-04-30 17:18 [PATCH 0/2] NVMe Atomic Write fixes Alan Adamson
2025-04-30 17:18 ` [PATCH 1/2] nvme: multipath: atomic queue limits need to be inherited Alan Adamson
2025-05-01  5:47   ` John Garry [this message]
2025-05-01  6:06     ` John Garry
2025-04-30 17:18 ` [PATCH 2/2] nvme: all namespaces in a subsystem must adhere to a common atomic write size Alan Adamson
2025-05-02  6:52   ` Christoph Hellwig
2025-05-06 15:57     ` alan.adamson
2025-05-07  6:46       ` Christoph Hellwig
