Linux-NVME Archive on lore.kernel.org
From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: Christoph Hellwig <hch@lst.de>, Hector Martin <marcan@marcan.st>,
	Sven Peter <sven@svenpeter.dev>, Keith Busch <kbusch@kernel.org>,
	Sagi Grimberg <sagi@grimberg.me>,
	James Smart <james.smart@broadcom.com>,
	Chaitanya Kulkarni <kch@nvidia.com>
Cc: Alyssa Rosenzweig <alyssa@rosenzweig.io>,
	asahi@lists.linux.dev, linux-nvme@lists.infradead.org
Subject: Re: [PATCH 13/21] nvme: split out a nvme_identify_ns_nvm helper
Date: Thu, 29 Feb 2024 15:52:08 +0200	[thread overview]
Message-ID: <a142cd4d-bedf-4666-8dc7-fcfbfc4ebb78@nvidia.com> (raw)
In-Reply-To: <20240228181215.873854-14-hch@lst.de>



On 28/02/2024 20:12, Christoph Hellwig wrote:
> Split the logic to query the Identify Namespace Data Structure, NVM
> Command Set into a separate helper.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/nvme/host/core.c | 38 ++++++++++++++++++++++++++------------
>   1 file changed, 26 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 78667ba89ec491..adcd11720d1bb4 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -1831,12 +1831,35 @@ static bool nvme_ns_ids_equal(struct nvme_ns_ids *a, struct nvme_ns_ids *b)
>   		a->csi == b->csi;
>   }
>   
> +static int nvme_identify_ns_nvm(struct nvme_ctrl *ctrl, unsigned int nsid,
> +		struct nvme_id_ns_nvm **nvmp)
> +{
> +	struct nvme_command c = {
> +		.identify.opcode	= nvme_admin_identify,
> +		.identify.nsid		= cpu_to_le32(nsid),
> +		.identify.cns		= NVME_ID_CNS_CS_NS,
> +		.identify.csi		= NVME_CSI_NVM,
> +	};
> +	struct nvme_id_ns_nvm *nvm;
> +	int ret;
> +
> +	nvm = kzalloc(sizeof(*nvm), GFP_KERNEL);
> +	if (!nvm)
> +		return -ENOMEM;
> +
> +	ret = nvme_submit_sync_cmd(ctrl->admin_q, &c, nvm, sizeof(*nvm));
> +	if (ret)
> +		kfree(nvm);
> +	else
> +		*nvmp = nvm;
> +	return ret;
> +}
> +
>   static int nvme_init_ms(struct nvme_ctrl *ctrl, struct nvme_ns_head *head,
>   		struct nvme_id_ns *id)
>   {
>   	bool first = id->dps & NVME_NS_DPS_PI_FIRST;
>   	unsigned lbaf = nvme_lbaf_index(id->flbas);
> -	struct nvme_command c = { };
>   	struct nvme_id_ns_nvm *nvm;
>   	int ret = 0;
>   	u32 elbaf;
> @@ -1849,18 +1872,9 @@ static int nvme_init_ms(struct nvme_ctrl *ctrl, struct nvme_ns_head *head,
>   		goto set_pi;
>   	}
>   
> -	nvm = kzalloc(sizeof(*nvm), GFP_KERNEL);
> -	if (!nvm)
> -		return -ENOMEM;
> -
> -	c.identify.opcode = nvme_admin_identify;
> -	c.identify.nsid = cpu_to_le32(head->ns_id);
> -	c.identify.cns = NVME_ID_CNS_CS_NS;
> -	c.identify.csi = NVME_CSI_NVM;
> -
> -	ret = nvme_submit_sync_cmd(ctrl->admin_q, &c, nvm, sizeof(*nvm));
> +	ret = nvme_identify_ns_nvm(ctrl, head->ns_id, &nvm);
>   	if (ret)
> -		goto free_data;
> +		goto set_pi;
>   
>   	elbaf = le32_to_cpu(nvm->elbaf[lbaf]);

I actually had similar logic in one of the patches of the PI series I've 
sent :)
BTW, we only need the elbaf field from the nvm response structure, so we 
could simplify the code a bit and free nvm inside nvme_identify_ns_nvm() 
instead of in the caller...

Otherwise, looks good
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>

>   



Thread overview: 38+ messages
2024-02-28 18:11 convert nvme to atomic queue limits updates Christoph Hellwig
2024-02-28 18:11 ` [PATCH 01/21] block: add a queue_limits_set helper Christoph Hellwig
2024-02-28 18:11 ` [PATCH 02/21] block: add a queue_limits_stack_bdev helper Christoph Hellwig
2024-02-28 18:11 ` [PATCH 03/21] nvme: set max_hw_sectors unconditionally Christoph Hellwig
2024-02-29 10:46   ` Max Gurtovoy
2024-02-28 18:11 ` [PATCH 04/21] nvme: move NVME_QUIRK_DEALLOCATE_ZEROES out of nvme_config_discard Christoph Hellwig
2024-02-29 10:48   ` Max Gurtovoy
2024-02-28 18:11 ` [PATCH 05/21] nvme: remove nvme_revalidate_zones Christoph Hellwig
2024-02-28 23:47   ` Damien Le Moal
2024-02-28 18:12 ` [PATCH 06/21] nvme: move max_integrity_segments handling out of nvme_init_integrity Christoph Hellwig
2024-02-29 10:58   ` Max Gurtovoy
2024-02-28 18:12 ` [PATCH 07/21] nvme: cleanup the nvme_init_integrity calling conventions Christoph Hellwig
2024-02-29 12:33   ` Max Gurtovoy
2024-02-28 18:12 ` [PATCH 08/21] nvme: move blk_integrity_unregister into nvme_init_integrity Christoph Hellwig
2024-02-29 12:36   ` Max Gurtovoy
2024-02-28 18:12 ` [PATCH 09/21] nvme: don't use nvme_update_disk_info for the multipath disk Christoph Hellwig
2024-02-29 12:47   ` Max Gurtovoy
2024-02-29 13:02     ` Max Gurtovoy
2024-02-28 18:12 ` [PATCH 10/21] nvme: move a few things out of nvme_update_disk_info Christoph Hellwig
2024-02-28 18:12 ` [PATCH 11/21] nvme: move setting the write cache flags out of nvme_set_queue_limits Christoph Hellwig
2024-02-29 13:11   ` Max Gurtovoy
2024-02-28 18:12 ` [PATCH 12/21] nvme: move common logic into nvme_update_ns_info Christoph Hellwig
2024-02-29 13:30   ` Max Gurtovoy
2024-02-29 13:40     ` Christoph Hellwig
2024-02-28 18:12 ` [PATCH 13/21] nvme: split out a nvme_identify_ns_nvm helper Christoph Hellwig
2024-02-29 13:52   ` Max Gurtovoy [this message]
2024-02-29 13:53     ` Christoph Hellwig
2024-02-28 18:12 ` [PATCH 14/21] nvme: don't query identify data in configure_metadata Christoph Hellwig
2024-02-28 18:12 ` [PATCH 15/21] nvme: cleanup nvme_configure_metadata Christoph Hellwig
2024-02-28 18:12 ` [PATCH 16/21] nvme-rdma: initialize max_hw_sectors earlier Christoph Hellwig
2024-02-28 18:12 ` [PATCH 17/21] nvme-loop: " Christoph Hellwig
2024-02-28 18:12 ` [PATCH 18/21] nvme-fc: " Christoph Hellwig
2024-02-28 18:12 ` [PATCH 19/21] nvme-apple: " Christoph Hellwig
2024-02-28 18:12 ` [PATCH 20/21] nvme: use the atomic queue limits update API Christoph Hellwig
2024-02-28 18:12 ` [PATCH 21/21] nvme-multipath: pass queue_limits to blk_alloc_disk Christoph Hellwig
2024-03-01 16:20 ` convert nvme to atomic queue limits updates Keith Busch
2024-03-02 13:59   ` Christoph Hellwig
2024-03-02 23:21     ` Keith Busch
