From: Chuck Lever <chuck.lever@oracle.com>
To: Christoph Hellwig <hch@lst.de>
Cc: kbusch@kernel.org, sagi@grimberg.me,
linux-nvme@lists.infradead.org, linux-nfs@vger.kernel.org
Subject: Re: [PATCH] nvme: implement ->get_unique_id
Date: Fri, 5 Jul 2024 13:15:32 -0400
Message-ID: <ZogqNFka2384O0pT@tissot.1015granger.net>
In-Reply-To: <20240705164640.2247869-1-hch@lst.de>
On Fri, Jul 05, 2024 at 06:46:26PM +0200, Christoph Hellwig wrote:
> Implement the get_unique_id method to allow pNFS SCSI layout access to
> NVMe namespaces.
>
> This is the server side implementation of RFC 9561 "Using the Parallel
> NFS (pNFS) SCSI Layout to Access Non-Volatile Memory Express (NVMe)
> Storage Devices".
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> drivers/nvme/host/core.c | 27 +++++++++++++++++++++++++++
> drivers/nvme/host/multipath.c | 16 ++++++++++++++++
> drivers/nvme/host/nvme.h | 3 +++
> 3 files changed, 46 insertions(+)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 782090ce0bc10d..96e0879013b79d 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2230,6 +2230,32 @@ static int nvme_update_ns_info(struct nvme_ns *ns, struct nvme_ns_info *info)
> return ret;
> }
>
> +int nvme_ns_get_unique_id(struct nvme_ns *ns, u8 id[16],
> + enum blk_unique_id type)
> +{
> + struct nvme_ns_ids *ids = &ns->head->ids;
> +
> + if (type != BLK_UID_EUI64)
> + return -EINVAL;
> +
> + if (memchr_inv(ids->nguid, 0, sizeof(ids->nguid))) {
> + memcpy(id, &ids->nguid, sizeof(ids->nguid));
> + return sizeof(ids->nguid);
> + }
> + if (memchr_inv(ids->eui64, 0, sizeof(ids->eui64))) {
> + memcpy(id, &ids->eui64, sizeof(ids->eui64));
> + return sizeof(ids->eui64);
> + }
> +
> + return -EINVAL;
> +}
> +
> +static int nvme_get_unique_id(struct gendisk *disk, u8 id[16],
> + enum blk_unique_id type)
> +{
> + return nvme_ns_get_unique_id(disk->private_data, id, type);
> +}
> +
> #ifdef CONFIG_BLK_SED_OPAL
> static int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
> bool send)
> @@ -2285,6 +2311,7 @@ const struct block_device_operations nvme_bdev_ops = {
> .open = nvme_open,
> .release = nvme_release,
> .getgeo = nvme_getgeo,
> + .get_unique_id = nvme_get_unique_id,
> .report_zones = nvme_report_zones,
> .pr_ops = &nvme_pr_ops,
> };
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index d8b6b4648eaff9..1aed93d792b610 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -427,6 +427,21 @@ static void nvme_ns_head_release(struct gendisk *disk)
> nvme_put_ns_head(disk->private_data);
> }
>
> +static int nvme_ns_head_get_unique_id(struct gendisk *disk, u8 id[16],
> + enum blk_unique_id type)
> +{
> + struct nvme_ns_head *head = disk->private_data;
> + struct nvme_ns *ns;
> + int srcu_idx, ret = -EWOULDBLOCK;
> +
> + srcu_idx = srcu_read_lock(&head->srcu);
> + ns = nvme_find_path(head);
> + if (ns)
> + ret = nvme_ns_get_unique_id(ns, id, type);
> + srcu_read_unlock(&head->srcu, srcu_idx);
> + return ret;
> +}
> +
> #ifdef CONFIG_BLK_DEV_ZONED
> static int nvme_ns_head_report_zones(struct gendisk *disk, sector_t sector,
> unsigned int nr_zones, report_zones_cb cb, void *data)
> @@ -454,6 +469,7 @@ const struct block_device_operations nvme_ns_head_ops = {
> .ioctl = nvme_ns_head_ioctl,
> .compat_ioctl = blkdev_compat_ptr_ioctl,
> .getgeo = nvme_getgeo,
> + .get_unique_id = nvme_ns_head_get_unique_id,
> .report_zones = nvme_ns_head_report_zones,
> .pr_ops = &nvme_pr_ops,
> };
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index f3a41133ac3f97..1907fbc3f5dbb3 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -1062,6 +1062,9 @@ static inline bool nvme_disk_is_ns_head(struct gendisk *disk)
> }
> #endif /* CONFIG_NVME_MULTIPATH */
>
> +int nvme_ns_get_unique_id(struct nvme_ns *ns, u8 id[16],
> + enum blk_unique_id type);
> +
> struct nvme_zone_info {
> u64 zone_size;
> unsigned int max_open_zones;
> --
> 2.43.0
>
>
I am happy to see this. FWIW:

Acked-by: Chuck Lever <chuck.lever@oracle.com>

I will connect with you offline to get your advice on setting up a test
harness similar to the one I have for iSCSI.
--
Chuck Lever
Thread overview: 5+ messages
2024-07-05 16:46 [PATCH] nvme: implement ->get_unique_id Christoph Hellwig
2024-07-05 17:15 ` Chuck Lever [this message]
2024-07-05 17:32 ` Christoph Hellwig
2024-07-06 21:38 ` Sagi Grimberg
2024-07-08 17:50 ` Keith Busch