From: Nilay Shroff <nilay@linux.ibm.com>
To: Hannes Reinecke <hare@suse.de>, linux-nvme@lists.infradead.org
Cc: dwagner@suse.de, hch@lst.de, kbusch@kernel.org, sagi@grimberg.me,
axboe@fb.com, gjoyce@linux.ibm.com
Subject: Re: [PATCHv4 RFC 1/1] nvme-multipath: Add sysfs attributes for showing multipath info
Date: Wed, 16 Oct 2024 08:49:08 +0530 [thread overview]
Message-ID: <2476d00b-35d3-4516-b76c-9b76a499e4fc@linux.ibm.com> (raw)
In-Reply-To: <48bf8a1d-3a47-4c94-8cd0-7d8c4ea5d3ab@linux.ibm.com>
Hi Hannes,
On 10/7/24 21:03, Nilay Shroff wrote:
>
>
> On 10/7/24 19:34, Hannes Reinecke wrote:
>> On 10/7/24 15:47, Nilay Shroff wrote:
>>>
>>>
>>> On 10/7/24 15:44, Hannes Reinecke wrote:
>>>> On 9/11/24 08:26, Nilay Shroff wrote:
>>>>> NVMe native multipath supports different I/O policies for selecting the
>>>>> I/O path; however, we don't have any visibility into which path is being
>>>>> selected by the multipath code for forwarding I/O.
>>>>> This patch helps add that visibility by adding new sysfs attribute files
>>>>> named "numa_nodes" and "queue_depth" under each namespace block device
>>>>> path /sys/block/nvmeXcYnZ/. We also create a "multipath" sysfs directory
>>>>> under the head disk node and then, from this directory, add a link to each
>>>>> namespace path device this head disk node points to.
>>>>>
>>>>> For instance, /sys/block/nvmeXnY/multipath/ would contain a soft link to
>>>>> each path the head disk node <nvmeXnY> points to:
>>>>>
>>>>> $ ls -l /sys/block/nvme1n1/multipath/
>>>>> nvme1c1n1 -> ../../../../../pci052e:78/052e:78:00.0/nvme/nvme1/nvme1c1n1
>>>>> nvme1c3n1 -> ../../../../../pci058e:78/058e:78:00.0/nvme/nvme3/nvme1c3n1
>>>>>
>>>>> For the round-robin I/O policy, we can easily infer from the above output
>>>>> that I/O workload targeted at nvme1n1 would toggle across the paths
>>>>> nvme1c1n1 and nvme1c3n1.
>>>>>
>>>>> For the numa I/O policy, the "numa_nodes" attribute file shows the NUMA
>>>>> nodes preferred by the respective block device path. The numa_nodes value
>>>>> is a comma-delimited list of nodes or an A-B range of nodes.
>>>>>
>>>>> For the queue-depth I/O policy, the "queue_depth" attribute file shows the
>>>>> number of active/in-flight I/O requests currently queued for each path.
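>>>>>
>>>>> As a purely illustrative example (the values shown below are hypothetical,
>>>>> not taken from a real system), the new per-path attributes could be read
>>>>> like this:
>>>>>
>>>>>   $ cat /sys/block/nvme1c1n1/numa_nodes
>>>>>   0-1
>>>>>   $ cat /sys/block/nvme1c3n1/numa_nodes
>>>>>   2,3
>>>>>   $ cat /sys/block/nvme1c1n1/queue_depth
>>>>>   518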
>>>>>
>>>>> Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
>>>>> ---
>>>>>  drivers/nvme/host/core.c      |  3 ++
>>>>>  drivers/nvme/host/multipath.c | 71 +++++++++++++++++++++++++++++++++++
>>>>>  drivers/nvme/host/nvme.h      | 20 ++++++++--
>>>>>  drivers/nvme/host/sysfs.c     | 20 ++++++++++
>>>>>  4 files changed, 110 insertions(+), 4 deletions(-)
>>>>>
>>>>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>>>>> index 983909a600ad..6be29fd64236 100644
>>>>> --- a/drivers/nvme/host/core.c
>>>>> +++ b/drivers/nvme/host/core.c
>>>>> @@ -3951,6 +3951,9 @@ static void nvme_ns_remove(struct nvme_ns *ns)
>>>>>         if (!nvme_ns_head_multipath(ns->head))
>>>>>                 nvme_cdev_del(&ns->cdev, &ns->cdev_device);
>>>>> +
>>>>> +       nvme_mpath_remove_sysfs_link(ns);
>>>>> +
>>>>>         del_gendisk(ns->disk);
>>>>>         mutex_lock(&ns->ctrl->namespaces_lock);
>>>>> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
>>>>> index 518e22dd4f9b..7d9c36a7a261 100644
>>>>> --- a/drivers/nvme/host/multipath.c
>>>>> +++ b/drivers/nvme/host/multipath.c
>>>>> @@ -654,6 +654,8 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
>>>>>                 nvme_add_ns_head_cdev(head);
>>>>>         }
>>>>> +       nvme_mpath_add_sysfs_link(ns);
>>>>> +
>>>>>         mutex_lock(&head->lock);
>>>>>         if (nvme_path_is_optimized(ns)) {
>>>>>                 int node, srcu_idx;
>>>> Nearly there.
>>> Thank you for your review comments!
>>>
>>>>
>>>> You can only call 'nvme_mpath_add_sysfs_link()' if the gendisk on the head has been created.
>>>>
>>>> And there is one branch in nvme_mpath_add_disk():
>>>>
>>>>         if (desc.state) {
>>>>                 /* found the group desc: update */
>>>>                 nvme_update_ns_ana_state(&desc, ns);
>>>>
>>>> which does not go via nvme_mpath_set_live(), yet a device link would need to be created here, too.
>>>> But you can't call nvme_mpath_add_sysfs_link() from nvme_mpath_add_disk(), as the actual gendisk
>>>> might only be created later on during ANA log parsing.
>>>> It is a tangle, and I haven't found a good way out of this.
>>>> But I am _very much_ in favour of having these links, so please
>>>> update your patch.
>>> In case the disk supports an ANA group then yes, it would go through nvme_mpath_add_disk() -> nvme_update_ns_ana_state(),
>>> and later nvme_update_ns_ana_state() would also fall through to nvme_mpath_set_live(), where we call
>>> nvme_mpath_add_sysfs_link().
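>>>
>>> A rough sketch of that flow (simplified and from memory, not the exact
>>> upstream code in drivers/nvme/host/multipath.c, so treat it as illustrative):
>>>
>>>     static void nvme_update_ns_ana_state(struct nvme_ana_group_desc *desc,
>>>                                          struct nvme_ns *ns)
>>>     {
>>>             ns->ana_grpid = le32_to_cpu(desc->grpid);
>>>             ns->ana_state = desc->state;
>>>             clear_bit(NVME_NS_ANA_PENDING, &ns->flags);
>>>
>>>             /*
>>>              * Only a live ANA state on a live controller ends up in
>>>              * nvme_mpath_set_live(), which registers the head gendisk
>>>              * and, with this patch, creates the sysfs link.
>>>              */
>>>             if (nvme_state_is_live(ns->ana_state) &&
>>>                 nvme_ctrl_state(ns->ctrl) == NVME_CTRL_LIVE)
>>>                     nvme_mpath_set_live(ns);
>>>     }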
>>>
>>> So I think that in any case, while a multipath namespace is being created, it has to go through the
>>> nvme_mpath_set_live() function. And as we see in nvme_mpath_set_live(), we only create the
>>> sysfs link after the gendisk on the head is created. Do you agree with this? Or please let
>>> me know if you have any further questions.
>>>
>> But it doesn't:
>>
>>     if (nvme_state_is_live(ns->ana_state) &&
>>         nvme_ctrl_state(ns->ctrl) == NVME_CTRL_LIVE)
>>             nvme_mpath_set_live(ns);
>>
>> so if a namespace is being created for which nvme_state_is_live() returns 'false', nvme_mpath_set_live() is not called, and no links are created.
>> This can happen, e.g., if the first namespace to be encountered is in any state other than 'optimized' or 'non-optimized'.
>>
>
> OK, I got what you're suggesting here. So in this particular case, when the ANA state of a shared namespace
> is neither "optimized" nor "non-optimized", we would have the gendisk for the shared namespace (i.e.
> nvmeXcYnZ) created, but we would not yet have the gendisk for the corresponding head node (i.e. nvmeXnY) created.
> So without the gendisk for the head node created, how could we create a link from it to the namespace node?
>
> The link from the head node gendisk would eventually be created when the ANA state of the namespace transitions
> to the "optimized" or "non-optimized" state. I think it's not possible anyway to have the multipathing function
> enabled until the gendisk for the head node is created, is it? So I don't yet understand why we really
> need the device link created if the gendisk for the head node is not ready. Am I missing anything here?
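>
> Just to make the idea concrete, the kind of guard I have in mind would look
> roughly like this (purely a sketch of my intent; the helper name is the one
> the patch introduces, but the body here is hypothetical and not the code in
> the patch):
>
>     void nvme_mpath_add_sysfs_link(struct nvme_ns *ns)
>     {
>             struct nvme_ns_head *head = ns->head;
>
>             /*
>              * Head gendisk not registered yet: nothing to link from. The
>              * link gets created later, once the ANA state goes live and
>              * nvme_mpath_set_live() registers the head disk.
>              */
>             if (!test_bit(NVME_NSHEAD_DISK_LIVE, &head->flags))
>                     return;
>
>             /*
>              * Assumes a "multipath" attribute group was already registered
>              * on the head disk (hypothetical detail for illustration).
>              */
>             sysfs_add_link_to_group(&disk_to_dev(head->disk)->kobj,
>                                     "multipath",
>                                     &disk_to_dev(ns->disk)->kobj,
>                                     ns->disk->disk_name);
>     }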
>
>> Cheers,
>>
>> Hannes
>
A gentle ping about this thread. Does the change look okay, or do you have further comments?
Am I missing something here? If so, could you please help me with it?
Thanks,
--Nilay