From: Keith Busch <kbusch@kernel.org>
To: Nilay Shroff <nilay@linux.ibm.com>
Cc: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me,
axboe@fb.com, gjoyce@linux.ibm.com
Subject: Re: [PATCH RFC 0/1] Add visibility for native NVMe multipath using debugfs
Date: Wed, 24 Jul 2024 08:37:12 -0600
Message-ID: <ZqERmEICdLEvQ6aQ@kbusch-mbp>
In-Reply-To: <20240722093124.42581-1-nilay@linux.ibm.com>
On Mon, Jul 22, 2024 at 03:01:08PM +0530, Nilay Shroff wrote:
> # cat /sys/kernel/debug/block/nvme1n1/multipath
> io-policy: queue-depth
> io-path:
> --------
> node  path       ctrl   qdepth  ana-state
> 2     nvme1c1n1  nvme1  1328    optimized
> 2     nvme1c3n1  nvme3  1324    optimized
> 3     nvme1c1n1  nvme1  1328    optimized
> 3     nvme1c3n1  nvme3  1324    optimized
>
> The above output was captured while I/O was running and accessing
> namespace nvme1n1. From the output, we see that the iopolicy is set to
> "queue-depth". When an I/O workload runs on numa node 2, accessing
> namespace "nvme1n1", the I/O path nvme1c1n1/nvme1 has a queue depth of
> 1328 and the other I/O path nvme1c3n1/nvme3 has a queue depth of 1324.
> Both paths are optimized, and it seems both paths are utilized equally
> for forwarding I/O.
You can get the outstanding queue depth from iostats too, and that
doesn't rely on the queue-depth io policy. It does, however, require
that stats be enabled, but that's probably a more reasonable
prerequisite than a particular io policy.
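
For example, a minimal sketch (the device name and counts here are
illustrative; this assumes /sys/block/<dev>/queue/iostats is set to 1):

  # cat /sys/block/nvme1n1/inflight
       661     667

The two columns are the in-flight read and write requests; the same
information is also reported in the in_flight field of
/sys/block/<dev>/stat.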
> The same could be said for a workload running on numa
> node 3.
The output for all numa nodes will be the same regardless of which node
a workload is running on (the accounting isn't per-node), so I'm not
sure outputting qdepth again for each node is useful.
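
As an aside, some of this information is already visible in sysfs
today (the subsystem and path device names below are illustrative):

  # cat /sys/class/nvme-subsystem/nvme-subsys1/iopolicy
  queue-depth
  # cat /sys/block/nvme1c1n1/ana_state
  optimized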
Thread overview: 11+ messages
2024-07-22 9:31 [PATCH RFC 0/1] Add visibility for native NVMe multipath using debugfs Nilay Shroff
2024-07-22 9:31 ` [PATCH RFC 1/1] nvme-multipath: Add debugfs entry for showing multipath info Nilay Shroff
2024-07-22 14:18 ` [PATCH RFC 0/1] Add visibility for native NVMe multipath using debugfs Daniel Wagner
2024-07-23 5:18 ` Nilay Shroff
2024-07-23 7:40 ` Daniel Wagner
2024-07-24 13:41 ` Christoph Hellwig
2024-07-25 6:23 ` Nilay Shroff
2024-07-24 14:37 ` Keith Busch [this message]
2024-07-25 6:20 ` Nilay Shroff
2024-07-28 20:47 ` Sagi Grimberg
2024-07-29 4:50 ` Nilay Shroff