Linux-NVME Archive on lore.kernel.org
From: Nilay Shroff <nilay@linux.ibm.com>
To: Sagi Grimberg <sagi@grimberg.me>, linux-nvme@lists.infradead.org
Cc: kbusch@kernel.org, hch@lst.de, hare@suse.de,
	chaitanyak@nvidia.com, gjoyce@linux.ibm.com
Subject: Re: [RFC PATCH 4/4] nvme: expose queue information via debugfs
Date: Mon, 27 Apr 2026 17:42:50 +0530	[thread overview]
Message-ID: <4d64ea94-9690-4295-8544-52f3c4580e68@linux.ibm.com> (raw)
In-Reply-To: <63b9aa3e-978c-4448-828f-9119f8e210ef@grimberg.me>

On 4/25/26 3:53 AM, Sagi Grimberg wrote:
> 
> 
> On 20/04/2026 14:49, Nilay Shroff wrote:
>> Add a new debugfs attribute "io_queue_info" to expose per-queue
>> information for NVMe controllers. For NVMe-TCP, this includes the
>> CPU handling each I/O queue and the associated TCP flow (source and
>> destination address/port).
>>
>> This information can be useful for understanding and tuning the
>> interaction between NVMe-TCP I/O queues and network stack components,
>> such as IRQ affinity, RPS/RFS, XPS, or NIC flow steering (ntuple).
>>
>> The data is exported using seq_file interfaces to allow iteration
>> over all controller queues.
> 
> Don't really mind having this. Not sure who will actually go through
> the process of mangling RFS/RPS/XPS based on this 5-tuple, but ok...

Yeah, it may not always be used, but I think in performance-sensitive
workloads users would want to leverage this information for tuning the
I/O stack.
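As a rough sketch of the tuning mentioned above: assuming io_queue_info
emits one line per queue of the form
"queue=N cpu=C src=IP:PORT dst=IP:PORT" (this format is an assumption,
not something the RFC specifies in this thread), one could parse a line
and derive an ntuple steering rule for the return traffic. The NIC name
and the CPU-to-RX-queue mapping below are illustrative only.

```shell
#!/bin/sh
# Sketch only: the io_queue_info line format below is an assumption,
# not the actual output of the RFC patch. One sample line mapping an
# I/O queue to its servicing CPU and TCP 4-tuple:
sample="queue=1 cpu=4 src=192.168.1.10:45678 dst=192.168.1.20:4420"

# Pull out the fields with POSIX parameter expansion.
for kv in $sample; do
    case "$kv" in
        cpu=*) cpu=${kv#cpu=} ;;
        src=*) src=${kv#src=} ;;
        dst=*) dst=${kv#dst=} ;;
    esac
done
src_ip=${src%:*}; src_port=${src#*:}
dst_ip=${dst%:*}; dst_port=${dst#*:}

# Print (rather than run) a flow-steering rule for the inbound side of
# this connection: packets from the target arrive with src/dst swapped
# relative to io_queue_info. "action $cpu" assumes RX queue numbers
# line up with CPUs, which real setups must verify via /proc/interrupts
# and IRQ affinity; "eth0" is likewise a placeholder.
echo "ethtool -N eth0 flow-type tcp4 src-ip $dst_ip dst-ip $src_ip" \
     "src-port $dst_port dst-port $src_port action $cpu"
```

The same parsed fields could just as well feed RPS/RFS sysfs knobs
(e.g. rps_cpus under /sys/class/net/) instead of hardware ntuple rules,
depending on what the NIC supports.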

Thanks,
--Nilay



Thread overview: 17+ messages
2026-04-20 11:49 [RFC PATCH 0/4] nvme-tcp: NIC topology aware I/O queue scaling and queue info export Nilay Shroff
2026-04-20 11:49 ` [RFC PATCH 1/4] nvme-tcp: optionally limit I/O queue count based on NIC queues Nilay Shroff
2026-04-24 13:46   ` Christoph Hellwig
2026-04-27  7:37     ` Nilay Shroff
2026-04-24 22:10   ` Sagi Grimberg
2026-04-27 11:57     ` Nilay Shroff
2026-04-20 11:49 ` [RFC PATCH 2/4] nvme-tcp: add a diagnostic message when NIC queues are underutilized Nilay Shroff
2026-04-24 22:15   ` Sagi Grimberg
2026-04-27 12:14     ` Nilay Shroff
2026-04-20 11:49 ` [RFC PATCH 3/4] nvme: add debugfs helpers for NVMe drivers Nilay Shroff
2026-04-20 11:49 ` [RFC PATCH 4/4] nvme: expose queue information via debugfs Nilay Shroff
2026-04-24 22:23   ` Sagi Grimberg
2026-04-27 12:12     ` Nilay Shroff [this message]
2026-04-22 11:10 ` [RFC PATCH 0/4] nvme-tcp: NIC topology aware I/O queue scaling and queue info export Hannes Reinecke
2026-04-24 22:30   ` Sagi Grimberg
2026-04-27 12:11     ` Nilay Shroff
2026-04-27  6:13   ` Nilay Shroff

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=4d64ea94-9690-4295-8544-52f3c4580e68@linux.ibm.com \
    --to=nilay@linux.ibm.com \
    --cc=chaitanyak@nvidia.com \
    --cc=gjoyce@linux.ibm.com \
    --cc=hare@suse.de \
    --cc=hch@lst.de \
    --cc=kbusch@kernel.org \
    --cc=linux-nvme@lists.infradead.org \
    --cc=sagi@grimberg.me \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox; see mirroring instructions
for how to clone and mirror all data and code used for this inbox.