From: Tero Kristo <tero.kristo@linux.intel.com>
To: Christoph Hellwig <hch@lst.de>
Cc: linux-kernel@vger.kernel.org, axboe@kernel.dk,
linux-nvme@lists.infradead.org, sagi@grimberg.me,
kbusch@kernel.org
Subject: Re: [PATCH 1/1] nvme-pci: Add CPU latency pm-qos handling
Date: Fri, 18 Oct 2024 10:58:36 +0300 [thread overview]
Message-ID: <316b41e631572a02a89bab2456cfb373f3e667ae.camel@linux.intel.com> (raw)
In-Reply-To: <20241015132928.GA3961@lst.de>
On Tue, 2024-10-15 at 15:29 +0200, Christoph Hellwig wrote:
> On Tue, Oct 15, 2024 at 12:25:37PM +0300, Tero Kristo wrote:
> > I've been giving this some thought offline, but can't really think
> > of how this could be done in the generic layers; the code needs to
> > figure out the interrupt that gets fired by the activity, to prevent
> > the CPU that is going to handle that interrupt from going into deep
> > idle, potentially ruining the latency and throughput of the request.
> > The knowledge of this interrupt mapping only resides at the driver
> > level, in this case NVMe.
> >
> > One thing that could be done is to prevent the whole feature from
> > being used on setups where the number of CPUs per IRQ is above some
> > threshold; let's say 4 as an example.
>
> As a disclaimer I don't really understand the PM QOS framework, just
> the NVMe driver and block layer.
>
> With that my gut feeling is that all this latency management should
> be driven by the blk_mq_hctx structure, the block layer equivalent
> to a queue. And instead of having a per-cpu array of QOS requests
> per device, there should be one per CPU in the actual mask of the
> hctx, so that you only have to iterate this local shared data
> structure.
>
> Preferably there would be one single active check per hctx and
> not one per cpu, e.g. when the block layer submits commands
> it has to do one single check instead of an iteration. Similarly
> the block layer code would time out the activity once per hctx,
> and only then iterate the (usually few) CPUs per hctx.
>
Thanks for the feedback, I have now reworked and retested my patches
against blk-mq, and just posted them to the block mailing list as well.
-Tero
Thread overview: 9+ messages
2024-10-04 10:09 [PATCH 0/1] nvme-pci: Add CPU latency pm-qos handling Tero Kristo
2024-10-04 10:09 ` [PATCH 1/1] " Tero Kristo
2024-10-07 6:19 ` Christoph Hellwig
2024-10-09 6:45 ` Tero Kristo
2024-10-09 8:00 ` Christoph Hellwig
2024-10-09 8:24 ` Tero Kristo
2024-10-15 9:25 ` Tero Kristo
2024-10-15 13:29 ` Christoph Hellwig
2024-10-18 7:58 ` Tero Kristo [this message]