public inbox for linux-kernel@vger.kernel.org
From: Benjamin Meier <benjamin.meier70@gmail.com>
To: ming.lei@redhat.com
Cc: benjamin.meier70@gmail.com, hch@lst.de, kbusch@kernel.org,
	kbusch@meta.com, linux-kernel@vger.kernel.org,
	linux-nvme@lists.infradead.org, tglx@linutronix.de
Subject: Re: [PATCH 2/2] nvme-pci: allow unmanaged interrupts
Date: Mon, 13 May 2024 10:59:02 +0200	[thread overview]
Message-ID: <0ed958b4-cbc9-4136-9113-e7a43a3f91e6@gmail.com> (raw)
In-Reply-To: <ZkHR1L/cJesDEn60@fedora>

 > > The application which we develop and maintain (in the company I work)
 > > has very high requirements regarding latency. We have some isolated cores
 >
 > Are these isolated cores controlled by kernel command line `isolcpus=`?

Yes, exactly.

 > > and we run our application on those.
 > >
 > > Our system is using kernel 5.4 which unfortunately does not support
 > > "isolcpus=managed_irq". Actually, we did not even know about that
 > > option, because we are focussed on kernel 5.4. It solves part
 > > of our problem, but being able to specify where exactly interrupts
 > > are running is still superior in our opinion.
 > >
 > > E.g. assume the number of house-keeping cores is small, because we
 > > want to have full control over the system. In our case we have threads
 > > of different priorities where some get an exclusive core. Some other threads
 > > share a core (or a group of cores) with other threads. Now we are still
 > > happy to assign some interrupts to some of the cores which we consider as
 > > "medium-priority". Due to the small number of non-isolated cores, it can
 >
 > So these "medium-priority" cores belong to isolated cpu list, you still expect
 > NVMe interrupts can be handled on these cpu cores, do I understand correctly?

We want to keep the NVMe interrupts away from the "high-priority" cores.
Having noise on those is quite bad for us, so we wanted to move some
interrupts to house-keeping cores and, if needed (due to performance issues),
keep some on those "medium-priority" isolated cores. NVMe is not the highest
priority for us, but running too many interrupts on the house-keeping cores
could also be bad.

 > If yes, I think your case still can be covered with 'isolcpus=managed_irq' which
 > needn't to be same with cpu cores specified from `isolcpus=`, such as
 > excluding medium-priority cores from 'isolcpus=managed_irq', and
 > meantime include them in plain `isolcpus=`.

Unfortunately, our kernel version (5.4) does not support "managed_irq",
which is why we are happy with the patch. However, I see that for newer
kernel versions the already existing arguments could be sufficient to do
everything we need.
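
For newer kernels that do have the flag, a minimal boot-line sketch would
look like the following (the CPU list 4-7 is only an assumption for
illustration; adjust it to your topology):

```
isolcpus=domain,managed_irq,4-7
```

The `domain` flag removes the listed cores from the scheduler domains (the
classic isolcpus behaviour), and `managed_irq` additionally keeps managed
device interrupts off them.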

 > > be tricky to assign all interrupts to those without a performance-penalty.
 > >
 > > Given these requirements, manually specifying interrupt/core assignments
 > > would offer greater flexibility and control over system performance.
 > > Moreover, the proposed code changes appear minimal and have no
 > > impact on existing functionalities.
 >
 > Looks your main concern is performance, but as Keith mentioned, the proposed
 > change may degrade nvme perf too:
 >
 > https://lore.kernel.org/linux-nvme/Zj6745UDnwX1BteO@kbusch-mbp.dhcp.thefacebook.com/

Yes, but for NVMe it is not that critical. The most important point for us
is to keep those interrupts away from our "high-priority" cores. We still
wanted to have control over where the interrupts run, partly because we
simply did not know about the "managed_irq" option.
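
To make this concrete, here is a sketch of what we would do from userspace
once the nvme IRQs are unmanaged (as the patch allows). The "nvme" match
pattern and the housekeeping list "0-3" are assumptions for illustration,
not part of the patch:

```shell
#!/bin/sh
# Sketch: steer every interrupt whose name contains "nvme" onto a
# housekeeping CPU list via the /proc affinity interface. Kernel-managed
# IRQs reject this write with EIO; unmanaged ones accept it, which is
# exactly the behaviour the patch under discussion enables for nvme.

steer_nvme_irqs() {  # $1 = housekeeping cpu list, e.g. "0-3"
    # First colon-separated field of /proc/interrupts is the IRQ number.
    awk -F: '/nvme/ { gsub(/ /, "", $1); print $1 }' /proc/interrupts |
    while read -r irq; do
        echo "$1" > "/proc/irq/$irq/smp_affinity_list" 2>/dev/null ||
            echo "IRQ $irq: affinity write rejected (still managed?)" >&2
    done
}

steer_nvme_irqs "0-3"
```

On a box without the patch, every write fails and the script only prints
warnings, which is a quick way to see which IRQs are still managed.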

Thanks,
Benjamin


Thread overview: 21+ messages
2024-05-10 14:14 [PATCH 1/2] genirq/affinity: remove rsvd check against minvec Keith Busch
2024-05-10 14:14 ` [PATCH 2/2] nvme-pci: allow unmanaged interrupts Keith Busch
2024-05-10 15:10   ` Christoph Hellwig
2024-05-10 16:20     ` Keith Busch
2024-05-10 23:50       ` Ming Lei
2024-05-11  0:41         ` Keith Busch
2024-05-11  0:59           ` Ming Lei
2024-05-12  6:35           ` Thomas Gleixner
2024-05-20 15:37             ` Christoph Hellwig
2024-05-20 20:34               ` Thomas Gleixner
2024-05-21  2:31               ` Ming Lei
2024-05-21  8:38                 ` Thomas Gleixner
2024-05-21 10:06                   ` Frederic Weisbecker
2024-05-13  7:33     ` Benjamin Meier
2024-05-13  8:39       ` Ming Lei
2024-05-13  8:59         ` Benjamin Meier [this message]
2024-05-13  9:25           ` Ming Lei
2024-05-13 12:33             ` Benjamin Meier
2024-05-13 13:12     ` Bart Van Assche
2024-05-10 15:15 ` [PATCH 1/2] genirq/affinity: remove rsvd check against minvec Ming Lei
2024-05-10 16:47   ` Keith Busch
