From: Ming Lei <ming.lei@redhat.com>
To: Daniel Wagner <dwagner@suse.de>
Cc: Hannes Reinecke <hare@suse.de>, Christoph Hellwig <hch@lst.de>,
Jens Axboe <axboe@kernel.dk>, Keith Busch <kbusch@kernel.org>,
Sagi Grimberg <sagi@grimberg.me>,
Frederic Weisbecker <fweisbecker@suse.com>,
Mel Gorman <mgorman@suse.de>,
Sridhar Balaraman <sbalaraman@parallelwireless.com>,
"brookxu.cn" <brookxu.cn@gmail.com>,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
linux-nvme@lists.infradead.org,
Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH 1/3] sched/isolation: Add io_queue housekeeping option
Date: Sun, 30 Jun 2024 21:47:51 +0800
Message-ID: <ZoFiB3wDIBbFGONn@fedora>
In-Reply-To: <rk2ywo6y3ppki2gfogter2p2p2b556kmawqsuqrif3xcalsc2m@aosprmhypcav>
On Tue, Jun 25, 2024 at 10:57:42AM +0200, Daniel Wagner wrote:
> On Tue, Jun 25, 2024 at 09:07:30AM GMT, Thomas Gleixner wrote:
> > On Tue, Jun 25 2024 at 08:37, Hannes Reinecke wrote:
> > > On 6/24/24 11:00, Daniel Wagner wrote:
> > >> On Mon, Jun 24, 2024 at 10:47:05AM GMT, Christoph Hellwig wrote:
> > >>>> Do you think we should introduce a new type or just use the existing
> > >>>> managed_irq for this?
> > >>>
> > >>> No idea really. What was the reason for adding a new one?
> > >>
> > >> I've added the new type so that the current behavior of spreading the
> > >> queues over to the isolated CPUs is still possible. I don't know if this
> > >> is a valid use case or not. I just didn't want to kill this feature
> > >> without having discussed it first.
> > >>
> > >> But if we agree this doesn't really make sense with isolcpus, then I
> > >> think we should use the managed_irq one as nvme-pci is using the managed
> > >> IRQ API.
> > >>
> > > I'm in favour of expanding/modifying the managed irq case.
> > > For managed irqs the driver will be running on the housekeeping CPUs
> > > only, and has no way of even installing irq handlers for the isolcpus.
> >
> > Yes, that's preferred, but please double check with the people who
> > introduced that in the first place.
>
> The relevant code was added by Ming:
>
> 11ea68f553e2 ("genirq, sched/isolation: Isolate from handling managed
> interrupts")
>
> [...] it can happen that a managed interrupt whose affinity
> mask contains both isolated and housekeeping CPUs is routed to an isolated
> CPU. As a consequence IO submitted on a housekeeping CPU causes interrupts
> on the isolated CPU.
>
> Add a new sub-parameter 'managed_irq' for 'isolcpus' and the corresponding
> logic in the interrupt affinity selection code.
>
> The subparameter indicates to the interrupt affinity selection logic that
> it should try to avoid the above scenario.
> [...]
>
> From the commit message I read that the original intent is that managed_irq
> should avoid spreading queues on isolated CPUs.
>
> Ming, do you agree to use the managed_irq mask to limit the queue
> spreading on isolated CPUs? It would make the io_queue option obsolete.
Yes, managed_irq was introduced to avoid spreading managed interrupts on
isolated CPUs, and it is supposed to work well.
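For reference, the sub-parameter added by commit 11ea68f553e2 is enabled on
the kernel command line; the CPU range below is just an illustration:

```
isolcpus=managed_irq,2-7
```

With this, the interrupt affinity selection logic tries to keep managed
interrupts of multiqueue devices away from CPUs 2-7 whenever the affinity
mask also contains housekeeping CPUs.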
The only problem with managed_irq is that queues are still spread over
isolated CPUs, but those CPUs are excluded from the IRQ effective affinity
masks, so they do not actually handle the interrupts.
Thanks,
Ming
Thread overview: 15+ messages
2024-06-21 13:53 [PATCH 0/3] nvme-pci: honor isolcpus configuration Daniel Wagner
2024-06-21 13:53 ` [PATCH 1/3] sched/isolation: Add io_queue housekeeping option Daniel Wagner
2024-06-22 5:11 ` Christoph Hellwig
2024-06-24 7:13 ` Daniel Wagner
2024-06-24 8:47 ` Christoph Hellwig
2024-06-24 9:00 ` Daniel Wagner
2024-06-25 6:37 ` Hannes Reinecke
2024-06-25 7:07 ` Thomas Gleixner
2024-06-25 8:57 ` Daniel Wagner
2024-06-30 13:47 ` Ming Lei [this message]
2024-06-21 13:53 ` [PATCH 2/3] nvme-pci: limit queue count to housekeeping cpus Daniel Wagner
2024-06-22 5:14 ` Christoph Hellwig
2024-06-23 7:03 ` Sagi Grimberg
2024-06-21 13:53 ` [PATCH 3/3] lib/group_cpus.c: honor housekeeping config when grouping CPUs Daniel Wagner
2024-06-22 5:13 ` Christoph Hellwig