public inbox for linux-kernel@vger.kernel.org
From: Christoph Hellwig <hch@lst.de>
To: "brookxu.cn" <brookxu.cn@gmail.com>
Cc: kbusch@kernel.org, axboe@fb.com, hch@lst.de, sagi@grimberg.me,
	linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-pci@vger.kernel.org, tglx@linutronix.de,
	frederic@kernel.org
Subject: Re: [RFC PATCH] nvme-pci: allowed to modify IRQ affinity in latency sensitive scenarios
Date: Sat, 23 Apr 2022 07:43:31 +0200
Message-ID: <20220423054331.GA17823@lst.de>
In-Reply-To: <1650625106-30272-1-git-send-email-brookxu.cn@gmail.com>

On Fri, Apr 22, 2022 at 06:58:26PM +0800, brookxu.cn wrote:
> From: Chunguang Xu <brookxu@tencent.com>
> 
> In most cases, setting the affinity through managed IRQs is the
> better choice. But in scenarios that use isolcpus, such as DPDK,
> managed IRQs do not distinguish between housekeeping and isolated
> CPUs when selecting a target CPU, so IO interrupts triggered from a
> housekeeping CPU can be routed to an isolated CPU, disturbing the
> tasks running there. Commit 11ea68f553e2 ("genirq, sched/isolation:
> Isolate from handling managed interrupts") tries to fix this in a
> best-effort way, but in a real production environment
> latency-sensitive workloads need a deterministic result. So, similar
> to the mpt3sas driver, we could add a smp_affinity_enable module
> parameter to the NVMe driver.
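
[For illustration only, not the actual RFC patch: in mpt3sas the
smp_affinity_enable module parameter merely controls whether the driver
asks the PCI core for kernel-managed affinity. A comparable, heavily
simplified sketch for nvme-pci could look like the code below. The
gating logic and vector count are assumptions made for brevity;
pci_alloc_irq_vectors_affinity() and nvme_calc_irq_sets() are the
interfaces the real driver already uses.]

static bool smp_affinity_enable = true;	/* hypothetical parameter */
module_param(smp_affinity_enable, bool, 0444);
MODULE_PARM_DESC(smp_affinity_enable,
		"SMP affinity feature enable/disable Default: enable(1)");

static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
{
	struct pci_dev *pdev = to_pci_dev(dev->dev);
	struct irq_affinity affd = {
		.pre_vectors	= 1,	/* admin queue */
		.calc_sets	= nvme_calc_irq_sets,
		.priv		= dev,
	};
	struct irq_affinity *affdp = NULL;
	unsigned int flags = PCI_IRQ_ALL_TYPES;

	/*
	 * Only ask the PCI core for managed (kernel-controlled)
	 * affinity when the module parameter allows it; otherwise the
	 * vectors stay steerable from userspace via
	 * /proc/irq/<n>/smp_affinity.
	 */
	if (smp_affinity_enable) {
		flags |= PCI_IRQ_AFFINITY;
		affdp = &affd;
	}

	return pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
					      flags, affdp);
}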

This kind of boilerplate code in random drivers is not sustainable.

I really think we need to handle this whole housekeeping-CPU case in
common code: that is, designate CPUs as housekeeping vs.
non-housekeeping and let the generic affinity assignment code deal
with it, solving the problem for all drivers through the proper
affinity masks instead of scattering slightly different overrides
across every driver anyone ever wants to use on such a system.
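
[For context, the generic spreading code lives in
kernel/irq/affinity.c, and the kernel already exposes the housekeeping
set via housekeeping_cpumask(). A minimal sketch of the direction
suggested above, where the helper name and its wiring into
irq_create_affinity_masks() are hypothetical, and HK_TYPE_MANAGED_IRQ
is the v5.18 spelling of the older HK_FLAG_MANAGED_IRQ:]

#include <linux/cpumask.h>
#include <linux/sched/isolation.h>

/*
 * Hypothetical helper: pick the base mask that managed-IRQ spreading
 * is allowed to use.  With "isolcpus=managed_irq,..." this would
 * exclude isolated CPUs up front for every driver using managed
 * interrupts, rather than only deflecting delivery at activation time
 * as commit 11ea68f553e2 does.
 */
static const struct cpumask *managed_irq_base_mask(void)
{
	if (housekeeping_enabled(HK_TYPE_MANAGED_IRQ))
		return housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);
	return cpu_possible_mask;
}

[irq_create_affinity_masks() would then spread queues over this mask
instead of cpu_possible_mask, with no per-driver opt-outs needed.]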


Thread overview: 2+ messages
2022-04-22 10:58 [RFC PATCH] nvme-pci: allowed to modify IRQ affinity in latency sensitive scenarios brookxu.cn
2022-04-23  5:43 ` Christoph Hellwig [this message]
