From: Ming Lei <ming.lei@redhat.com>
To: "Guo, Wangyang" <wangyang.guo@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Thomas Gleixner <tglx@linutronix.de>,
Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@fb.com>,
Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
virtualization@lists.linux-foundation.org,
linux-block@vger.kernel.org, Tianyou Li <tianyou.li@intel.com>,
Tim Chen <tim.c.chen@linux.intel.com>,
Dan Liang <dan.liang@intel.com>
Subject: Re: [PATCH RESEND] lib/group_cpus: make group CPU cluster aware
Date: Tue, 11 Nov 2025 20:08:39 +0800 [thread overview]
Message-ID: <aRMnR5DRdsU8lGtU@fedora> (raw)
In-Reply-To: <b94a0d74-0770-4751-9064-2ef077fada14@intel.com>
On Tue, Nov 11, 2025 at 01:31:04PM +0800, Guo, Wangyang wrote:
> On 11/11/2025 11:25 AM, Ming Lei wrote:
> > On Tue, Nov 11, 2025 at 10:06:08AM +0800, Wangyang Guo wrote:
> > > As CPU core counts increase, the number of NVMe IRQs may be smaller than
> > > the total number of CPUs. This forces multiple CPUs to share the same
> > > IRQ. If the IRQ affinity and the CPU’s cluster do not align, a
> > > performance penalty can be observed on some platforms.
> >
> > Can you add details on why/how the CPU cluster isn't aligned with the
> > IRQ affinity? And how is the performance penalty caused?
>
> The Intel Xeon E platform packs 4 CPU cores into 1 module (cluster)
> sharing an L2 cache. Say there are 40 CPUs in 1 NUMA domain and 11 IRQs
> to dispatch. The existing algorithm maps the first 7 IRQs with 4 CPUs
> each and the remaining 4 IRQs with 3 CPUs each. The last 4 IRQs may hit
> a cross-cluster issue. For example, the 9th IRQ is pinned to CPU32;
> CPU31, which maps to the same IRQ, then incurs cross-L2 memory access.
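(For concreteness, the split described above can be reproduced with a
minimal userspace sketch of my own -- assuming groups are sized
ncpus/ngroups with the remainder spread over the first groups, and CPUs
assigned in ID order; the real lib/group_cpus.c is NUMA-aware, so this
is a simplification:)

	#include <stdio.h>

	int main(void)
	{
		int ncpus = 40, ngrps = 11, cluster = 4;
		int per = ncpus / ngrps;	/* 3 */
		int rem = ncpus % ngrps;	/* 7: first 7 groups get 4 CPUs */
		int cpu = 0;

		for (int g = 0; g < ngrps; g++) {
			int len = per + (g < rem);
			int first = cpu, last = cpu + len - 1;

			printf("IRQ %2d -> CPU %2d..%2d%s\n", g + 1, first, last,
			       first / cluster != last / cluster ?
					"  (crosses a 4-CPU L2 cluster)" : "");
			cpu += len;
		}
		return 0;
	}

With this layout, IRQs 9 and 10 end up straddling cluster boundaries,
matching the CPU31/CPU32 example above.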
CPUs sharing an L2 are usually few in number, and it is common for one
queue mapping to include CPUs from different L2 domains.
So how much does crossing L2 hurt IO perf?
They should still share the same L3 cache, so cpus_share_cache() should
return true when the IO completes on a CPU that belongs to a different L2
than the submission CPU, and remote completion via IPI won't be triggered.
From my observation, remote completion does hurt NVMe IO performance very
much; AMD's cross-L3 mapping is one example.
Thanks,
Ming
Thread overview: 11+ messages
2025-11-11 2:06 [PATCH RESEND] lib/group_cpus: make group CPU cluster aware Wangyang Guo
2025-11-11 3:25 ` Ming Lei
2025-11-11 5:31 ` Guo, Wangyang
2025-11-11 12:08 ` Ming Lei [this message]
2025-11-12 3:02 ` Guo, Wangyang
2025-11-13 1:38 ` Ming Lei
2025-11-13 3:32 ` Guo, Wangyang
2025-11-18 6:29 ` Guo, Wangyang
2025-11-19 1:52 ` Ming Lei
2025-11-24 7:58 ` Guo, Wangyang
2025-12-08 2:47 ` Guo, Wangyang