From: "Guo, Wangyang" <wangyang.guo@intel.com>
To: Radu Rendec <rrendec@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>,
linux-kernel@vger.kernel.org, Tianyou Li <tianyou.li@intel.com>,
Tim Chen <tim.c.chen@linux.intel.com>,
Dan Liang <dan.liang@intel.com>
Subject: Re: [PATCH] lib/group_cpus: make group CPU cluster aware
Date: Sat, 10 Jan 2026 10:24:31 +0800
Message-ID: <8ba50768-2f05-40a8-b8e8-4364f33ad269@intel.com>
In-Reply-To: <b4a61e3d17db7666a2b523fc57fdbb9356eb5191.camel@redhat.com>

On 1/10/2026 3:13 AM, Radu Rendec wrote:
> Hi all,
>
> On Mon, 2025-12-22 at 11:03 +0800, Guo, Wangyang wrote:
>> On 12/22/2025 3:10 AM, Andrew Morton wrote:
>>> On Fri, 24 Oct 2025 10:30:38 +0800 Wangyang Guo <wangyang.guo@intel.com> wrote:
>>>
>>>> As CPU core counts increase, the number of NVMe IRQs may be smaller than
>>>> the total number of CPUs. This forces multiple CPUs to share the same
>>>> IRQ. If the IRQ affinity and the CPU’s cluster do not align, a
>>>> performance penalty can be observed on some platforms.
>>>
>>> It would be helpful to quantify "performance penalty". At least give
>>> readers some approximate understanding of how serious this issue is,
>>> please.
>>>
>> Thanks for the reminder; I will update the changelog in the next
>> version. We see a performance difference of 15%+ with FIO
>> (libaio, randread, bs=8k).
>>
>>>> This patch improves IRQ affinity by grouping CPUs by cluster within each
>>>> NUMA domain, ensuring better locality between CPUs and their assigned
>>>> NVMe IRQs.
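
(Aside, added for readers of the archive: the grouping idea is roughly
the sketch below. This is illustrative only, not the code from the
patch; group_node_cpus_by_cluster() is a made-up name for this sketch,
while topology_cluster_cpumask() is the real kernel topology accessor.)

#include <linux/cpumask.h>
#include <linux/topology.h>
#include <linux/slab.h>

/*
 * Sketch: peel the CPUs of one NUMA node off one cluster at a time,
 * so CPUs that share a cluster land in the same IRQ group.
 */
static void group_node_cpus_by_cluster(const struct cpumask *node_mask,
				       struct cpumask *groups, int ngroups)
{
	cpumask_var_t remaining;
	int cpu, g = 0;

	if (!alloc_cpumask_var(&remaining, GFP_KERNEL))
		return;
	cpumask_copy(remaining, node_mask);

	for_each_cpu(cpu, remaining) {
		/* Still-ungrouped cluster siblings of @cpu go into group @g. */
		cpumask_and(&groups[g], remaining,
			    topology_cluster_cpumask(cpu));
		cpumask_andnot(remaining, remaining, &groups[g]);
		g = (g + 1) % ngroups;
	}

	free_cpumask_var(remaining);
}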
>>>>
>>>> Reviewed-by: Tianyou Li <tianyou.li@intel.com>
>>>> Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
>>>> Tested-by: Dan Liang <dan.liang@intel.com>
>>>> Signed-off-by: Wangyang Guo <wangyang.guo@intel.com>
>>>
>>> Patch hasn't attracted additional review so I'll queue this version for
>>> some testing in mm.git's mm-nonmm-unstable branch. I'll add a
>>> note-to-self that a changelog addition is desirable.
>>
>> Thanks a lot for your time and support! Please let me know if you
>> have any further comments or guidance; any feedback is appreciated.
>
> With this patch applied, I see a weird issue in a qemu x86_64 vm if I
> start it with a higher number of max CPUs than active CPUs, for example
> `-smp 4,maxcpus=8` on the qemu command line.
>
> What I see is the `while (1)` loop in alloc_cluster_groups() spinning
> forever. Removing the `maxcpus=8` from the qemu command line fixes the
> issue but so does reverting the patch :)
Thanks for the report. I will investigate this problem.
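
My first, untested guess at the failure mode (an illustrative sketch of
the loop shape only, not the actual alloc_cluster_groups() code): with
-smp 4,maxcpus=8, CPUs 4-7 are possible but offline, so a mask seeded
from the possible CPUs may never drain if each pass only clears online
cluster siblings:

	cpumask_copy(remaining, cpu_possible_mask);	/* bits 0-7 set */
	while (1) {
		if (cpumask_empty(remaining))
			break;
		cpu = cpumask_first(remaining);
		/*
		 * Hypothetical: if topology_cluster_cpumask() is empty
		 * for an offline CPU, nothing gets cleared here...
		 */
		cpumask_andnot(remaining, remaining,
			       topology_cluster_cpumask(cpu));
		/* ...so 'cpu' stays set and the loop spins forever. */
	}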
BR
Wangyang