From: Zhao Liu <zhao1.liu@intel.com>
To: "Moger, Babu" <babu.moger@amd.com>
Cc: "Paolo Bonzini" <pbonzini@redhat.com>,
"Daniel P . Berrangé" <berrange@redhat.com>,
"Igor Mammedov" <imammedo@redhat.com>,
"Ewan Hai" <ewanhai-oc@zhaoxin.com>,
"Xiaoyao Li" <xiaoyao.li@intel.com>, "Tao Su" <tao1.su@intel.com>,
"Yi Lai" <yi1.lai@intel.com>, "Dapeng Mi" <dapeng1.mi@intel.com>,
qemu-devel@nongnu.org
Subject: Re: [PATCH v2 7/7] i386/cpu: Honor maximum value for CPUID.8000001DH.EAX[25:14]
Date: Mon, 14 Jul 2025 23:41:18 +0800
Message-ID: <aHUlHjzYWUM/ryQy@intel.com>
In-Reply-To: <d19082cc-6662-4299-89c6-94657ce672f7@amd.com>
On Mon, Jul 14, 2025 at 09:51:25AM -0500, Moger, Babu wrote:
> Date: Mon, 14 Jul 2025 09:51:25 -0500
> From: "Moger, Babu" <babu.moger@amd.com>
> Subject: Re: [PATCH v2 7/7] i386/cpu: Honor maximum value for
> CPUID.8000001DH.EAX[25:14]
>
> Hi Zhao,
>
> On 7/14/25 03:08, Zhao Liu wrote:
> > CPUID.8000001DH:EAX[25:14] is "NumSharingCache", and the number of
> > logical processors sharing this cache is the value of this field
> > incremented by 1. Because of its width limitation, the maximum value
> > currently supported is 4095.
> >
> > Though at present Q35 supports up to 4096 CPUs, by constructing a
> > specific topology, the width of the APIC ID can be extended beyond 12
> > bits. For example, using `-smp threads=33,cores=9,modules=9` results in
> > a die level offset of 6 + 4 + 4 = 14 bits, which can also cause
> > overflow. Check and honor the maximum value as CPUID.04H did.
> >
> > Cc: Babu Moger <babu.moger@amd.com>
> > Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
> > ---
> > Changes Since RFC v1 [*]:
> > * Correct the RFC's description, now there's the overflow case. Provide
> > an overflow example.
> >
> > RFC:
> > * Although there are currently no overflow cases, to avoid any
> > potential issue, add the overflow check, just as I did for Intel.
> >
> > [*]: https://lore.kernel.org/qemu-devel/20250227062523.124601-5-zhao1.liu@intel.com/
> > ---
> > target/i386/cpu.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> > index fedeeea151ee..eceda9865b8f 100644
> > --- a/target/i386/cpu.c
> > +++ b/target/i386/cpu.c
> > @@ -558,7 +558,8 @@ static void encode_cache_cpuid8000001d(CPUCacheInfo *cache,
> >
> > *eax = CACHE_TYPE(cache->type) | CACHE_LEVEL(cache->level) |
> > (cache->self_init ? CACHE_SELF_INIT_LEVEL : 0);
> > - *eax |= max_thread_ids_for_cache(topo_info, cache->share_level) << 14;
> > + /* Bits 25:14 - NumSharingCache: maximum 4095. */
> > + *eax |= MIN(max_thread_ids_for_cache(topo_info, cache->share_level), 4095) << 14;
>
> Will this be more meaningful?
>
> *eax |=
> max_thread_ids_for_cache(topo_info, cache->share_level) & 0xFFF << 14

Hi Babu, thank you for the feedback!

That approach relies on truncation, which could lead to even more
misleading results: once the count wraps past the 12-bit field, the
reported number of sharing threads becomes far smaller than the real
one (see the sketch below). Such cases shouldn't exist on actual
hardware today; it's only QEMU that supports this many CPUs and such
custom topologies.
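
For illustration, here is a minimal standalone sketch (plain C, not
QEMU code; the names are made up) contrasting the two encodings:

#include <stdio.h>

/* CPUID.8000001DH:EAX[25:14] ("NumSharingCache") is only 12 bits wide. */
#define NUM_SHARING_MAX 0xFFFu

static unsigned int clamp(unsigned int v)
{
    return v > NUM_SHARING_MAX ? NUM_SHARING_MAX : v;
}

static unsigned int mask(unsigned int v)
{
    return v & NUM_SHARING_MAX;
}

int main(void)
{
    /* Field values around and beyond the 12-bit limit. */
    unsigned int vals[] = { 4095, 4096, 16383 };

    for (unsigned int i = 0; i < 3; i++) {
        printf("value %5u -> clamp %4u, mask %4u\n",
               vals[i], clamp(vals[i]), mask(vals[i]));
    }
    /*
     * value  4095 -> clamp 4095, mask 4095
     * value  4096 -> clamp 4095, mask    0   (i.e. "1 sharing thread")
     * value 16383 -> clamp 4095, mask 4095
     */
    return 0;
}

With masking, 4096 collapses to 0 and reports a single sharing thread,
while clamping at least saturates at the architectural maximum.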

Previously, when Intel handled similar cases where the topology space
wasn't wide enough, it encoded the maximum value rather than truncating,
which is what I'm doing here (please refer to the description of leaf
0x1 in patch 5, and the similar fix for Intel's 0x4 leaf in patch 6).
In the future, if actual hardware reaches such CPU counts and defines
special behavior, we can update accordingly. At least for now, this
avoids the overflow caused by special topologies in QEMU emulation.
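
As a side note, the overflow example in the commit message works out as
6 + 4 + 4 = 14 bits; the snippet below is just a standalone sketch of
that ceil(log2(n)) arithmetic, not QEMU's topology code:

#include <stdio.h>

/* ceil(log2(count)): APIC ID bits needed to enumerate 'count' items. */
static unsigned int bits_for_count(unsigned int count)
{
    unsigned int bits = 0;

    while ((1u << bits) < count) {
        bits++;
    }
    return bits;
}

int main(void)
{
    /* -smp threads=33,cores=9,modules=9 */
    unsigned int die_offset = bits_for_count(33) +   /* 6 bits */
                              bits_for_count(9)  +   /* 4 bits */
                              bits_for_count(9);     /* 4 bits */

    /*
     * 14 bits at the die level: a die-level cache can then be shared by
     * up to 2^14 threads, so NumSharingCache would need values up to
     * 16383, which no longer fits in EAX[25:14].
     */
    printf("die-level offset = %u bits\n", die_offset);
    return 0;
}
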
Thanks,
Zhao
Thread overview: 20+ messages
2025-07-14 8:08 [PATCH v2 0/7] i386/cpu: Clean Up Reserved CPUID Leaves & Topology Overflow Fix Zhao Liu
2025-07-14 8:08 ` [PATCH v2 1/7] i386/cpu: Mark EBX/ECX/EDX in CPUID 0x80000000 leaf as reserved for Intel Zhao Liu
2025-07-14 8:15 ` Xiaoyao Li
2025-07-14 8:08 ` [PATCH v2 2/7] i386/cpu: Mark CPUID 0x80000008 ECX bits[0:7] & [12:15] as reserved for Intel/Zhaoxin Zhao Liu
2025-07-14 8:27 ` Xiaoyao Li
2025-07-14 9:23 ` Zhao Liu
2025-07-14 8:08 ` [PATCH v2 3/7] i386/cpu: Reorder CPUID leaves in cpu_x86_cpuid() Zhao Liu
2025-07-14 8:08 ` [PATCH v2 4/7] i386/cpu: Fix number of addressable IDs field for CPUID.01H.EBX[23:16] Zhao Liu
2025-07-14 8:29 ` Xiaoyao Li
2025-07-16 15:31 ` Michael Tokarev
2025-07-17 3:06 ` Zhao Liu
2025-07-17 3:25 ` Michael Tokarev
2025-07-17 4:09 ` Zhao Liu
2025-07-14 8:08 ` [PATCH v2 5/7] i386/cpu: Fix cpu number overflow in CPUID.01H.EBX[23:16] Zhao Liu
2025-07-14 8:08 ` [PATCH v2 6/7] i386/cpu: Fix overflow of cache topology fields in CPUID.04H Zhao Liu
2025-07-14 8:08 ` [PATCH v2 7/7] i386/cpu: Honor maximum value for CPUID.8000001DH.EAX[25:14] Zhao Liu
2025-07-14 14:51 ` Moger, Babu
2025-07-14 15:41 ` Zhao Liu [this message]
2025-07-14 15:25 ` Moger, Babu
2025-07-14 8:25 ` [PATCH v2 0/7] i386/cpu: Clean Up Reserved CPUID Leaves & Topology Overflow Fix Paolo Bonzini