From: "Daniel P. Berrangé" <berrange@redhat.com>
To: Zhao Liu <zhao1.liu@intel.com>
Cc: qemu-devel@nongnu.org, "Thomas Huth" <thuth@redhat.com>,
devel@lists.libvirt.org,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Eduardo Habkost" <eduardo@habkost.net>,
"Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
"Peter Krempa" <pkrempa@redhat.com>,
"Yanan Wang" <wangyanan55@huawei.com>,
"Paolo Bonzini" <pbonzini@redhat.com>
Subject: Re: [PATCH 1/2] hw/core: allow parameter=1 for SMP topology on any machine
Date: Wed, 15 May 2024 18:06:56 +0100 [thread overview]
Message-ID: <ZkTrsDdyGRFzVULG@redhat.com> (raw)
In-Reply-To: <ZkLfVB/1arSlAptC@intel.com>
On Tue, May 14, 2024 at 11:49:40AM +0800, Zhao Liu wrote:
> > I'm failing to see what real world technical problems QEMU faces
> > with a parameter being set to '1' by a mgmt app, when QEMU itself
> > treats all omitted values as being '1' anyway.
> >
> > If we're trying to faithfully model the real world, then restricting
> > the topology based on machine types still looks inherently wrong.
> > The valid topology ought to be constrained based on the named CPU
> > model, e.g. it doesn't make sense to allow 'dies=4' with a Skylake
> > CPU model, only with an EPYC CPU model, especially if we want to
> > model cache info in a way that matches real world silicon better.
>
> Thanks for pointing this out. This issue is related to the Intel CPU
> cache model: currently the Intel code defaults to sharing L3 at the
> die level. This could be resolved by defining an accurate default
> cache topology level per CPU model and making Intel CPU models share
> L3 at the package level, with Cascadelake as the only exception.
>
> Then the user could define any other topology levels (die/module) for
> Icelake and this won't change the cache topology, unless the user adds
> more sockets or further customizes the cache topology in another way [1].
> Do you agree with this solution?
Broadly speaking, yes. Historically we have created trouble for
ourselves (and/or our users) by allowing the creation of "weird"
guest CPU models which don't resemble anything found in real world
silicon. The problems have specifically been around unusual
combinations of CPUID features, e.g. the user enabled X but not Y,
where real silicon always has X + Y enabled, and the guest OS
assumed that is always the case.
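For illustration only, the kind of combination I mean might look
roughly like this (a hypothetical, untested sketch; the exact CPU
model and flag are just examples of the pattern, not a known bug):
  # AVX left visible while XSAVE is masked out - no real silicon ships
  # that pairing, and a guest OS may assume XSAVE whenever it sees AVX
  $ qemu-system-x86_64 -machine q35 -cpu Skylake-Client,-xsave ...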
So if our named CPU models can more faithfully match what you might
see in terms of cache topology in the real world, that's likely to
be a good thing.
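To make the earlier 'dies=4' point concrete (again just an untested
sketch, with made-up core counts):
  # Plausible: an EPYC-style part really does have multiple dies per socket
  $ qemu-system-x86_64 -machine q35 -cpu EPYC \
      -smp 16,sockets=1,dies=4,cores=4,threads=1
  # Dubious: no Skylake package ever shipped with 4 dies
  $ qemu-system-x86_64 -machine q35 -cpu Skylake-Server \
      -smp 16,sockets=1,dies=4,cores=4,threads=1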
> > As above, I think that restrictions based on machine type, while nice and
> > simple, are incorrect long term. If we did impose restrictions based on
> > CPU model, then we could trivially expose this info to mgmt apps via the
> > existing mechanism for querying supported CPU models. Limiting based on
> > CPU model, however, has potentially greater back compat issues, though
> > it would be strictly more faithful to hardware.
>
> I think as long as the default cache topology model is clearly defined,
> users can further customize the CPU topology and adjust the cache
> topology based on it. After all, topology is architectural, not CPU
> model-specific (Linux support for topology does not take into account
> specific CPU models).
>
> For example, on x86, for simplicity, can we assume that all x86 CPU
> models support all x86 topology levels (thread/core/module/die/package),
> without making distinctions based on specific CPU models?
Hmm, true, if we have direct control over cache topology, the
CPU topology is less critical. I'd still be wary of suggesting
it is a good idea to use CPU topology configs that don't reflect
something the CPU vendor has conceivably used in real silicon.
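If we did go down the "all levels always available" route, spelling
every x86 level out explicitly would look something like this (sketch
only; it assumes a QEMU new enough to accept 'modules=' in -smp):
  # thread/core/module/die/package all given explicitly; product is 32
  $ qemu-system-x86_64 -machine q35 -cpu SapphireRapids \
      -smp 32,sockets=1,dies=1,modules=4,cores=4,threads=2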
> That way, as long as the user doesn't change the default topology, the
> guest's cache and other topology information won't be "corrupted".
> And one more question: does this rollback mean that smp's parameters
> must have compatible default values for all architectures?
Historically we preferred "sockets" when filling in missing topology;
more recently we switched to preferring "cores", since high core
counts are generally more common in the real world than high socket
counts.
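Roughly, on a recent QEMU these two end up equivalent, since
unspecified levels default to 1 and the remainder goes to "cores"
(a sketch, not a verified transcript):
  $ qemu-system-x86_64 -smp 8
  $ qemu-system-x86_64 -smp 8,sockets=1,dies=1,cores=8,threads=1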
In theory, at some point one might want to fill in 'dies > 1' for
EPYC, or 'modules > 1' for appropriate Intel CPU models, but doing
the reverse, while theoretically valid, would be weird, as no such
topology exists in real silicon.
Ultimately, if you're allowing QEMU guest vCPU threads to float
freely across host CPUs, there is little point in setting dies/
modules/threads to a value other than 1, because the guest OS
won't benefit from understanding cache differences for dies/
modules/threads/etc if the vCPU can be moved between host CPUs
at any time by the host OS scheduler.
Fine grained control over dies/modules/threads only makes sense
if you are strictly pinning vCPU threads 1:1 to host CPUs.
IOW, simply preferring "cores" is a reasonable long term default
for everything, unless the specific architecture target has no
concept of "cores".
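As a rough illustration of that split (the guest name is made up;
virsh is shown purely as one example of 1:1 pinning):
  # Pinned 1:1 - fine grained dies/modules/threads can then be worth it
  $ virsh vcpupin demo-guest 0 8
  $ virsh vcpupin demo-guest 1 9
  # Floating vCPUs - a flat "cores" topology is usually the sane choice
  $ qemu-system-x86_64 -smp 8,sockets=1,cores=8,threads=1 ...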
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|