From: Like Xu <like.xu@linux.intel.com>
To: "Daniel P. Berrangé" <berrange@redhat.com>,
"Eduardo Habkost" <ehabkost@redhat.com>
Cc: drjones@redhat.com,
Peter Crosthwaite <crosthwaite.peter@gmail.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
like.xu@intel.com, Marcelo Tosatti <mtosatti@redhat.com>,
qemu-devel@nongnu.org, imammedo@redhat.com,
Paolo Bonzini <pbonzini@redhat.com>,
Richard Henderson <rth@twiddle.net>
Subject: Re: [Qemu-devel] [PATCH v1 2/5] vl.c: add -smp, dies=* command line support
Date: Thu, 17 Jan 2019 09:18:29 +0800 [thread overview]
Message-ID: <b2ec9b01-2b16-5822-9e6f-9d1c4dd1cd64@linux.intel.com> (raw)
In-Reply-To: <20190116182606.GH20275@redhat.com>
On 2019/1/17 2:26, Daniel P. Berrangé wrote:
> On Mon, Jan 14, 2019 at 06:51:34PM -0200, Eduardo Habkost wrote:
>> On Mon, Jan 14, 2019 at 08:24:56PM +0800, Like Xu wrote:
>>> This patch updates the check rules on legacy -smp parsing of the user
>>> command line, and it is designed to obey the same restrictions as the
>>> socket/core/thread model.
>>>
>>> Signed-off-by: Like Xu <like.xu@linux.intel.com>
>>
>> This would require the documentation for -smp to be updated.
>> qemu-options.hx still says that "cores=" is the number of cores
>> per socket.
>>
>> Also, I'm not completely sure we should change the meaning of
>> "cores=" and smp_cores to be per-die instead of per-socket. Most
>> machines won't have any code for tracking dies, so we probably
>> shouldn't make the extra complexity affect all machines.[1]
>
> Could we not simply have a 'max-dies' property against the machine
> base class which defaults to 1. Then no existing machine types
> need any changes unless they want to opt-in to supporting
> "dies > 1".
It would be nice to have a max-dies property on the machine base class.
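To make the idea concrete, here is a hypothetical sketch (not the actual QEMU code) of how an -smp total could be computed once a "dies" level exists, with dies defaulting to 1 so that existing machine types are unaffected unless they opt in:

```c
#include <assert.h>

/* Returns the total number of CPUs implied by the topology, treating
 * "cores" as cores-per-die (the interpretation debated in this thread).
 * A dies value of 0 means the user never specified it, i.e. the legacy
 * case, so it is normalized to 1. */
static unsigned smp_total(unsigned sockets, unsigned dies,
                          unsigned cores, unsigned threads)
{
    if (dies == 0) {
        dies = 1;   /* legacy configurations never specify dies */
    }
    return sockets * dies * cores * threads;
}
```

With this default, a machine that never opts into "dies > 1" computes exactly the same totals as before the patch.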
>
>> What would be the disadvantages of a simple -machine
>> "dies-per-socket" option, specific for PC?
>
> Libvirt currently has
>
> <cpu>
>   <topology sockets='1' cores='2' threads='1'/>
> </cpu>
>
> To me the natural way to expand that is to use
>
> <cpu>
>   <topology sockets='1' dies='2' cores='2' threads='1'/>
> </cpu>
>
> but this rather implies dies-per-socket + cores-per-die
> not cores-per-socket. Libvirt could of course convert
> its value from cores-per-die into cores-per-socket
> before giving it to QEMU, albeit with the potential
> for confusion from people comparing the libvirt and QEMU
> level configs
That would be the recommended update to the CPU topology configuration
of libvirt, as well as of other upper-layer applications.
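The conversion Daniel describes could be illustrated by a helper like the following (a hypothetical sketch, not libvirt code): if libvirt models "cores" as cores-per-die while QEMU expects cores-per-socket, libvirt would multiply by dies before handing the value to QEMU.

```c
#include <assert.h>

/* Convert a cores-per-die value (as in the proposed libvirt XML) into
 * the cores-per-socket value a legacy-style -smp would expect.
 * e.g. sockets='1' dies='2' cores='2' -> -smp sockets=1,cores=4 */
static unsigned cores_per_socket(unsigned dies, unsigned cores_per_die)
{
    return dies * cores_per_die;
}
```

This is where the confusion Daniel mentions would arise: the libvirt-level "cores" and the QEMU-level "cores" would no longer be the same number.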
>
>> Keeping core-id and smp_cores per-socket instead of per-die also
>> seems necessary to keep backwards compatibility on the interface
>> for identifying CPU hotplug slots. Igor, what do you think?
>
> Is there really a backwards compatibility problem, given that
> no existing mgmt app will have created a VM with "dies != 1".
> IOW, if an application adds logic to support configuring a
> VM with "dies > 1" it seems fine that they should need to
> understand how this impacts the way you identify CPUs for
> hotplug.
The impact of the MCP (multi-chip package) model will be documented as
this work continues. Any concerns about hot-plugging CPUs within an MCP
socket are welcome.
>
>> [1] I would even argue that the rest of the -smp options belong
>> to the machine object, and topology rules should be
>> machine-specific, but cleaning this up will require
>> additional work.
>
> If we ever expect to support non-homogenous CPUs then our
> modelling of topology is fatally flawed, as it doesn't allow
> us to specify creating a VM with 1 socket containing 2
> cores and a second socket containing 4 cores. Fixing that
> might require modelling each socket, die, and core as a
> distinct set of nested QOM objects which gets real fun.
Do we really need to take this non-homogeneous step?
Currently there is no such support on physical hosts, AFAIK.
Is there enough benefit?
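For reference, Daniel's non-homogeneous example would indeed require per-socket state rather than one global cores value; a speculative sketch (purely illustrative, not a proposed QEMU design) might look like:

```c
#include <assert.h>

/* Each socket carries its own die/core/thread counts, instead of a
 * single global sockets*dies*cores*threads product. */
struct socket_topo {
    unsigned dies;
    unsigned cores_per_die;
    unsigned threads_per_core;
};

/* Total CPUs across a heterogeneous set of sockets, e.g. one socket
 * with 2 cores and a second socket with 4 cores. */
static unsigned hetero_total(const struct socket_topo *s, unsigned nsockets)
{
    unsigned total = 0;
    for (unsigned i = 0; i < nsockets; i++) {
        total += s[i].dies * s[i].cores_per_die * s[i].threads_per_core;
    }
    return total;
}
```

Modelling each socket, die, and core as nested QOM objects, as Daniel notes, would be considerably more work than this flat array suggests.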
>
>
> Regards,
> Daniel
>
Thread overview: 15+ messages
2019-01-14 12:24 [Qemu-devel] [PATCH v1 0/5] Introduce cpu die topology and enable CPUID.1F for i386 Like Xu
2019-01-14 12:24 ` [Qemu-devel] [PATCH v1 1/5] cpu: introduce die, the new cpu toppolgy emulation level Like Xu
2019-01-14 20:08 ` Eric Blake
2019-01-15 1:34 ` Xu, Like
2019-01-14 12:24 ` [Qemu-devel] [PATCH v1 2/5] vl.c: add -smp, dies=* command line support Like Xu
2019-01-14 20:51 ` Eduardo Habkost
2019-01-15 3:58 ` Xu, Like
2019-01-16 18:26 ` Daniel P. Berrangé
2019-01-17 1:18 ` Like Xu [this message]
2019-01-17 9:53 ` Daniel P. Berrangé
2019-01-14 12:24 ` [Qemu-devel] [PATCH v1 3/5] i386: extend x86_apicid_* functions for smp_dies support Like Xu
2019-01-14 12:24 ` [Qemu-devel] [PATCH v1 4/5] i386: enable CPUID.1F leaf generation based on spec Like Xu
2019-01-14 12:24 ` [Qemu-devel] [PATCH v1 5/5] i386: add CPUID.1F to cpuid_data with host_cpuid check Like Xu
2019-01-17 14:24 ` [Qemu-devel] [PATCH v1 0/5] Introduce cpu die topology and enable CPUID.1F for i386 Igor Mammedov
2019-01-17 14:51 ` Like Xu