From: Igor Mammedov <imammedo@redhat.com>
To: David Gibson <dgibson@redhat.com>
Cc: Peter Krempa <pkrempa@redhat.com>, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [RFC PATCH 1/2] qapi: Add vcpu id to query-hotpluggable-cpus output
Date: Fri, 8 Jul 2016 13:40:38 +0200
Message-ID: <20160708134038.36d18a6c@nial.brq.redhat.com>
In-Reply-To: <20160708121855.36e0702d@voom.fritz.box>
On Fri, 8 Jul 2016 12:18:55 +1000
David Gibson <dgibson@redhat.com> wrote:
> On Thu, 7 Jul 2016 17:17:13 +0200
> Peter Krempa <pkrempa@redhat.com> wrote:
>
> > Add a 'vcpu index' to the output of query-hotpluggable-cpus. The
> > reported index is identical to the linear cpu index taken by the
> > 'cpus' attribute passed to -numa.
>
>
> The problem is, the vcpu index of what? Each entry in the hotpluggable
> cpus table could represent more than one vcpu.
Agreed.
-numa cpus= should take socket/core/thread IDs instead, so that mgmt
could do the layout at start-up time.
For example, if mgmt specifies
-smp cpus=1,sockets=2,cores=2,maxcpus=4
it knows the socket/core layout and can assign CPUs to nodes as desired:
-numa nodeid=0,cpu=[socket-id=0,core-id=0] \
-numa nodeid=1,cpu=[socket-id=0,core-id=1] \
-numa nodeid=2,cpu=[socket-id=1]
That is of course assuming that QEMU would guarantee the IDs are in the ranges [0..sockets), ...
It's so for x86; can we guarantee it for spapr?
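
For illustration, the round trip for the example above could look
roughly like this (just a sketch based on the existing
socket-id/core-id/thread-id props; the CPU type name and qom-path are
made up for the example):

 -> { "execute": "query-hotpluggable-cpus" }
 <- { "return": [
      { "type": "qemu64-x86_64-cpu", "vcpus-count": 1,
        "props": { "socket-id": 1, "core-id": 1, "thread-id": 0 } },
      { "type": "qemu64-x86_64-cpu", "vcpus-count": 1,
        "props": { "socket-id": 1, "core-id": 0, "thread-id": 0 } },
      { "type": "qemu64-x86_64-cpu", "vcpus-count": 1,
        "props": { "socket-id": 0, "core-id": 1, "thread-id": 0 } },
      { "type": "qemu64-x86_64-cpu", "vcpus-count": 1,
        "props": { "socket-id": 0, "core-id": 0, "thread-id": 0 },
        "qom-path": "/machine/unattached/device[0]" }
    ] }

mgmt would then reuse exactly those socket-id/core-id values in the
-numa cpu=[...] options above, instead of guessing linear vcpu indexes.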
>
> > This will allow mgmt apps to reliably map the cpu number to a given
> > topology element without having to reimplement the mapping.
> >
> > Signed-off-by: Peter Krempa <pkrempa@redhat.com>
> > ---
> >  hmp.c            | 1 +
> >  hw/i386/pc.c     | 1 +
> >  hw/ppc/spapr.c   | 1 +
> >  qapi-schema.json | 2 ++
> >  4 files changed, 5 insertions(+)
> >
> > diff --git a/hmp.c b/hmp.c
> > index 0cf5baa..613601e 100644
> > --- a/hmp.c
> > +++ b/hmp.c
> > @@ -2450,6 +2450,7 @@ void hmp_hotpluggable_cpus(Monitor *mon, const QDict *qdict)
> >          monitor_printf(mon, " type: \"%s\"\n", l->value->type);
> >          monitor_printf(mon, " vcpus_count: \"%" PRIu64 "\"\n",
> >                         l->value->vcpus_count);
> > +        monitor_printf(mon, " vcpu_id: \"%" PRIu64 "\"\n", l->value->vcpu_id);
> >          if (l->value->has_qom_path) {
> >              monitor_printf(mon, " qom_path: \"%s\"\n", l->value->qom_path);
> >          }
> > diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> > index f293a0c..4ba02c4 100644
> > --- a/hw/i386/pc.c
> > +++ b/hw/i386/pc.c
> > @@ -2131,6 +2131,7 @@ static HotpluggableCPUList *pc_query_hotpluggable_cpus(MachineState *machine)
> >
> >          cpu_item->type = g_strdup(cpu_type);
> >          cpu_item->vcpus_count = 1;
> > +        cpu_item->vcpu_id = i;
> >          cpu_props->has_socket_id = true;
> >          cpu_props->socket_id = topo.pkg_id;
> >          cpu_props->has_core_id = true;
> > diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> > index 7f33a1b..d1f5195 100644
> > --- a/hw/ppc/spapr.c
> > +++ b/hw/ppc/spapr.c
> > @@ -2378,6 +2378,7 @@ static HotpluggableCPUList *spapr_query_hotpluggable_cpus(MachineState *machine)
> >
> >          cpu_item->type = spapr_get_cpu_core_type(machine->cpu_model);
> >          cpu_item->vcpus_count = smp_threads;
> > +        cpu_item->vcpu_id = i;
>
> This is wrong. This is the index of the core. The individual vcpus
> within the core will have ids starting at core_id and working up.
>
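To make that concrete (a hypothetical example only, with smp_threads=8
and smt=8, following the core_id = i * smt assignment quoted below):

  entry i=0: core-id = 0, vcpus-count = 8  ->  vcpus 0..7
  entry i=1: core-id = 8, vcpus-count = 8  ->  vcpus 8..15

so a single per-entry "vcpu_id = i" can't name all the threads that
entry represents.
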
> >          cpu_props->has_core_id = true;
> >          cpu_props->core_id = i * smt;
> >          /* TODO: add 'has_node/node' here to describe
> > diff --git a/qapi-schema.json b/qapi-schema.json
> > index ba3bf14..6db9294 100644
> > --- a/qapi-schema.json
> > +++ b/qapi-schema.json
> > @@ -4292,6 +4292,7 @@
> >  # @type: CPU object type for usage with device_add command
> >  # @props: list of properties to be used for hotplugging CPU
> >  # @vcpus-count: number of logical VCPU threads @HotpluggableCPU provides
> > +# @vcpu-id: linear index of the vcpu
> >  # @qom-path: #optional link to existing CPU object if CPU is present or
> >  #            omitted if CPU is not present.
> >  #
> > @@ -4300,6 +4301,7 @@
> >  { 'struct': 'HotpluggableCPU',
> >    'data': { 'type': 'str',
> >              'vcpus-count': 'int',
> > +            'vcpu-id': 'int',
> >              'props': 'CpuInstanceProperties',
> >              '*qom-path': 'str'
> >            }
> > --
> > 2.9.0
> >
>
>