* [Qemu-devel] CPU hotplug, again
@ 2016-02-23 5:24 David Gibson
2016-02-23 9:40 ` Bharata B Rao
0 siblings, 1 reply; 12+ messages in thread
From: David Gibson @ 2016-02-23 5:24 UTC (permalink / raw)
To: Andreas Färber
Cc: thuth, ehabkost, armbru, bharata, agraf, qemu-devel, qemu-ppc,
pbonzini, imammedo
[-- Attachment #1: Type: text/plain, Size: 1674 bytes --]
Hi Andreas,
I've now found (with Thomas' help) your RFC series for socket/core
based cpu hotplug on x86
(https://github.com/afaerber/qemu-cpu/compare/qom-cpu-x86). It seems
sensible enough as far as it goes, but doesn't seem to address a bunch
of the things that I was attempting to do with the cpu-package
proposal - and which we absolutely need for cpu hotplug on Power.
1) What interface do you envisage beyond cpu_add?
The patches I see just construct extra socket and core objects, but
still control hotplug (for x86) through the cpu_add interface. That
interface is absolutely unusable on Power, since it operates on a
per-thread basis, whereas the PAPR guest<->host interfaces can only
communicate information at a per-core granularity.
2) When hotplugging at core or socket granularity, where would the
code to construct the individual thread objects sit?
Your series has the construction done in both the machine init path
and the hotplug path. The latter works because hotplug occurs at
thread granularity. If we're hotplugging at core or socket
granularity what would do the construct? The core/socket object
itself (in instance_init? in realize?); the hotplug handler?
something else?
3) How does the management layer determine what is pluggable?
Both the number of pluggable slots, and what it will need to do to
populate them.
4) How do we enforce that toplogies illegal for the platform can't be
constructed?
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] CPU hotplug, again
2016-02-23 5:24 [Qemu-devel] CPU hotplug, again David Gibson
@ 2016-02-23 9:40 ` Bharata B Rao
2016-02-23 9:49 ` Bharata B Rao
2016-02-23 10:05 ` David Gibson
0 siblings, 2 replies; 12+ messages in thread
From: Bharata B Rao @ 2016-02-23 9:40 UTC (permalink / raw)
To: David Gibson
Cc: thuth, ehabkost, armbru, agraf, qemu-devel, qemu-ppc, pbonzini,
imammedo, Andreas Färber
On Tue, Feb 23, 2016 at 04:24:31PM +1100, David Gibson wrote:
> Hi Andreas,
>
> I've now found (with Thomas' help) your RFC series for socket/core
> based cpu hotplug on x86
> (https://github.com/afaerber/qemu-cpu/compare/qom-cpu-x86). It seems
> sensible enough as far as it goes, but doesn't seem to address a bunch
> of the things that I was attempting to do with the cpu-package
> proposal - and which we absolutely need for cpu hotplug on Power.
>
> 1) What interface do you envisage beyond cpu_add?
>
> The patches I see just construct extra socket and core objects, but
> still control hotplug (for x86) through the cpu_add interface. That
> interface is absolutely unusable on Power, since it operates on a
> per-thread basis, whereas the PAPR guest<->host interfaces can only
> communicate information at a per-core granularity.
>
> 2) When hotplugging at core or socket granularity, where would the
> code to construct the individual thread objects sit?
>
> Your series has the construction done in both the machine init path
> and the hotplug path. The latter works because hotplug occurs at
> thread granularity. If we're hotplugging at core or socket
> granularity what would do the construct? The core/socket object
> itself (in instance_init? in realize?); the hotplug handler?
> something else?
>
> 3) How does the management layer determine what is pluggable?
>
> Both the number of pluggable slots, and what it will need to do to
> populate them.
>
> 4) How do we enforce that toplogies illegal for the platform can't be
> constructed?
5) QOM-links
Andreas, You have often talked about setting up links from machine object
to the CPU objects. Would the below code correctly capture that idea of
yours ?
#define SPAPR_MACHINE_CPU_CORE_PROP "core"
/* MachineClass.init for sPAPR */
static void ppc_spapr_init(MachineState *machine)
{
sPAPRMachineState *spapr = SPAPR_MACHINE(machine);
int spapr_smp_cores = smp_cpus / smp_threads;
int spapr_max_cores = max_cpus / smp_threads;
...
for (i = 0; i < spapr_max_cores; i++) {
Object *obj = object_new(TYPE_SPAPR_CPU_CORE);
sPAPRCPUCore *core = SPAPR_CPU_CORE(obj);
char name[32];
snprintf(name, sizeof(name), "%s[%d]", SPAPR_MACHINE_CPU_CORE_PROP, i);
/*
* Create links from machine objects to all possible cores.
*/
object_property_add_link(OBJECT(spapr), name, TYPE_SPAPR_CPU_CORE,
(Object **)&spapr->core[i],
NULL, NULL, &error_abort);
/*
* Set the QOM link from machine object to core object for all
* boot time CPUs specified with -smp. For rest of the hotpluggable
* cores this is done from the core hotplug path.
*/
if (i < spapr_smp_cores) {
object_property_set_link(OBJECT(spapr), OBJECT(core),
SPAPR_MACHINE_CPU_CORE_PROP, &error_abort);
}
...
}
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] CPU hotplug, again
2016-02-23 9:40 ` Bharata B Rao
@ 2016-02-23 9:49 ` Bharata B Rao
2016-02-23 10:05 ` David Gibson
1 sibling, 0 replies; 12+ messages in thread
From: Bharata B Rao @ 2016-02-23 9:49 UTC (permalink / raw)
To: David Gibson
Cc: thuth, ehabkost, armbru, agraf, qemu-devel, qemu-ppc, pbonzini,
imammedo, Andreas Färber
On Tue, Feb 23, 2016 at 03:10:26PM +0530, Bharata B Rao wrote:
>
> 5) QOM-links
>
> Andreas, You have often talked about setting up links from machine object
> to the CPU objects. Would the below code correctly capture that idea of
> yours ?
>
> #define SPAPR_MACHINE_CPU_CORE_PROP "core"
>
> /* MachineClass.init for sPAPR */
> static void ppc_spapr_init(MachineState *machine)
> {
> sPAPRMachineState *spapr = SPAPR_MACHINE(machine);
> int spapr_smp_cores = smp_cpus / smp_threads;
> int spapr_max_cores = max_cpus / smp_threads;
>
> ...
> for (i = 0; i < spapr_max_cores; i++) {
> Object *obj = object_new(TYPE_SPAPR_CPU_CORE);
> sPAPRCPUCore *core = SPAPR_CPU_CORE(obj);
> char name[32];
>
> snprintf(name, sizeof(name), "%s[%d]", SPAPR_MACHINE_CPU_CORE_PROP, i);
>
> /*
> * Create links from machine objects to all possible cores.
> */
> object_property_add_link(OBJECT(spapr), name, TYPE_SPAPR_CPU_CORE,
> (Object **)&spapr->core[i],
> NULL, NULL, &error_abort);
>
> /*
> * Set the QOM link from machine object to core object for all
> * boot time CPUs specified with -smp. For rest of the hotpluggable
> * cores this is done from the core hotplug path.
> */
> if (i < spapr_smp_cores) {
> object_property_set_link(OBJECT(spapr), OBJECT(core),
> SPAPR_MACHINE_CPU_CORE_PROP, &error_abort);
> }
BTW s/SPAPR_MACHINE_CPU_CORE_PROP/name in object_property_set_link() above.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] CPU hotplug, again
2016-02-23 9:40 ` Bharata B Rao
2016-02-23 9:49 ` Bharata B Rao
@ 2016-02-23 10:05 ` David Gibson
2016-02-23 11:18 ` Igor Mammedov
1 sibling, 1 reply; 12+ messages in thread
From: David Gibson @ 2016-02-23 10:05 UTC (permalink / raw)
To: Bharata B Rao
Cc: thuth, ehabkost, armbru, agraf, qemu-devel, qemu-ppc, pbonzini,
imammedo, Andreas Färber
[-- Attachment #1: Type: text/plain, Size: 3835 bytes --]
On Tue, Feb 23, 2016 at 03:10:26PM +0530, Bharata B Rao wrote:
> On Tue, Feb 23, 2016 at 04:24:31PM +1100, David Gibson wrote:
> > Hi Andreas,
> >
> > I've now found (with Thomas' help) your RFC series for socket/core
> > based cpu hotplug on x86
> > (https://github.com/afaerber/qemu-cpu/compare/qom-cpu-x86). It seems
> > sensible enough as far as it goes, but doesn't seem to address a bunch
> > of the things that I was attempting to do with the cpu-package
> > proposal - and which we absolutely need for cpu hotplug on Power.
> >
> > 1) What interface do you envisage beyond cpu_add?
> >
> > The patches I see just construct extra socket and core objects, but
> > still control hotplug (for x86) through the cpu_add interface. That
> > interface is absolutely unusable on Power, since it operates on a
> > per-thread basis, whereas the PAPR guest<->host interfaces can only
> > communicate information at a per-core granularity.
> >
> > 2) When hotplugging at core or socket granularity, where would the
> > code to construct the individual thread objects sit?
> >
> > Your series has the construction done in both the machine init path
> > and the hotplug path. The latter works because hotplug occurs at
> > thread granularity. If we're hotplugging at core or socket
> > granularity what would do the construct? The core/socket object
> > itself (in instance_init? in realize?); the hotplug handler?
> > something else?
> >
> > 3) How does the management layer determine what is pluggable?
> >
> > Both the number of pluggable slots, and what it will need to do to
> > populate them.
> >
> > 4) How do we enforce that toplogies illegal for the platform can't be
> > constructed?
>
> 5) QOM-links
>
> Andreas, You have often talked about setting up links from machine object
> to the CPU objects. Would the below code correctly capture that idea of
> yours ?
>
> #define SPAPR_MACHINE_CPU_CORE_PROP "core"
>
> /* MachineClass.init for sPAPR */
> static void ppc_spapr_init(MachineState *machine)
> {
> sPAPRMachineState *spapr = SPAPR_MACHINE(machine);
> int spapr_smp_cores = smp_cpus / smp_threads;
> int spapr_max_cores = max_cpus / smp_threads;
>
> ...
> for (i = 0; i < spapr_max_cores; i++) {
> Object *obj = object_new(TYPE_SPAPR_CPU_CORE);
> sPAPRCPUCore *core = SPAPR_CPU_CORE(obj);
> char name[32];
>
> snprintf(name, sizeof(name), "%s[%d]", SPAPR_MACHINE_CPU_CORE_PROP, i);
>
> /*
> * Create links from machine objects to all possible cores.
> */
> object_property_add_link(OBJECT(spapr), name, TYPE_SPAPR_CPU_CORE,
> (Object **)&spapr->core[i],
> NULL, NULL, &error_abort);
>
> /*
> * Set the QOM link from machine object to core object for all
> * boot time CPUs specified with -smp. For rest of the hotpluggable
> * cores this is done from the core hotplug path.
> */
> if (i < spapr_smp_cores) {
> object_property_set_link(OBJECT(spapr), OBJECT(core),
> SPAPR_MACHINE_CPU_CORE_PROP, &error_abort);
I hope we can at least have a helper function to both construct the
core and create the links, if we can't handle the link creation in the
core object itself.
Having to open-code it in each machine sounds like a recipe for subtle
differences in presentation between platforms, which is exactly what
we want to avoid.
> }
> ...
> }
>
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] CPU hotplug, again
2016-02-23 10:05 ` David Gibson
@ 2016-02-23 11:18 ` Igor Mammedov
2016-02-24 2:01 ` David Gibson
0 siblings, 1 reply; 12+ messages in thread
From: Igor Mammedov @ 2016-02-23 11:18 UTC (permalink / raw)
To: David Gibson
Cc: agraf, thuth, ehabkost, qemu-devel, armbru, qemu-ppc,
Bharata B Rao, pbonzini, Andreas Färber
On Tue, 23 Feb 2016 21:05:04 +1100
David Gibson <david@gibson.dropbear.id.au> wrote:
> On Tue, Feb 23, 2016 at 03:10:26PM +0530, Bharata B Rao wrote:
> > On Tue, Feb 23, 2016 at 04:24:31PM +1100, David Gibson wrote:
> > > Hi Andreas,
> > >
> > > I've now found (with Thomas' help) your RFC series for socket/core
> > > based cpu hotplug on x86
> > > (https://github.com/afaerber/qemu-cpu/compare/qom-cpu-x86). It seems
> > > sensible enough as far as it goes, but doesn't seem to address a bunch
> > > of the things that I was attempting to do with the cpu-package
> > > proposal - and which we absolutely need for cpu hotplug on Power.
> > >
> > > 1) What interface do you envisage beyond cpu_add?
> > >
> > > The patches I see just construct extra socket and core objects, but
> > > still control hotplug (for x86) through the cpu_add interface. That
> > > interface is absolutely unusable on Power, since it operates on a
> > > per-thread basis, whereas the PAPR guest<->host interfaces can only
> > > communicate information at a per-core granularity.
> > >
> > > 2) When hotplugging at core or socket granularity, where would the
> > > code to construct the individual thread objects sit?
> > >
> > > Your series has the construction done in both the machine init path
> > > and the hotplug path. The latter works because hotplug occurs at
> > > thread granularity. If we're hotplugging at core or socket
> > > granularity what would do the construct? The core/socket object
> > > itself (in instance_init? in realize?); the hotplug handler?
> > > something else?
> > >
> > > 3) How does the management layer determine what is pluggable?
> > >
> > > Both the number of pluggable slots, and what it will need to do to
> > > populate them.
> > >
> > > 4) How do we enforce that toplogies illegal for the platform can't be
> > > constructed?
> >
> > 5) QOM-links
> >
> > Andreas, You have often talked about setting up links from machine object
> > to the CPU objects. Would the below code correctly capture that idea of
> > yours ?
> >
> > #define SPAPR_MACHINE_CPU_CORE_PROP "core"
> >
> > /* MachineClass.init for sPAPR */
> > static void ppc_spapr_init(MachineState *machine)
> > {
> > sPAPRMachineState *spapr = SPAPR_MACHINE(machine);
> > int spapr_smp_cores = smp_cpus / smp_threads;
> > int spapr_max_cores = max_cpus / smp_threads;
> >
> > ...
> > for (i = 0; i < spapr_max_cores; i++) {
> > Object *obj = object_new(TYPE_SPAPR_CPU_CORE);
> > sPAPRCPUCore *core = SPAPR_CPU_CORE(obj);
> > char name[32];
> >
> > snprintf(name, sizeof(name), "%s[%d]", SPAPR_MACHINE_CPU_CORE_PROP, i);
> >
> > /*
> > * Create links from machine objects to all possible cores.
> > */
> > object_property_add_link(OBJECT(spapr), name, TYPE_SPAPR_CPU_CORE,
> > (Object **)&spapr->core[i],
> > NULL, NULL, &error_abort);
> >
> > /*
> > * Set the QOM link from machine object to core object for all
> > * boot time CPUs specified with -smp. For rest of the hotpluggable
> > * cores this is done from the core hotplug path.
> > */
> > if (i < spapr_smp_cores) {
> > object_property_set_link(OBJECT(spapr), OBJECT(core),
> > SPAPR_MACHINE_CPU_CORE_PROP, &error_abort);
>
> I hope we can at least have a helper function to both construct the
> core and create the links, if we can't handle the link creation in the
> core object itself.
>
> Having to open-code it in each machine sounds like a recipe for subtle
> differences in presentation between platforms, which is exactly what
> we want to avoid.
Creating links doesn't give us much, it's just adds means for mgmt
to check how many CPUs could be hotplugged without keeping that
state in mgmt like it's now, so links are mostly useless if one
care where CPU is being plugged in.
The rest like enumerating exiting CPUs could be done by
traversing QOM tree, links would just simplify finding
CPUs putting them at fixed namespace.
>
> > }
> > ...
> > }
> >
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] CPU hotplug, again
2016-02-23 11:18 ` Igor Mammedov
@ 2016-02-24 2:01 ` David Gibson
2016-02-24 4:02 ` Bharata B Rao
2016-02-24 10:48 ` Igor Mammedov
0 siblings, 2 replies; 12+ messages in thread
From: David Gibson @ 2016-02-24 2:01 UTC (permalink / raw)
To: Igor Mammedov
Cc: agraf, thuth, ehabkost, qemu-devel, armbru, qemu-ppc,
Bharata B Rao, pbonzini, Andreas Färber
[-- Attachment #1: Type: text/plain, Size: 4910 bytes --]
On Tue, Feb 23, 2016 at 12:18:59PM +0100, Igor Mammedov wrote:
> On Tue, 23 Feb 2016 21:05:04 +1100
> David Gibson <david@gibson.dropbear.id.au> wrote:
>
> > On Tue, Feb 23, 2016 at 03:10:26PM +0530, Bharata B Rao wrote:
> > > On Tue, Feb 23, 2016 at 04:24:31PM +1100, David Gibson wrote:
> > > > Hi Andreas,
> > > >
> > > > I've now found (with Thomas' help) your RFC series for socket/core
> > > > based cpu hotplug on x86
> > > > (https://github.com/afaerber/qemu-cpu/compare/qom-cpu-x86). It seems
> > > > sensible enough as far as it goes, but doesn't seem to address a bunch
> > > > of the things that I was attempting to do with the cpu-package
> > > > proposal - and which we absolutely need for cpu hotplug on Power.
> > > >
> > > > 1) What interface do you envisage beyond cpu_add?
> > > >
> > > > The patches I see just construct extra socket and core objects, but
> > > > still control hotplug (for x86) through the cpu_add interface. That
> > > > interface is absolutely unusable on Power, since it operates on a
> > > > per-thread basis, whereas the PAPR guest<->host interfaces can only
> > > > communicate information at a per-core granularity.
> > > >
> > > > 2) When hotplugging at core or socket granularity, where would the
> > > > code to construct the individual thread objects sit?
> > > >
> > > > Your series has the construction done in both the machine init path
> > > > and the hotplug path. The latter works because hotplug occurs at
> > > > thread granularity. If we're hotplugging at core or socket
> > > > granularity what would do the construct? The core/socket object
> > > > itself (in instance_init? in realize?); the hotplug handler?
> > > > something else?
> > > >
> > > > 3) How does the management layer determine what is pluggable?
> > > >
> > > > Both the number of pluggable slots, and what it will need to do to
> > > > populate them.
> > > >
> > > > 4) How do we enforce that toplogies illegal for the platform can't be
> > > > constructed?
> > >
> > > 5) QOM-links
> > >
> > > Andreas, You have often talked about setting up links from machine object
> > > to the CPU objects. Would the below code correctly capture that idea of
> > > yours ?
> > >
> > > #define SPAPR_MACHINE_CPU_CORE_PROP "core"
> > >
> > > /* MachineClass.init for sPAPR */
> > > static void ppc_spapr_init(MachineState *machine)
> > > {
> > > sPAPRMachineState *spapr = SPAPR_MACHINE(machine);
> > > int spapr_smp_cores = smp_cpus / smp_threads;
> > > int spapr_max_cores = max_cpus / smp_threads;
> > >
> > > ...
> > > for (i = 0; i < spapr_max_cores; i++) {
> > > Object *obj = object_new(TYPE_SPAPR_CPU_CORE);
> > > sPAPRCPUCore *core = SPAPR_CPU_CORE(obj);
> > > char name[32];
> > >
> > > snprintf(name, sizeof(name), "%s[%d]", SPAPR_MACHINE_CPU_CORE_PROP, i);
> > >
> > > /*
> > > * Create links from machine objects to all possible cores.
> > > */
> > > object_property_add_link(OBJECT(spapr), name, TYPE_SPAPR_CPU_CORE,
> > > (Object **)&spapr->core[i],
> > > NULL, NULL, &error_abort);
> > >
> > > /*
> > > * Set the QOM link from machine object to core object for all
> > > * boot time CPUs specified with -smp. For rest of the hotpluggable
> > > * cores this is done from the core hotplug path.
> > > */
> > > if (i < spapr_smp_cores) {
> > > object_property_set_link(OBJECT(spapr), OBJECT(core),
> > > SPAPR_MACHINE_CPU_CORE_PROP, &error_abort);
> >
> > I hope we can at least have a helper function to both construct the
> > core and create the links, if we can't handle the link creation in the
> > core object itself.
> >
> > Having to open-code it in each machine sounds like a recipe for subtle
> > differences in presentation between platforms, which is exactly what
> > we want to avoid.
> Creating links doesn't give us much, it's just adds means for mgmt
> to check how many CPUs could be hotplugged without keeping that
> state in mgmt like it's now, so links are mostly useless if one
> care where CPU is being plugged in.
> The rest like enumerating exiting CPUs could be done by
> traversing QOM tree, links would just simplify finding
> CPUs putting them at fixed namespace.
Simplifying finding CPUs is pretty much all we intended the links for.
Well, and then I was expecting a different set of links to simplify
enumerating all the threads in a cpu package/core/socket/whatever.
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] CPU hotplug, again
2016-02-24 2:01 ` David Gibson
@ 2016-02-24 4:02 ` Bharata B Rao
2016-02-24 10:48 ` Igor Mammedov
1 sibling, 0 replies; 12+ messages in thread
From: Bharata B Rao @ 2016-02-24 4:02 UTC (permalink / raw)
To: David Gibson
Cc: thuth, ehabkost, armbru, qemu-devel, agraf, qemu-ppc, pbonzini,
Igor Mammedov, Andreas Färber
On Wed, Feb 24, 2016 at 01:01:06PM +1100, David Gibson wrote:
> On Tue, Feb 23, 2016 at 12:18:59PM +0100, Igor Mammedov wrote:
> > On Tue, 23 Feb 2016 21:05:04 +1100
> > David Gibson <david@gibson.dropbear.id.au> wrote:
> >
> > > On Tue, Feb 23, 2016 at 03:10:26PM +0530, Bharata B Rao wrote:
> > > > On Tue, Feb 23, 2016 at 04:24:31PM +1100, David Gibson wrote:
> > > > > Hi Andreas,
> > > > >
> > > > > I've now found (with Thomas' help) your RFC series for socket/core
> > > > > based cpu hotplug on x86
> > > > > (https://github.com/afaerber/qemu-cpu/compare/qom-cpu-x86). It seems
> > > > > sensible enough as far as it goes, but doesn't seem to address a bunch
> > > > > of the things that I was attempting to do with the cpu-package
> > > > > proposal - and which we absolutely need for cpu hotplug on Power.
> > > > >
> > > > > 1) What interface do you envisage beyond cpu_add?
> > > > >
> > > > > The patches I see just construct extra socket and core objects, but
> > > > > still control hotplug (for x86) through the cpu_add interface. That
> > > > > interface is absolutely unusable on Power, since it operates on a
> > > > > per-thread basis, whereas the PAPR guest<->host interfaces can only
> > > > > communicate information at a per-core granularity.
> > > > >
> > > > > 2) When hotplugging at core or socket granularity, where would the
> > > > > code to construct the individual thread objects sit?
> > > > >
> > > > > Your series has the construction done in both the machine init path
> > > > > and the hotplug path. The latter works because hotplug occurs at
> > > > > thread granularity. If we're hotplugging at core or socket
> > > > > granularity what would do the construct? The core/socket object
> > > > > itself (in instance_init? in realize?); the hotplug handler?
> > > > > something else?
> > > > >
> > > > > 3) How does the management layer determine what is pluggable?
> > > > >
> > > > > Both the number of pluggable slots, and what it will need to do to
> > > > > populate them.
> > > > >
> > > > > 4) How do we enforce that toplogies illegal for the platform can't be
> > > > > constructed?
> > > >
> > > > 5) QOM-links
> > > >
> > > > Andreas, You have often talked about setting up links from machine object
> > > > to the CPU objects. Would the below code correctly capture that idea of
> > > > yours ?
> > > >
> > > > #define SPAPR_MACHINE_CPU_CORE_PROP "core"
> > > >
> > > > /* MachineClass.init for sPAPR */
> > > > static void ppc_spapr_init(MachineState *machine)
> > > > {
> > > > sPAPRMachineState *spapr = SPAPR_MACHINE(machine);
> > > > int spapr_smp_cores = smp_cpus / smp_threads;
> > > > int spapr_max_cores = max_cpus / smp_threads;
> > > >
> > > > ...
> > > > for (i = 0; i < spapr_max_cores; i++) {
> > > > Object *obj = object_new(TYPE_SPAPR_CPU_CORE);
> > > > sPAPRCPUCore *core = SPAPR_CPU_CORE(obj);
> > > > char name[32];
> > > >
> > > > snprintf(name, sizeof(name), "%s[%d]", SPAPR_MACHINE_CPU_CORE_PROP, i);
> > > >
> > > > /*
> > > > * Create links from machine objects to all possible cores.
> > > > */
> > > > object_property_add_link(OBJECT(spapr), name, TYPE_SPAPR_CPU_CORE,
> > > > (Object **)&spapr->core[i],
> > > > NULL, NULL, &error_abort);
> > > >
> > > > /*
> > > > * Set the QOM link from machine object to core object for all
> > > > * boot time CPUs specified with -smp. For rest of the hotpluggable
> > > > * cores this is done from the core hotplug path.
> > > > */
> > > > if (i < spapr_smp_cores) {
> > > > object_property_set_link(OBJECT(spapr), OBJECT(core),
> > > > SPAPR_MACHINE_CPU_CORE_PROP, &error_abort);
> > >
> > > I hope we can at least have a helper function to both construct the
> > > core and create the links, if we can't handle the link creation in the
> > > core object itself.
> > >
> > > Having to open-code it in each machine sounds like a recipe for subtle
> > > differences in presentation between platforms, which is exactly what
> > > we want to avoid.
> > Creating links doesn't give us much, it's just adds means for mgmt
> > to check how many CPUs could be hotplugged without keeping that
> > state in mgmt like it's now, so links are mostly useless if one
> > care where CPU is being plugged in.
> > The rest like enumerating exiting CPUs could be done by
> > traversing QOM tree, links would just simplify finding
> > CPUs putting them at fixed namespace.
>
> Simplifying finding CPUs is pretty much all we intended the links for.
> Well, and then I was expecting a different set of links to simplify
> enumerating all the threads in a cpu package/core/socket/whatever.
That's what child links (socket to core to thread on x86 and core to thread
on powerpc) will give us, no ?
Regards,
Bharata.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] CPU hotplug, again
2016-02-24 2:01 ` David Gibson
2016-02-24 4:02 ` Bharata B Rao
@ 2016-02-24 10:48 ` Igor Mammedov
2016-02-24 11:28 ` David Gibson
1 sibling, 1 reply; 12+ messages in thread
From: Igor Mammedov @ 2016-02-24 10:48 UTC (permalink / raw)
To: David Gibson
Cc: agraf, thuth, ehabkost, qemu-devel, armbru, qemu-ppc,
Bharata B Rao, pbonzini, Andreas Färber
On Wed, 24 Feb 2016 13:01:06 +1100
David Gibson <david@gibson.dropbear.id.au> wrote:
> On Tue, Feb 23, 2016 at 12:18:59PM +0100, Igor Mammedov wrote:
> > On Tue, 23 Feb 2016 21:05:04 +1100
> > David Gibson <david@gibson.dropbear.id.au> wrote:
> >
> > > On Tue, Feb 23, 2016 at 03:10:26PM +0530, Bharata B Rao wrote:
> > > > On Tue, Feb 23, 2016 at 04:24:31PM +1100, David Gibson wrote:
> > > > > Hi Andreas,
> > > > >
> > > > > I've now found (with Thomas' help) your RFC series for socket/core
> > > > > based cpu hotplug on x86
> > > > > (https://github.com/afaerber/qemu-cpu/compare/qom-cpu-x86). It seems
> > > > > sensible enough as far as it goes, but doesn't seem to address a bunch
> > > > > of the things that I was attempting to do with the cpu-package
> > > > > proposal - and which we absolutely need for cpu hotplug on Power.
> > > > >
> > > > > 1) What interface do you envisage beyond cpu_add?
> > > > >
> > > > > The patches I see just construct extra socket and core objects, but
> > > > > still control hotplug (for x86) through the cpu_add interface. That
> > > > > interface is absolutely unusable on Power, since it operates on a
> > > > > per-thread basis, whereas the PAPR guest<->host interfaces can only
> > > > > communicate information at a per-core granularity.
> > > > >
> > > > > 2) When hotplugging at core or socket granularity, where would the
> > > > > code to construct the individual thread objects sit?
> > > > >
> > > > > Your series has the construction done in both the machine init path
> > > > > and the hotplug path. The latter works because hotplug occurs at
> > > > > thread granularity. If we're hotplugging at core or socket
> > > > > granularity what would do the construct? The core/socket object
> > > > > itself (in instance_init? in realize?); the hotplug handler?
> > > > > something else?
> > > > >
> > > > > 3) How does the management layer determine what is pluggable?
> > > > >
> > > > > Both the number of pluggable slots, and what it will need to do to
> > > > > populate them.
> > > > >
> > > > > 4) How do we enforce that toplogies illegal for the platform can't be
> > > > > constructed?
> > > >
> > > > 5) QOM-links
> > > >
> > > > Andreas, You have often talked about setting up links from machine object
> > > > to the CPU objects. Would the below code correctly capture that idea of
> > > > yours ?
> > > >
> > > > #define SPAPR_MACHINE_CPU_CORE_PROP "core"
> > > >
> > > > /* MachineClass.init for sPAPR */
> > > > static void ppc_spapr_init(MachineState *machine)
> > > > {
> > > > sPAPRMachineState *spapr = SPAPR_MACHINE(machine);
> > > > int spapr_smp_cores = smp_cpus / smp_threads;
> > > > int spapr_max_cores = max_cpus / smp_threads;
> > > >
> > > > ...
> > > > for (i = 0; i < spapr_max_cores; i++) {
> > > > Object *obj = object_new(TYPE_SPAPR_CPU_CORE);
> > > > sPAPRCPUCore *core = SPAPR_CPU_CORE(obj);
> > > > char name[32];
> > > >
> > > > snprintf(name, sizeof(name), "%s[%d]", SPAPR_MACHINE_CPU_CORE_PROP, i);
> > > >
> > > > /*
> > > > * Create links from machine objects to all possible cores.
> > > > */
> > > > object_property_add_link(OBJECT(spapr), name, TYPE_SPAPR_CPU_CORE,
> > > > (Object **)&spapr->core[i],
> > > > NULL, NULL, &error_abort);
> > > >
> > > > /*
> > > > * Set the QOM link from machine object to core object for all
> > > > * boot time CPUs specified with -smp. For rest of the hotpluggable
> > > > * cores this is done from the core hotplug path.
> > > > */
> > > > if (i < spapr_smp_cores) {
> > > > object_property_set_link(OBJECT(spapr), OBJECT(core),
> > > > SPAPR_MACHINE_CPU_CORE_PROP, &error_abort);
> > >
> > > I hope we can at least have a helper function to both construct the
> > > core and create the links, if we can't handle the link creation in the
> > > core object itself.
> > >
> > > Having to open-code it in each machine sounds like a recipe for subtle
> > > differences in presentation between platforms, which is exactly what
> > > we want to avoid.
> > Creating links doesn't give us much, it's just adds means for mgmt
> > to check how many CPUs could be hotplugged without keeping that
> > state in mgmt like it's now, so links are mostly useless if one
> > care where CPU is being plugged in.
> > The rest like enumerating exiting CPUs could be done by
> > traversing QOM tree, links would just simplify finding
> > CPUs putting them at fixed namespace.
>
> Simplifying finding CPUs is pretty much all we intended the links for.
Do mgmt really needs it? For machine it's easy to find CPUs under
/machine/[peripheral|unattached] by enumerating entries over there.
For human, one would need to implement a dedicated HMP command that
would do the same, so it doesn't really matter where links are located.
> Well, and then I was expecting a different set of links to simplify
> enumerating all the threads in a cpu package/core/socket/whatever.
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] CPU hotplug, again
2016-02-24 10:48 ` Igor Mammedov
@ 2016-02-24 11:28 ` David Gibson
2016-02-24 13:41 ` Igor Mammedov
0 siblings, 1 reply; 12+ messages in thread
From: David Gibson @ 2016-02-24 11:28 UTC (permalink / raw)
To: Igor Mammedov
Cc: agraf, thuth, ehabkost, qemu-devel, armbru, qemu-ppc,
Bharata B Rao, pbonzini, Andreas Färber
[-- Attachment #1: Type: text/plain, Size: 6020 bytes --]
On Wed, Feb 24, 2016 at 11:48:33AM +0100, Igor Mammedov wrote:
> On Wed, 24 Feb 2016 13:01:06 +1100
> David Gibson <david@gibson.dropbear.id.au> wrote:
>
> > On Tue, Feb 23, 2016 at 12:18:59PM +0100, Igor Mammedov wrote:
> > > On Tue, 23 Feb 2016 21:05:04 +1100
> > > David Gibson <david@gibson.dropbear.id.au> wrote:
> > >
> > > > On Tue, Feb 23, 2016 at 03:10:26PM +0530, Bharata B Rao wrote:
> > > > > On Tue, Feb 23, 2016 at 04:24:31PM +1100, David Gibson wrote:
> > > > > > Hi Andreas,
> > > > > >
> > > > > > I've now found (with Thomas' help) your RFC series for socket/core
> > > > > > based cpu hotplug on x86
> > > > > > (https://github.com/afaerber/qemu-cpu/compare/qom-cpu-x86). It seems
> > > > > > sensible enough as far as it goes, but doesn't seem to address a bunch
> > > > > > of the things that I was attempting to do with the cpu-package
> > > > > > proposal - and which we absolutely need for cpu hotplug on Power.
> > > > > >
> > > > > > 1) What interface do you envisage beyond cpu_add?
> > > > > >
> > > > > > The patches I see just construct extra socket and core objects, but
> > > > > > still control hotplug (for x86) through the cpu_add interface. That
> > > > > > interface is absolutely unusable on Power, since it operates on a
> > > > > > per-thread basis, whereas the PAPR guest<->host interfaces can only
> > > > > > communicate information at a per-core granularity.
> > > > > >
> > > > > > 2) When hotplugging at core or socket granularity, where would the
> > > > > > code to construct the individual thread objects sit?
> > > > > >
> > > > > > Your series has the construction done in both the machine init path
> > > > > > and the hotplug path. The latter works because hotplug occurs at
> > > > > > thread granularity. If we're hotplugging at core or socket
> > > > > > granularity what would do the construct? The core/socket object
> > > > > > itself (in instance_init? in realize?); the hotplug handler?
> > > > > > something else?
> > > > > >
> > > > > > 3) How does the management layer determine what is pluggable?
> > > > > >
> > > > > > Both the number of pluggable slots, and what it will need to do to
> > > > > > populate them.
> > > > > >
> > > > > > 4) How do we enforce that toplogies illegal for the platform can't be
> > > > > > constructed?
> > > > >
> > > > > 5) QOM-links
> > > > >
> > > > > Andreas, You have often talked about setting up links from machine object
> > > > > to the CPU objects. Would the below code correctly capture that idea of
> > > > > yours ?
> > > > >
> > > > > #define SPAPR_MACHINE_CPU_CORE_PROP "core"
> > > > >
> > > > > /* MachineClass.init for sPAPR */
> > > > > static void ppc_spapr_init(MachineState *machine)
> > > > > {
> > > > > sPAPRMachineState *spapr = SPAPR_MACHINE(machine);
> > > > > int spapr_smp_cores = smp_cpus / smp_threads;
> > > > > int spapr_max_cores = max_cpus / smp_threads;
> > > > >
> > > > > ...
> > > > > for (i = 0; i < spapr_max_cores; i++) {
> > > > > Object *obj = object_new(TYPE_SPAPR_CPU_CORE);
> > > > > sPAPRCPUCore *core = SPAPR_CPU_CORE(obj);
> > > > > char name[32];
> > > > >
> > > > > snprintf(name, sizeof(name), "%s[%d]", SPAPR_MACHINE_CPU_CORE_PROP, i);
> > > > >
> > > > > /*
> > > > > * Create links from machine objects to all possible cores.
> > > > > */
> > > > > object_property_add_link(OBJECT(spapr), name, TYPE_SPAPR_CPU_CORE,
> > > > > (Object **)&spapr->core[i],
> > > > > NULL, NULL, &error_abort);
> > > > >
> > > > > /*
> > > > > * Set the QOM link from machine object to core object for all
> > > > > * boot time CPUs specified with -smp. For rest of the hotpluggable
> > > > > * cores this is done from the core hotplug path.
> > > > > */
> > > > > if (i < spapr_smp_cores) {
> > > > > object_property_set_link(OBJECT(spapr), OBJECT(core),
> > > > > SPAPR_MACHINE_CPU_CORE_PROP, &error_abort);
> > > >
> > > > I hope we can at least have a helper function to both construct the
> > > > core and create the links, if we can't handle the link creation in the
> > > > core object itself.
> > > >
> > > > Having to open-code it in each machine sounds like a recipe for subtle
> > > > differences in presentation between platforms, which is exactly what
> > > > we want to avoid.
> > > Creating links doesn't give us much, it's just adds means for mgmt
> > > to check how many CPUs could be hotplugged without keeping that
> > > state in mgmt like it's now, so links are mostly useless if one
> > > care where CPU is being plugged in.
> > > The rest like enumerating exiting CPUs could be done by
> > > traversing QOM tree, links would just simplify finding
> > > CPUs putting them at fixed namespace.
> >
> > Simplifying finding CPUs is pretty much all we intended the links for.
> Do mgmt really needs it? For machine it's easy to find CPUs under
> /machine/[peripheral|unattached] by enumerating entries over there.
> For human, one would need to implement a dedicated HMP command that
> would do the same, so it doesn't really matter where links are
> located.
If we require management to go searching the whole device tree for
cpus, I'm concerned they'll just assume they're in the x86 location
instead, and we'll have to fix it over and over for every platform
that puts them somewhere different.
> > Well, and then I was expecting a different set of links to simplify
> > enumerating all the threads in a cpu package/core/socket/whatever.
> >
>
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] CPU hotplug, again
2016-02-24 11:28 ` David Gibson
@ 2016-02-24 13:41 ` Igor Mammedov
2016-02-25 6:41 ` David Gibson
0 siblings, 1 reply; 12+ messages in thread
From: Igor Mammedov @ 2016-02-24 13:41 UTC (permalink / raw)
To: David Gibson
Cc: agraf, thuth, ehabkost, qemu-devel, armbru, qemu-ppc,
Bharata B Rao, pbonzini, Andreas Färber
On Wed, 24 Feb 2016 22:28:22 +1100
David Gibson <david@gibson.dropbear.id.au> wrote:
> On Wed, Feb 24, 2016 at 11:48:33AM +0100, Igor Mammedov wrote:
> > On Wed, 24 Feb 2016 13:01:06 +1100
> > David Gibson <david@gibson.dropbear.id.au> wrote:
> >
> > > On Tue, Feb 23, 2016 at 12:18:59PM +0100, Igor Mammedov wrote:
> > > > On Tue, 23 Feb 2016 21:05:04 +1100
> > > > David Gibson <david@gibson.dropbear.id.au> wrote:
> > > >
> > > > > On Tue, Feb 23, 2016 at 03:10:26PM +0530, Bharata B Rao wrote:
> > > > > > On Tue, Feb 23, 2016 at 04:24:31PM +1100, David Gibson wrote:
> > > > > > > Hi Andreas,
> > > > > > >
> > > > > > > I've now found (with Thomas' help) your RFC series for socket/core
> > > > > > > based cpu hotplug on x86
> > > > > > > (https://github.com/afaerber/qemu-cpu/compare/qom-cpu-x86). It seems
> > > > > > > sensible enough as far as it goes, but doesn't seem to address a bunch
> > > > > > > of the things that I was attempting to do with the cpu-package
> > > > > > > proposal - and which we absolutely need for cpu hotplug on Power.
> > > > > > >
> > > > > > > 1) What interface do you envisage beyond cpu_add?
> > > > > > >
> > > > > > > The patches I see just construct extra socket and core objects, but
> > > > > > > still control hotplug (for x86) through the cpu_add interface. That
> > > > > > > interface is absolutely unusable on Power, since it operates on a
> > > > > > > per-thread basis, whereas the PAPR guest<->host interfaces can only
> > > > > > > communicate information at a per-core granularity.
> > > > > > >
> > > > > > > 2) When hotplugging at core or socket granularity, where would the
> > > > > > > code to construct the individual thread objects sit?
> > > > > > >
> > > > > > > Your series has the construction done in both the machine init path
> > > > > > > and the hotplug path. The latter works because hotplug occurs at
> > > > > > > thread granularity. If we're hotplugging at core or socket
> > > > > > > granularity what would do the construct? The core/socket object
> > > > > > > itself (in instance_init? in realize?); the hotplug handler?
> > > > > > > something else?
> > > > > > >
> > > > > > > 3) How does the management layer determine what is pluggable?
> > > > > > >
> > > > > > > Both the number of pluggable slots, and what it will need to do to
> > > > > > > populate them.
> > > > > > >
> > > > > > > 4) How do we enforce that toplogies illegal for the platform can't be
> > > > > > > constructed?
> > > > > >
> > > > > > 5) QOM-links
> > > > > >
> > > > > > Andreas, You have often talked about setting up links from machine object
> > > > > > to the CPU objects. Would the below code correctly capture that idea of
> > > > > > yours ?
> > > > > >
> > > > > > #define SPAPR_MACHINE_CPU_CORE_PROP "core"
> > > > > >
> > > > > > /* MachineClass.init for sPAPR */
> > > > > > static void ppc_spapr_init(MachineState *machine)
> > > > > > {
> > > > > > sPAPRMachineState *spapr = SPAPR_MACHINE(machine);
> > > > > > int spapr_smp_cores = smp_cpus / smp_threads;
> > > > > > int spapr_max_cores = max_cpus / smp_threads;
> > > > > >
> > > > > > ...
> > > > > > for (i = 0; i < spapr_max_cores; i++) {
> > > > > > Object *obj = object_new(TYPE_SPAPR_CPU_CORE);
> > > > > > sPAPRCPUCore *core = SPAPR_CPU_CORE(obj);
> > > > > > char name[32];
> > > > > >
> > > > > > snprintf(name, sizeof(name), "%s[%d]", SPAPR_MACHINE_CPU_CORE_PROP, i);
> > > > > >
> > > > > > /*
> > > > > > * Create links from machine objects to all possible cores.
> > > > > > */
> > > > > > object_property_add_link(OBJECT(spapr), name, TYPE_SPAPR_CPU_CORE,
> > > > > > (Object **)&spapr->core[i],
> > > > > > NULL, NULL, &error_abort);
> > > > > >
> > > > > > /*
> > > > > > * Set the QOM link from machine object to core object for all
> > > > > > * boot time CPUs specified with -smp. For rest of the hotpluggable
> > > > > > * cores this is done from the core hotplug path.
> > > > > > */
> > > > > > if (i < spapr_smp_cores) {
> > > > > > object_property_set_link(OBJECT(spapr), OBJECT(core),
> > > > > > SPAPR_MACHINE_CPU_CORE_PROP, &error_abort);
> > > > >
> > > > > I hope we can at least have a helper function to both construct the
> > > > > core and create the links, if we can't handle the link creation in the
> > > > > core object itself.
> > > > >
> > > > > Having to open-code it in each machine sounds like a recipe for subtle
> > > > > differences in presentation between platforms, which is exactly what
> > > > > we want to avoid.
> > > > Creating links doesn't give us much, it's just adds means for mgmt
> > > > to check how many CPUs could be hotplugged without keeping that
> > > > state in mgmt like it's now, so links are mostly useless if one
> > > > care where CPU is being plugged in.
> > > > The rest like enumerating exiting CPUs could be done by
> > > > traversing QOM tree, links would just simplify finding
> > > > CPUs putting them at fixed namespace.
> > >
> > > Simplifying finding CPUs is pretty much all we intended the links for.
> > Do mgmt really needs it? For machine it's easy to find CPUs under
> > /machine/[peripheral|unattached] by enumerating entries over there.
> > For human, one would need to implement a dedicated HMP command that
> > would do the same, so it doesn't really matter where links are
> > located.
>
> If we require management to go searching the whole device tree for
> cpus, I'm concerned they'll just assume they're in the x86 location
> instead, and we'll have to fix it over and over for every platform
> that puts them somewhere different.
CPUs are inherited from Device so inherited behaviour is that they
are pretty much at fixed location /machine/[peripheral|unattached]
regardless of platform QOM tree wise, like every other device.
> > > Well, and then I was expecting a different set of links to simplify
> > > enumerating all the threads in a cpu package/core/socket/whatever.
> > >
> >
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] CPU hotplug, again
2016-02-24 13:41 ` Igor Mammedov
@ 2016-02-25 6:41 ` David Gibson
2016-02-25 13:29 ` Igor Mammedov
0 siblings, 1 reply; 12+ messages in thread
From: David Gibson @ 2016-02-25 6:41 UTC (permalink / raw)
To: Igor Mammedov
Cc: agraf, thuth, ehabkost, qemu-devel, armbru, qemu-ppc,
Bharata B Rao, pbonzini, Andreas Färber
[-- Attachment #1: Type: text/plain, Size: 7037 bytes --]
On Wed, Feb 24, 2016 at 02:41:17PM +0100, Igor Mammedov wrote:
> On Wed, 24 Feb 2016 22:28:22 +1100
> David Gibson <david@gibson.dropbear.id.au> wrote:
>
> > On Wed, Feb 24, 2016 at 11:48:33AM +0100, Igor Mammedov wrote:
> > > On Wed, 24 Feb 2016 13:01:06 +1100
> > > David Gibson <david@gibson.dropbear.id.au> wrote:
> > >
> > > > On Tue, Feb 23, 2016 at 12:18:59PM +0100, Igor Mammedov wrote:
> > > > > On Tue, 23 Feb 2016 21:05:04 +1100
> > > > > David Gibson <david@gibson.dropbear.id.au> wrote:
> > > > >
> > > > > > On Tue, Feb 23, 2016 at 03:10:26PM +0530, Bharata B Rao wrote:
> > > > > > > On Tue, Feb 23, 2016 at 04:24:31PM +1100, David Gibson wrote:
> > > > > > > > Hi Andreas,
> > > > > > > >
> > > > > > > > I've now found (with Thomas' help) your RFC series for socket/core
> > > > > > > > based cpu hotplug on x86
> > > > > > > > (https://github.com/afaerber/qemu-cpu/compare/qom-cpu-x86). It seems
> > > > > > > > sensible enough as far as it goes, but doesn't seem to address a bunch
> > > > > > > > of the things that I was attempting to do with the cpu-package
> > > > > > > > proposal - and which we absolutely need for cpu hotplug on Power.
> > > > > > > >
> > > > > > > > 1) What interface do you envisage beyond cpu_add?
> > > > > > > >
> > > > > > > > The patches I see just construct extra socket and core objects, but
> > > > > > > > still control hotplug (for x86) through the cpu_add interface. That
> > > > > > > > interface is absolutely unusable on Power, since it operates on a
> > > > > > > > per-thread basis, whereas the PAPR guest<->host interfaces can only
> > > > > > > > communicate information at a per-core granularity.
> > > > > > > >
> > > > > > > > 2) When hotplugging at core or socket granularity, where would the
> > > > > > > > code to construct the individual thread objects sit?
> > > > > > > >
> > > > > > > > Your series has the construction done in both the machine init path
> > > > > > > > and the hotplug path. The latter works because hotplug occurs at
> > > > > > > > thread granularity. If we're hotplugging at core or socket
> > > > > > > > granularity what would do the construct? The core/socket object
> > > > > > > > itself (in instance_init? in realize?); the hotplug handler?
> > > > > > > > something else?
> > > > > > > >
> > > > > > > > 3) How does the management layer determine what is pluggable?
> > > > > > > >
> > > > > > > > Both the number of pluggable slots, and what it will need to do to
> > > > > > > > populate them.
> > > > > > > >
> > > > > > > > 4) How do we enforce that toplogies illegal for the platform can't be
> > > > > > > > constructed?
> > > > > > >
> > > > > > > 5) QOM-links
> > > > > > >
> > > > > > > Andreas, You have often talked about setting up links from machine object
> > > > > > > to the CPU objects. Would the below code correctly capture that idea of
> > > > > > > yours ?
> > > > > > >
> > > > > > > #define SPAPR_MACHINE_CPU_CORE_PROP "core"
> > > > > > >
> > > > > > > /* MachineClass.init for sPAPR */
> > > > > > > static void ppc_spapr_init(MachineState *machine)
> > > > > > > {
> > > > > > > sPAPRMachineState *spapr = SPAPR_MACHINE(machine);
> > > > > > > int spapr_smp_cores = smp_cpus / smp_threads;
> > > > > > > int spapr_max_cores = max_cpus / smp_threads;
> > > > > > >
> > > > > > > ...
> > > > > > > for (i = 0; i < spapr_max_cores; i++) {
> > > > > > > Object *obj = object_new(TYPE_SPAPR_CPU_CORE);
> > > > > > > sPAPRCPUCore *core = SPAPR_CPU_CORE(obj);
> > > > > > > char name[32];
> > > > > > >
> > > > > > > snprintf(name, sizeof(name), "%s[%d]", SPAPR_MACHINE_CPU_CORE_PROP, i);
> > > > > > >
> > > > > > > /*
> > > > > > > * Create links from machine objects to all possible cores.
> > > > > > > */
> > > > > > > object_property_add_link(OBJECT(spapr), name, TYPE_SPAPR_CPU_CORE,
> > > > > > > (Object **)&spapr->core[i],
> > > > > > > NULL, NULL, &error_abort);
> > > > > > >
> > > > > > > /*
> > > > > > > * Set the QOM link from machine object to core object for all
> > > > > > > * boot time CPUs specified with -smp. For rest of the hotpluggable
> > > > > > > * cores this is done from the core hotplug path.
> > > > > > > */
> > > > > > > if (i < spapr_smp_cores) {
> > > > > > > object_property_set_link(OBJECT(spapr), OBJECT(core),
> > > > > > > SPAPR_MACHINE_CPU_CORE_PROP, &error_abort);
> > > > > >
> > > > > > I hope we can at least have a helper function to both construct the
> > > > > > core and create the links, if we can't handle the link creation in the
> > > > > > core object itself.
> > > > > >
> > > > > > Having to open-code it in each machine sounds like a recipe for subtle
> > > > > > differences in presentation between platforms, which is exactly what
> > > > > > we want to avoid.
> > > > > Creating links doesn't give us much, it's just adds means for mgmt
> > > > > to check how many CPUs could be hotplugged without keeping that
> > > > > state in mgmt like it's now, so links are mostly useless if one
> > > > > care where CPU is being plugged in.
> > > > > The rest like enumerating exiting CPUs could be done by
> > > > > traversing QOM tree, links would just simplify finding
> > > > > CPUs putting them at fixed namespace.
> > > >
> > > > Simplifying finding CPUs is pretty much all we intended the links for.
> > > Do mgmt really needs it? For machine it's easy to find CPUs under
> > > /machine/[peripheral|unattached] by enumerating entries over there.
> > > For human, one would need to implement a dedicated HMP command that
> > > would do the same, so it doesn't really matter where links are
> > > located.
> >
> > If we require management to go searching the whole device tree for
> > cpus, I'm concerned they'll just assume they're in the x86 location
> > instead, and we'll have to fix it over and over for every platform
> > that puts them somewhere different.
> CPUs are inherited from Device so inherited behaviour is that they
> are pretty much at fixed location /machine/[peripheral|unattached]
> regardless of platform QOM tree wise, like every other device.
Hmm.. that's true now, but I can see reasons you might want to put
CPUs on a different bus in future. In particular consider a machine
type modelling real hardware for a modern multisocket machine - these
are often built from several chips on a common fabric, each containing
several CPU cores, but also other peripherals and bus bridges.
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] CPU hotplug, again
2016-02-25 6:41 ` David Gibson
@ 2016-02-25 13:29 ` Igor Mammedov
0 siblings, 0 replies; 12+ messages in thread
From: Igor Mammedov @ 2016-02-25 13:29 UTC (permalink / raw)
To: David Gibson
Cc: agraf, thuth, ehabkost, qemu-devel, armbru, qemu-ppc,
Bharata B Rao, pbonzini, Andreas Färber
On Thu, 25 Feb 2016 17:41:14 +1100
David Gibson <david@gibson.dropbear.id.au> wrote:
> On Wed, Feb 24, 2016 at 02:41:17PM +0100, Igor Mammedov wrote:
> > On Wed, 24 Feb 2016 22:28:22 +1100
> > David Gibson <david@gibson.dropbear.id.au> wrote:
> >
> > > On Wed, Feb 24, 2016 at 11:48:33AM +0100, Igor Mammedov wrote:
> > > > On Wed, 24 Feb 2016 13:01:06 +1100
> > > > David Gibson <david@gibson.dropbear.id.au> wrote:
> > > >
> > > > > On Tue, Feb 23, 2016 at 12:18:59PM +0100, Igor Mammedov wrote:
> > > > > > On Tue, 23 Feb 2016 21:05:04 +1100
> > > > > > David Gibson <david@gibson.dropbear.id.au> wrote:
> > > > > >
> > > > > > > On Tue, Feb 23, 2016 at 03:10:26PM +0530, Bharata B Rao wrote:
> > > > > > > > On Tue, Feb 23, 2016 at 04:24:31PM +1100, David Gibson wrote:
> > > > > > > > > Hi Andreas,
> > > > > > > > >
> > > > > > > > > I've now found (with Thomas' help) your RFC series for socket/core
> > > > > > > > > based cpu hotplug on x86
> > > > > > > > > (https://github.com/afaerber/qemu-cpu/compare/qom-cpu-x86). It seems
> > > > > > > > > sensible enough as far as it goes, but doesn't seem to address a bunch
> > > > > > > > > of the things that I was attempting to do with the cpu-package
> > > > > > > > > proposal - and which we absolutely need for cpu hotplug on Power.
> > > > > > > > >
> > > > > > > > > 1) What interface do you envisage beyond cpu_add?
> > > > > > > > >
> > > > > > > > > The patches I see just construct extra socket and core objects, but
> > > > > > > > > still control hotplug (for x86) through the cpu_add interface. That
> > > > > > > > > interface is absolutely unusable on Power, since it operates on a
> > > > > > > > > per-thread basis, whereas the PAPR guest<->host interfaces can only
> > > > > > > > > communicate information at a per-core granularity.
> > > > > > > > >
> > > > > > > > > 2) When hotplugging at core or socket granularity, where would the
> > > > > > > > > code to construct the individual thread objects sit?
> > > > > > > > >
> > > > > > > > > Your series has the construction done in both the machine init path
> > > > > > > > > and the hotplug path. The latter works because hotplug occurs at
> > > > > > > > > thread granularity. If we're hotplugging at core or socket
> > > > > > > > > granularity what would do the construct? The core/socket object
> > > > > > > > > itself (in instance_init? in realize?); the hotplug handler?
> > > > > > > > > something else?
> > > > > > > > >
> > > > > > > > > 3) How does the management layer determine what is pluggable?
> > > > > > > > >
> > > > > > > > > Both the number of pluggable slots, and what it will need to do to
> > > > > > > > > populate them.
> > > > > > > > >
> > > > > > > > > 4) How do we enforce that toplogies illegal for the platform can't be
> > > > > > > > > constructed?
> > > > > > > >
> > > > > > > > 5) QOM-links
> > > > > > > >
> > > > > > > > Andreas, You have often talked about setting up links from machine object
> > > > > > > > to the CPU objects. Would the below code correctly capture that idea of
> > > > > > > > yours ?
> > > > > > > >
> > > > > > > > #define SPAPR_MACHINE_CPU_CORE_PROP "core"
> > > > > > > >
> > > > > > > > /* MachineClass.init for sPAPR */
> > > > > > > > static void ppc_spapr_init(MachineState *machine)
> > > > > > > > {
> > > > > > > > sPAPRMachineState *spapr = SPAPR_MACHINE(machine);
> > > > > > > > int spapr_smp_cores = smp_cpus / smp_threads;
> > > > > > > > int spapr_max_cores = max_cpus / smp_threads;
> > > > > > > >
> > > > > > > > ...
> > > > > > > > for (i = 0; i < spapr_max_cores; i++) {
> > > > > > > > Object *obj = object_new(TYPE_SPAPR_CPU_CORE);
> > > > > > > > sPAPRCPUCore *core = SPAPR_CPU_CORE(obj);
> > > > > > > > char name[32];
> > > > > > > >
> > > > > > > > snprintf(name, sizeof(name), "%s[%d]", SPAPR_MACHINE_CPU_CORE_PROP, i);
> > > > > > > >
> > > > > > > > /*
> > > > > > > > * Create links from machine objects to all possible cores.
> > > > > > > > */
> > > > > > > > object_property_add_link(OBJECT(spapr), name, TYPE_SPAPR_CPU_CORE,
> > > > > > > > (Object **)&spapr->core[i],
> > > > > > > > NULL, NULL, &error_abort);
> > > > > > > >
> > > > > > > > /*
> > > > > > > > * Set the QOM link from machine object to core object for all
> > > > > > > > * boot time CPUs specified with -smp. For rest of the hotpluggable
> > > > > > > > * cores this is done from the core hotplug path.
> > > > > > > > */
> > > > > > > > if (i < spapr_smp_cores) {
> > > > > > > > object_property_set_link(OBJECT(spapr), OBJECT(core),
> > > > > > > > SPAPR_MACHINE_CPU_CORE_PROP, &error_abort);
> > > > > > >
> > > > > > > I hope we can at least have a helper function to both construct the
> > > > > > > core and create the links, if we can't handle the link creation in the
> > > > > > > core object itself.
> > > > > > >
> > > > > > > Having to open-code it in each machine sounds like a recipe for subtle
> > > > > > > differences in presentation between platforms, which is exactly what
> > > > > > > we want to avoid.
> > > > > > Creating links doesn't give us much, it's just adds means for mgmt
> > > > > > to check how many CPUs could be hotplugged without keeping that
> > > > > > state in mgmt like it's now, so links are mostly useless if one
> > > > > > care where CPU is being plugged in.
> > > > > > The rest like enumerating exiting CPUs could be done by
> > > > > > traversing QOM tree, links would just simplify finding
> > > > > > CPUs putting them at fixed namespace.
> > > > >
> > > > > Simplifying finding CPUs is pretty much all we intended the links for.
> > > > Do mgmt really needs it? For machine it's easy to find CPUs under
> > > > /machine/[peripheral|unattached] by enumerating entries over there.
> > > > For human, one would need to implement a dedicated HMP command that
> > > > would do the same, so it doesn't really matter where links are
> > > > located.
> > >
> > > If we require management to go searching the whole device tree for
> > > cpus, I'm concerned they'll just assume they're in the x86 location
> > > instead, and we'll have to fix it over and over for every platform
> > > that puts them somewhere different.
> > CPUs are inherited from Device so inherited behaviour is that they
> > are pretty much at fixed location /machine/[peripheral|unattached]
> > regardless of platform QOM tree wise, like every other device.
>
> Hmm.. that's true now, but I can see reasons you might want to put
> CPUs on a different bus in future. In particular consider a machine
> type modelling real hardware for a modern multisocket machine - these
> are often built from several chips on a common fabric, each containing
> several CPU cores, but also other peripherals and bus bridges.
yes, currently QOM tree doesn't express device models as composition tree
or as bus tree but in future it might. How it will be in the future
it's hard to say that's one of the reasons I prefer QMP vs qom-get/set
interface as it abstracts us from it.
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2016-02-25 13:29 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2016-02-23 5:24 [Qemu-devel] CPU hotplug, again David Gibson
2016-02-23 9:40 ` Bharata B Rao
2016-02-23 9:49 ` Bharata B Rao
2016-02-23 10:05 ` David Gibson
2016-02-23 11:18 ` Igor Mammedov
2016-02-24 2:01 ` David Gibson
2016-02-24 4:02 ` Bharata B Rao
2016-02-24 10:48 ` Igor Mammedov
2016-02-24 11:28 ` David Gibson
2016-02-24 13:41 ` Igor Mammedov
2016-02-25 6:41 ` David Gibson
2016-02-25 13:29 ` Igor Mammedov
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).