Date: Wed, 24 Feb 2016 09:32:44 +0530
From: Bharata B Rao
Message-ID: <20160224040244.GC21081@in.ibm.com>
References: <20160223052431.GS2808@voom.fritz.box>
 <20160223094026.GA21081@in.ibm.com>
 <20160223100504.GW2808@voom.fritz.box>
 <20160223121859.5e93bc68@nial.brq.redhat.com>
 <20160224020106.GH2808@voom.fritz.box>
In-Reply-To: <20160224020106.GH2808@voom.fritz.box>
Subject: Re: [Qemu-devel] CPU hotplug, again
Reply-To: bharata@linux.vnet.ibm.com
To: David Gibson
Cc: thuth@redhat.com, ehabkost@redhat.com, armbru@redhat.com,
    qemu-devel@nongnu.org, agraf@suse.de, qemu-ppc@nongnu.org,
    pbonzini@redhat.com, Igor Mammedov, Andreas Färber

On Wed, Feb 24, 2016 at 01:01:06PM +1100, David Gibson wrote:
> On Tue, Feb 23, 2016 at 12:18:59PM +0100, Igor Mammedov wrote:
> > On Tue, 23 Feb 2016 21:05:04 +1100
> > David Gibson wrote:
> > 
> > > On Tue, Feb 23, 2016 at 03:10:26PM +0530, Bharata B Rao wrote:
> > > > On Tue, Feb 23, 2016 at 04:24:31PM +1100, David Gibson wrote:
> > > > > Hi Andreas,
> > > > > 
> > > > > I've now found (with Thomas' help) your RFC series for socket/core
> > > > > based cpu hotplug on x86
> > > > > (https://github.com/afaerber/qemu-cpu/compare/qom-cpu-x86). It seems
> > > > > sensible enough as far as it goes, but doesn't seem to address a bunch
> > > > > of the things that I was attempting to do with the cpu-package
> > > > > proposal - and which we absolutely need for cpu hotplug on Power.
> > > > > 
> > > > > 1) What interface do you envisage beyond cpu_add?
> > > > > 
> > > > > The patches I see just construct extra socket and core objects, but
> > > > > still control hotplug (for x86) through the cpu_add interface. That
> > > > > interface is absolutely unusable on Power, since it operates on a
> > > > > per-thread basis, whereas the PAPR guest<->host interfaces can only
> > > > > communicate information at a per-core granularity.
> > > > > 
> > > > > 2) When hotplugging at core or socket granularity, where would the
> > > > > code to construct the individual thread objects sit?
> > > > > 
> > > > > Your series has the construction done in both the machine init path
> > > > > and the hotplug path. The latter works because hotplug occurs at
> > > > > thread granularity. If we're hotplugging at core or socket
> > > > > granularity, what would do the construction? The core/socket object
> > > > > itself (in instance_init? in realize?); the hotplug handler?
> > > > > something else?
> > > > > 
> > > > > 3) How does the management layer determine what is pluggable?
> > > > > 
> > > > > Both the number of pluggable slots, and what it will need to do to
> > > > > populate them.
> > > > > 
> > > > > 4) How do we enforce that topologies illegal for the platform can't
> > > > > be constructed?
> > > > 
> > > > 5) QOM-links
> > > > 
> > > > Andreas, you have often talked about setting up links from the machine
> > > > object to the CPU objects. Would the below code correctly capture that
> > > > idea of yours?
> > > > 
> > > > #define SPAPR_MACHINE_CPU_CORE_PROP "core"
> > > > 
> > > > /* MachineClass.init for sPAPR */
> > > > static void ppc_spapr_init(MachineState *machine)
> > > > {
> > > >     sPAPRMachineState *spapr = SPAPR_MACHINE(machine);
> > > >     int spapr_smp_cores = smp_cpus / smp_threads;
> > > >     int spapr_max_cores = max_cpus / smp_threads;
> > > >     int i;
> > > > 
> > > >     ...
> > > >     for (i = 0; i < spapr_max_cores; i++) {
> > > >         Object *obj = object_new(TYPE_SPAPR_CPU_CORE);
> > > >         sPAPRCPUCore *core = SPAPR_CPU_CORE(obj);
> > > >         char name[32];
> > > > 
> > > >         snprintf(name, sizeof(name), "%s[%d]",
> > > >                  SPAPR_MACHINE_CPU_CORE_PROP, i);
> > > > 
> > > >         /*
> > > >          * Create links from the machine object to all possible cores.
> > > >          */
> > > >         object_property_add_link(OBJECT(spapr), name, TYPE_SPAPR_CPU_CORE,
> > > >                                  (Object **)&spapr->core[i],
> > > >                                  NULL, NULL, &error_abort);
> > > > 
> > > >         /*
> > > >          * Set the QOM link from the machine object to the core object
> > > >          * for all boot-time CPUs specified with -smp. For the rest of
> > > >          * the hotpluggable cores this is done from the core hotplug
> > > >          * path.
> > > >          */
> > > >         if (i < spapr_smp_cores) {
> > > >             object_property_set_link(OBJECT(spapr), OBJECT(core),
> > > >                                      name, &error_abort);
> > > 
> > > I hope we can at least have a helper function to both construct the
> > > core and create the links, if we can't handle the link creation in the
> > > core object itself.
> > > 
> > > Having to open-code it in each machine sounds like a recipe for subtle
> > > differences in presentation between platforms, which is exactly what
> > > we want to avoid.
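
Agreed. To make that concrete, here is a rough, untested sketch of such a
helper, reusing the spapr->core[] array and property name from above (the
helper name and the 'plugged' flag are made up for illustration):

/*
 * Construct one core object and wire up the machine-to-core link
 * property in a single place, so that every machine type ends up
 * presenting the same QOM layout.
 */
static sPAPRCPUCore *spapr_cpu_core_create(sPAPRMachineState *spapr,
                                           int index, bool plugged)
{
    Object *obj = object_new(TYPE_SPAPR_CPU_CORE);
    char name[32];

    snprintf(name, sizeof(name), "%s[%d]",
             SPAPR_MACHINE_CPU_CORE_PROP, index);

    /* The link property exists for every possible core... */
    object_property_add_link(OBJECT(spapr), name, TYPE_SPAPR_CPU_CORE,
                             (Object **)&spapr->core[index],
                             NULL, NULL, &error_abort);

    /* ...but is only set once the core is actually (hot)plugged. */
    if (plugged) {
        object_property_set_link(OBJECT(spapr), obj, name, &error_abort);
    }

    return SPAPR_CPU_CORE(obj);
}

Both ppc_spapr_init() and the core hotplug path would then call this
instead of open-coding the link setup.
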
> > Creating links doesn't give us much, it just adds a means for mgmt
> > to check how many CPUs could be hotplugged without keeping that
> > state in mgmt like it does now, so links are mostly useless if one
> > cares where a CPU is being plugged in.
> > The rest, like enumerating existing CPUs, could be done by
> > traversing the QOM tree; links would just simplify finding CPUs by
> > putting them in a fixed namespace.
> 
> Simplifying finding CPUs is pretty much all we intended the links for.
> Well, and then I was expecting a different set of links to simplify
> enumerating all the threads in a cpu package/core/socket/whatever.

That's what child links (socket to core to thread on x86 and core to
thread on powerpc) will give us, no? See the sketch below.

Regards,
Bharata.
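
P.S. A rough sketch of the child links I mean, again untested and with
made-up details: the core's instance_init would parent each thread CPU
under the core with object_property_add_child(), so the threads show up
at fixed QOM paths under the core (e.g. .../core[0]/thread[0]). Here
core->cpu_type is an assumed field holding the concrete CPU class name
derived from -cpu (e.g. "host-powerpc64-cpu"):

static void spapr_cpu_core_instance_init(Object *obj)
{
    sPAPRCPUCore *core = SPAPR_CPU_CORE(obj);
    int i;

    for (i = 0; i < smp_threads; i++) {
        /* core->cpu_type: assumed to name a concrete CPU class */
        Object *thread = object_new(core->cpu_type);

        /*
         * Child property: the thread sits at a fixed path under the
         * core, so mgmt can enumerate threads by walking the QOM tree.
         * "thread[*]" lets QOM pick the next free index automatically.
         */
        object_property_add_child(obj, "thread[*]", thread, &error_abort);
        object_unref(thread);
    }
}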