From: Bharata B Rao <bharata@linux.vnet.ibm.com>
To: David Gibson <david@gibson.dropbear.id.au>
Cc: mdroth@linux.vnet.ibm.com, aik@ozlabs.ru, agraf@suse.de,
	qemu-devel@nongnu.org, pbonzini@redhat.com, qemu-ppc@nongnu.org,
	tyreld@linux.vnet.ibm.com, nfont@linux.vnet.ibm.com,
	imammedo@redhat.com, afaerber@suse.de
Subject: Re: [Qemu-devel] [PATCH v5 08/10] spapr: CPU hotplug support
Date: Tue, 1 Dec 2015 10:18:55 +0530
Message-ID: <20151201044855.GA16342@in.ibm.com>
In-Reply-To: <20151201013012.GG31343@voom.redhat.com>

On Tue, Dec 01, 2015 at 12:30:12PM +1100, David Gibson wrote:
> On Fri, Nov 20, 2015 at 06:24:37PM +0530, Bharata B Rao wrote:
> > Support CPU hotplug via device-add command. Set up device tree
> > entries for the hotplugged CPU core and use the existing EPOW event
> > infrastructure to send CPU hotplug notification to the guest.
> > 
> > Create only cores explicitly, from both the boot path and the hotplug path,
> > and let the core's ->plug() handler create the threads of the core.
> > 
> > Also support cold-plugged CPUs that are specified with the -device option
> > on the command line.
> > 
> > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > ---
> >  hw/ppc/spapr.c              | 156 +++++++++++++++++++++++++++++++++++++++++++-
> >  hw/ppc/spapr_events.c       |   3 +
> >  hw/ppc/spapr_rtas.c         |  24 +++++++
> >  target-ppc/translate_init.c |   8 +++
> >  4 files changed, 189 insertions(+), 2 deletions(-)
> > 
> > diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> > index 814b0a6..4434d45 100644
> > --- a/hw/ppc/spapr.c
> > +++ b/hw/ppc/spapr.c
> > @@ -596,6 +596,18 @@ static void spapr_populate_cpu_dt(CPUState *cs, void *fdt, int offset,
> >      size_t page_sizes_prop_size;
> >      uint32_t vcpus_per_socket = smp_threads * smp_cores;
> >      uint32_t pft_size_prop[] = {0, cpu_to_be32(spapr->htab_shift)};
> > +    sPAPRMachineClass *smc = SPAPR_MACHINE_GET_CLASS(qdev_get_machine());
> > +    sPAPRDRConnector *drc;
> > +    sPAPRDRConnectorClass *drck;
> > +    int drc_index;
> > +
> > +    if (smc->dr_cpu_enabled) {
> > +        drc = spapr_dr_connector_by_id(SPAPR_DR_CONNECTOR_TYPE_CPU, index);
> > +        g_assert(drc);
> > +        drck = SPAPR_DR_CONNECTOR_GET_CLASS(drc);
> > +        drc_index = drck->get_index(drc);
> > +        _FDT((fdt_setprop_cell(fdt, offset, "ibm,my-drc-index", drc_index)));
> > +    }
> >  
> >      /* Note: we keep CI large pages off for now because a 64K capable guest
> >       * provisioned with large pages might otherwise try to map a qemu
> > @@ -1739,6 +1751,7 @@ static void ppc_spapr_init(MachineState *machine)
> >      char *filename;
> >      int smt = kvmppc_smt_threads();
> >      int smp_max_cores = DIV_ROUND_UP(max_cpus, smp_threads);
> > +    int spapr_smp_cores = DIV_ROUND_UP(smp_cpus, smp_threads);
> >  
> >      msi_supported = true;
> >  
> > @@ -1818,7 +1831,7 @@ static void ppc_spapr_init(MachineState *machine)
> >      if (machine->cpu_model == NULL) {
> >          machine->cpu_model = kvm_enabled() ? "host" : "POWER7";
> >      }
> > -    for (i = 0; i < smp_cpus; i++) {
> > +    for (i = 0; i < spapr_smp_cores; i++) {
> >          cpu = cpu_ppc_init(machine->cpu_model);
> >          if (cpu == NULL) {
> >              fprintf(stderr, "Unable to find PowerPC CPU definition\n");
> > @@ -2207,10 +2220,135 @@ out:
> >      error_propagate(errp, local_err);
> >  }
> >  
> > +static void *spapr_populate_hotplug_cpu_dt(DeviceState *dev, CPUState *cs,
> > +                                           int *fdt_offset,
> > +                                           sPAPRMachineState *spapr)
> > +{
> > +    PowerPCCPU *cpu = POWERPC_CPU(cs);
> > +    DeviceClass *dc = DEVICE_GET_CLASS(cs);
> > +    int id = ppc_get_vcpu_dt_id(cpu);
> > +    void *fdt;
> > +    int offset, fdt_size;
> > +    char *nodename;
> > +
> > +    fdt = create_device_tree(&fdt_size);
> > +    nodename = g_strdup_printf("%s@%x", dc->fw_name, id);
> > +    offset = fdt_add_subnode(fdt, 0, nodename);
> > +
> > +    spapr_populate_cpu_dt(cs, fdt, offset, spapr);
> > +    g_free(nodename);
> > +
> > +    *fdt_offset = offset;
> > +    return fdt;
> > +}
> > +
> > +static void spapr_cpu_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
> > +                            Error **errp)
> > +{
> > +    sPAPRMachineClass *smc = SPAPR_MACHINE_GET_CLASS(qdev_get_machine());
> > +    sPAPRMachineState *ms = SPAPR_MACHINE(hotplug_dev);
> > +    CPUState *cs = CPU(dev);
> > +    PowerPCCPU *cpu = POWERPC_CPU(cs);
> > +    int id = ppc_get_vcpu_dt_id(cpu);
> > +    sPAPRDRConnector *drc =
> > +        spapr_dr_connector_by_id(SPAPR_DR_CONNECTOR_TYPE_CPU, id);
> > +    sPAPRDRConnectorClass *drck;
> > +    int smt = kvmppc_smt_threads();
> > +    Error *local_err = NULL;
> > +    void *fdt = NULL;
> > +    int i, fdt_offset = 0;
> > +
> > +    /* Set NUMA node for the added CPUs  */
> > +    for (i = 0; i < nb_numa_nodes; i++) {
> > +        if (test_bit(cs->cpu_index, numa_info[i].node_cpu)) {
> > +            cs->numa_node = i;
> > +            break;
> > +        }
> > +    }
> > +
> > +    /*
> > +     * Currently a CPU core and the threads of a core aren't really different
> > +     * from QEMU's point of view since all of them are just CPU devices. Hence
> > +     * there are no separate realize routines for cores and threads.
> > +     * We use the id check below to do things differently for cores and threads.
> > +     *
> > +     * SMT threads return from here; only the main thread (core) will
> > +     * continue, create the remaining threads and signal the hotplug event
> > +     * to the guest.
> > +     */
> > +    if ((id % smt) != 0) {
> > +        return;
> > +    }
> 
> Hmm.  It seems odd to me to have thread 0 of a core handle the
> initialization of the other threads, rather than creating an explicit
> Core QOM object and have that construct the cpu objects for all
> threads under it.

All the threads will go through the realize call and hence the ->plug() call.
So they all end up here, and we need the above check to ensure that we
raise the EPOW interrupt to the guest only once.

Having said that, I agree that it is not good to have the main thread of
the core create the other threads, as is being done here.

I am experimenting with an approach based on bits of what was discussed
here earlier:

http://lists.nongnu.org/archive/html/qemu-devel/2015-04/msg00620.html

In my current WIP patchset, I do have a core device that results in the CPU
thread QOM objects getting created as its children. I am also experimenting to
see whether it will be possible to specify a CPU as a generic device (for both
boot time and hotplug) that works for all archs; I will post something by
early next week.
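
To illustrate the direction, here is a rough sketch (not the actual WIP
patches): the type name, the SPAPR_CPU_CORE cast, the nr_threads field and
the child naming are all illustrative assumptions, and the exact QOM helper
signatures differ between QEMU versions. The idea is that the core device's
realize creates and realizes its thread objects as QOM children:

/* Illustrative sketch only: a core device whose realize creates its threads */
typedef struct sPAPRCPUCore {
    DeviceState parent_obj;
    int nr_threads;                 /* e.g. derived from smp_threads */
    PowerPCCPU **threads;
} sPAPRCPUCore;

static void spapr_cpu_core_realize(DeviceState *dev, Error **errp)
{
    sPAPRCPUCore *core = SPAPR_CPU_CORE(dev);  /* hypothetical OBJECT_CHECK cast */
    Error *local_err = NULL;
    int i;

    core->threads = g_new0(PowerPCCPU *, core->nr_threads);
    for (i = 0; i < core->nr_threads; i++) {
        char name[32];
        /* In practice this would be the model-specific CPU type */
        Object *obj = object_new(TYPE_POWERPC_CPU);

        snprintf(name, sizeof(name), "thread[%d]", i);
        /* Make the thread a QOM child of the core */
        object_property_add_child(OBJECT(core), name, obj, &local_err);
        if (local_err) {
            goto err;
        }
        /* Realizing the thread is what eventually invokes the ->plug() handler */
        object_property_set_bool(obj, true, "realized", &local_err);
        if (local_err) {
            goto err;
        }
        core->threads[i] = POWERPC_CPU(obj);
        object_unref(obj);
    }
    return;

err:
    error_propagate(errp, local_err);
}

With something along these lines, device_add creates one core object per
hotplug request, and the notification to the guest can be tied to the core
device rather than to whichever thread happens to have id % smt == 0.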

Regards,
Bharata.

Thread overview: 36+ messages
2015-11-20 12:54 [Qemu-devel] [PATCH v5 00/10] sPAPR CPU hotplug Bharata B Rao
2015-11-20 12:54 ` [Qemu-devel] [PATCH v5 01/10] vl: Don't allow CPU toplogies with partially filled cores Bharata B Rao
2015-12-01  0:37   ` David Gibson
2015-12-02  3:04     ` Bharata B Rao
2015-12-02 13:52   ` Igor Mammedov
2015-12-02 14:14     ` Eduardo Habkost
2015-12-02 14:38       ` Bharata B Rao
2015-12-02 15:19         ` Eduardo Habkost
2015-11-20 12:54 ` [Qemu-devel] [PATCH v5 02/10] exec: Remove cpu from cpus list during cpu_exec_exit() Bharata B Rao
2015-12-01  0:44   ` David Gibson
2015-12-02  3:47     ` Bharata B Rao
2015-11-20 12:54 ` [Qemu-devel] [PATCH v5 03/10] exec: Do vmstate unregistration from cpu_exec_exit() Bharata B Rao
2015-12-01  0:46   ` David Gibson
2015-11-20 12:54 ` [Qemu-devel] [PATCH v5 04/10] cpu: Reclaim vCPU objects Bharata B Rao
2015-11-30  7:30   ` Alexey Kardashevskiy
2015-12-02  3:50     ` Bharata B Rao
2015-12-01  0:55   ` David Gibson
2015-12-02  5:28     ` Bharata B Rao
2015-11-20 12:54 ` [Qemu-devel] [PATCH v5 05/10] cpu: Add a sync version of cpu_remove() Bharata B Rao
2015-12-01  0:57   ` David Gibson
2015-11-20 12:54 ` [Qemu-devel] [PATCH v5 06/10] xics_kvm: Add cpu_destroy method to XICS Bharata B Rao
2015-12-01  1:01   ` David Gibson
2015-12-02  5:45     ` Bharata B Rao
2015-11-20 12:54 ` [Qemu-devel] [PATCH v5 07/10] spapr: Enable CPU hotplug for pseries-2.5 and add CPU DRC DT entries Bharata B Rao
2015-12-01  1:06   ` David Gibson
2015-12-02  5:47     ` Bharata B Rao
2015-11-20 12:54 ` [Qemu-devel] [PATCH v5 08/10] spapr: CPU hotplug support Bharata B Rao
2015-12-01  1:30   ` David Gibson
2015-12-01  4:48     ` Bharata B Rao [this message]
2015-11-20 12:54 ` [Qemu-devel] [PATCH v5 09/10] spapr: CPU hot unplug support Bharata B Rao
2015-12-01  1:34   ` David Gibson
2015-12-02  5:49     ` Bharata B Rao
2015-11-20 12:54 ` [Qemu-devel] [PATCH v5 10/10] target-ppc: Enable CPU hotplug for POWER8 CPU family Bharata B Rao
2015-11-23 11:54 ` [Qemu-devel] [PATCH v5 00/10] sPAPR CPU hotplug Peter Krempa
2015-11-23 13:04   ` Christian Borntraeger
2015-12-01  1:43     ` David Gibson
