Date: Wed, 24 Jun 2015 07:55:44 +0530
From: Bharata B Rao
Message-ID: <20150624022544.GD26051@in.ibm.com>
References: <1434709077-17491-1-git-send-email-bharata@linux.vnet.ibm.com>
 <1434709077-17491-4-git-send-email-bharata@linux.vnet.ibm.com>
 <20150623015429.GX13352@voom.redhat.com>
In-Reply-To: <20150623015429.GX13352@voom.redhat.com>
Subject: Re: [Qemu-devel] [RFC PATCH v4 3/5] spapr: Support ibm,dynamic-reconfiguration-memory
Reply-To: bharata@linux.vnet.ibm.com
To: David Gibson
Cc: mdroth@linux.vnet.ibm.com, aik@ozlabs.ru, agraf@suse.de,
 qemu-devel@nongnu.org, qemu-ppc@nongnu.org, tyreld@linux.vnet.ibm.com,
 nfont@linux.vnet.ibm.com, imammedo@redhat.com

On Tue, Jun 23, 2015 at 11:54:29AM +1000, David Gibson wrote:
> On Fri, Jun 19, 2015 at 03:47:55PM +0530, Bharata B Rao wrote:
> > Parse ibm,architecture.vec table obtained from the guest and enable
> > memory node configuration via ibm,dynamic-reconfiguration-memory if guest
> > supports it. This is in preparation to support memory hotplug for
> > sPAPR guests.
> > 
> > This changes the way memory node configuration is done. Currently all
> > memory nodes are built upfront. But after this patch, only memory@0 node
> > for RMA is built upfront. Guest kernel boots with just that and rest of
> > the memory nodes (via memory@XXX or ibm,dynamic-reconfiguration-memory)
> > are built when guest does ibm,client-architecture-support call.
> > 
> > Note: This patch needs a SLOF enhancement which is already part of
> > SLOF binary in QEMU.
> > 
> > Signed-off-by: Bharata B Rao
> 
> [snip]
> 
> > +int spapr_h_cas_compose_response(sPAPRMachineState *spapr,
> > +                                 target_ulong addr, target_ulong size,
> > +                                 bool cpu_update, bool memory_update)
> > +{
> > +    void *fdt, *fdt_skel;
> > +    sPAPRDeviceTreeUpdateHeader hdr = { .version_id = 1 };
> > +
> > +    size -= sizeof(hdr);
> > +
> > +    /* Create sceleton */
> > +    fdt_skel = g_malloc0(size);
> > +    _FDT((fdt_create(fdt_skel, size)));
> > +    _FDT((fdt_begin_node(fdt_skel, "")));
> > +    _FDT((fdt_end_node(fdt_skel)));
> > +    _FDT((fdt_finish(fdt_skel)));
> > +    fdt = g_malloc0(size);
> > +    _FDT((fdt_open_into(fdt_skel, fdt, size)));
> > +    g_free(fdt_skel);
> > +
> > +    /* Fixup cpu nodes */
> > +    if (cpu_update) {
> > +        _FDT((spapr_fixup_cpu_dt(fdt, spapr)));
> > +    }
> 
> The cpu_update parameter seems like its not related to memory hotplug
> at all.  I'm guessing it relates to CPU hotplug, in which case please
> defer it until those patches are ready to go.

This change isn't related to CPU hotplug. Earlier this compose-response
routine did only the CPU device tree fixup, based on some conditions. I
have extended it to check for the availability of the DRCONF_MEMORY
feature and fix up the memory DT accordingly. So this change just checks
whether the CPU fixup is necessary or not; essentially we aren't changing
any behaviour w.r.t. the CPU DT fixup here.
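
To make the intent a bit clearer, here is a rough illustration (not the
exact hunk from this series) of how the caller side ends up using the two
flags. The function name cas_build_response and the parameters
cpu_compat_changed / guest_supports_drconf are made-up stand-ins for the
real ibm,architecture.vec parsing in the CAS hcall:

/*
 * Illustration only: derive the two flags and build the CAS response.
 * Whether the memory ends up as memory@XXX nodes or as a single
 * ibm,dynamic-reconfiguration-memory node is decided inside
 * spapr_h_cas_compose_response(), from memory_update && spapr->dr_lmb_enabled.
 */
static target_ulong cas_build_response(sPAPRMachineState *spapr,
                                       target_ulong fdt_buf_addr,
                                       target_ulong fdt_buf_size,
                                       bool cpu_compat_changed,
                                       bool guest_supports_drconf)
{
    /* CPU nodes are fixed up only when something about them changed. */
    bool cpu_update = cpu_compat_changed;

    /* Memory nodes are (re)generated at CAS time either way. */
    bool memory_update = guest_supports_drconf;

    if (spapr_h_cas_compose_response(spapr, fdt_buf_addr, fdt_buf_size,
                                     cpu_update, memory_update) < 0) {
        return H_HARDWARE;
    }
    return H_SUCCESS;
}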
> 
> > +
> > +    /* Generate memory nodes or ibm,dynamic-reconfiguration-memory node */
> > +    if (memory_update && spapr->dr_lmb_enabled) {
> > +        _FDT((spapr_populate_drconf_memory(spapr, fdt)));
> > +    } else {
> > +        _FDT((spapr_populate_memory(spapr, fdt)));
> > +    }
> > +
> > +    /* Pack resulting tree */
> > +    _FDT((fdt_pack(fdt)));
> > +
> > +    if (fdt_totalsize(fdt) + sizeof(hdr) > size) {
> > +        trace_spapr_cas_failed(size);
> > +        return -1;
> > +    }
> > +
> > +    cpu_physical_memory_write(addr, &hdr, sizeof(hdr));
> > +    cpu_physical_memory_write(addr + sizeof(hdr), fdt, fdt_totalsize(fdt));
> > +    trace_spapr_cas_continue(fdt_totalsize(fdt) + sizeof(hdr));
> > +    g_free(fdt);
> > +
> > +    return 0;
> > +}
> > +
> >  static void spapr_finalize_fdt(sPAPRMachineState *spapr,
> >                                 hwaddr fdt_addr,
> >                                 hwaddr rtas_addr,
> > @@ -756,10 +866,16 @@ static void spapr_finalize_fdt(sPAPRMachineState *spapr,
> >      /* open out the base tree into a temp buffer for the final tweaks */
> >      _FDT((fdt_open_into(spapr->fdt_skel, fdt, FDT_MAX_SIZE)));
> > 
> > -    ret = spapr_populate_memory(spapr, fdt);
> > -    if (ret < 0) {
> > -        fprintf(stderr, "couldn't setup memory nodes in fdt\n");
> > -        exit(1);
> > +    /*
> > +     * Add memory@0 node to represent RMA. Rest of the memory is either
> > +     * represented by memory nodes or ibm,dynamic-reconfiguration-memory
> > +     * node later during ibm,client-architecture-support call.
> > +     */
> > +    for (i = 0; i < nb_numa_nodes; ++i) {
> > +        if (numa_info[i].node_mem) {
> > +            spapr_populate_memory_node(fdt, i, 0, spapr->rma_size);
> > +            break;
> > +        }
> 
> ?? The code doesn't seem to match the comment - you appear to be
> creating a memory@0 node for every NUMA node, not just for the RMA,
> which doesn't make much sense.

I have a break there to ensure that memory@0 is created only once, from
the first node that actually has memory. I am slightly changing this in
the next version so that it also works correctly even when -numa isn't
specified (a rough sketch of what I have in mind is in the P.S. below).

Regards,
Bharata.
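
P.S. Sketch only, untested, of the shape I have in mind for the next
version; spapr_populate_memory_node() is the helper introduced earlier in
this series:

/*
 * Add a single memory@0 node for the RMA.  When -numa is given, hang it
 * off the first NUMA node that actually has memory; otherwise put it on
 * node 0.
 */
if (nb_numa_nodes) {
    for (i = 0; i < nb_numa_nodes; ++i) {
        if (numa_info[i].node_mem) {
            spapr_populate_memory_node(fdt, i, 0, spapr->rma_size);
            break;
        }
    }
} else {
    spapr_populate_memory_node(fdt, 0, 0, spapr->rma_size);
}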