Date: Mon, 27 Apr 2015 11:06:07 +0530
From: Bharata B Rao
Reply-To: bharata@linux.vnet.ibm.com
To: qemu-devel@nongnu.org
Cc: "Nikunj A. Dadhania", aik@ozlabs.ru, mdroth@linux.vnet.ibm.com, agraf@suse.de, qemu-ppc@nongnu.org, tyreld@linux.vnet.ibm.com, nfont@linux.vnet.ibm.com, imammedo@redhat.com, afaerber@suse.de, david@gibson.dropbear.id.au
Subject: Re: [Qemu-devel] [RFC PATCH v3 05/24] spapr: Reorganize CPU dt generation code
Message-ID: <20150427053607.GC18380@in.ibm.com>
In-Reply-To: <20150426114748.GB18380@in.ibm.com>
References: <1429858066-12088-1-git-send-email-bharata@linux.vnet.ibm.com> <1429858066-12088-6-git-send-email-bharata@linux.vnet.ibm.com> <20150426114748.GB18380@in.ibm.com>

On Sun, Apr 26, 2015 at 05:17:48PM +0530, Bharata B Rao wrote:
> On Fri, Apr 24, 2015 at 12:17:27PM +0530, Bharata B Rao wrote:
> > Reorganize CPU device tree generation code so that it can be reused from
> > the hotplug path. CPU DT entries are now generated from
> > spapr_finalize_fdt() instead of spapr_create_fdt_skel().
>
> Creating CPU DT entries from spapr_finalize_fdt() instead of
> spapr_create_fdt_skel() has an interesting side effect.
>
> In both cases I am adding CPU DT nodes from QEMU in the same order, but
> I am not sure why the guest kernel discovers them in a different order in
> each case.

Nikunj and I tracked this down to the difference in the device tree APIs
used in the two cases. When the CPU DT nodes are created from
spapr_create_fdt_skel(), we use fdt_begin_node(), which does a sequential
write, so the CPU DT nodes end up in the same order in which they are
created. In my patch, however, the CPU DT entries are created in
spapr_finalize_fdt() using fdt_add_subnode(), which writes every CPU DT
node at the same parent offset. As a result, the CPU DT nodes end up in
reverse order in the FDT.
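
For reference, here is a minimal standalone sketch contrasting the two
libfdt usage patterns described above (error handling omitted; the buffer
size and node names are illustrative and not taken from the patch):

#include <libfdt.h>

static char buf[4096];

/* Sequential-write API: nodes land in the order the calls are made. */
static void build_with_sequential_write_api(void)
{
    fdt_create(buf, sizeof(buf));
    fdt_finish_reservemap(buf);
    fdt_begin_node(buf, "");                  /* root */
    fdt_begin_node(buf, "cpus");
    fdt_begin_node(buf, "PowerPC,POWER8@0");
    fdt_end_node(buf);
    fdt_begin_node(buf, "PowerPC,POWER8@8");  /* stays after cpu@0 */
    fdt_end_node(buf);
    fdt_end_node(buf);                        /* /cpus */
    fdt_end_node(buf);                        /* / */
    fdt_finish(buf);
}

/*
 * Read-write API: each fdt_add_subnode() inserts the new node at the same
 * offset under the parent, so the node added last shows up first.
 */
static void build_with_read_write_api(void)
{
    int cpus;

    fdt_create_empty_tree(buf, sizeof(buf));
    cpus = fdt_add_subnode(buf, 0, "cpus");
    fdt_add_subnode(buf, cpus, "PowerPC,POWER8@0");
    fdt_add_subnode(buf, cpus, "PowerPC,POWER8@8"); /* ends up before cpu@0 */
}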
>
> > +static void spapr_populate_cpus_dt_node(void *fdt, sPAPREnvironment *spapr)
> > +{
> > +    CPUState *cs;
> > +    int cpus_offset;
> > +    char *nodename;
> > +    int smt = kvmppc_smt_threads();
> > +
> > +    cpus_offset = fdt_add_subnode(fdt, 0, "cpus");
> > +    _FDT(cpus_offset);
> > +    _FDT((fdt_setprop_cell(fdt, cpus_offset, "#address-cells", 0x1)));
> > +    _FDT((fdt_setprop_cell(fdt, cpus_offset, "#size-cells", 0x0)));
> > +
> > +    CPU_FOREACH(cs) {
> > +        PowerPCCPU *cpu = POWERPC_CPU(cs);
> > +        int index = ppc_get_vcpu_dt_id(cpu);
> > +        DeviceClass *dc = DEVICE_GET_CLASS(cs);
> > +        int offset;
> > +
> > +        if ((index % smt) != 0) {
> > +            continue;
> > +        }
> > +
> > +        nodename = g_strdup_printf("%s@%x", dc->fw_name, index);
> > +        offset = fdt_add_subnode(fdt, cpus_offset, nodename);
> > +        g_free(nodename);
> > +        _FDT(offset);
> > +        spapr_populate_cpu_dt(cs, fdt, offset);
> > +    }
>
> I can simply fix this by walking the CPUs in reverse order in the above
> code, which makes the guest kernel discover the CPU DT nodes in the right
> order.
>
> s/CPU_FOREACH(cs)/CPU_FOREACH_REVERSE(cs)/ will solve this problem. Would
> this be the right approach, or should we just leave it to the guest kernel
> to discover and enumerate CPUs in whatever order it finds the DT nodes in
> the FDT?

So using CPU_FOREACH_REVERSE(cs) appears to be the right way to handle this.

Regards,
Bharata.
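
For completeness, the patch hunk quoted above with the proposed
s/CPU_FOREACH/CPU_FOREACH_REVERSE/ substitution applied, shown unquoted as
a sketch of the suggested change (not a tested replacement):

static void spapr_populate_cpus_dt_node(void *fdt, sPAPREnvironment *spapr)
{
    CPUState *cs;
    int cpus_offset;
    char *nodename;
    int smt = kvmppc_smt_threads();

    cpus_offset = fdt_add_subnode(fdt, 0, "cpus");
    _FDT(cpus_offset);
    _FDT((fdt_setprop_cell(fdt, cpus_offset, "#address-cells", 0x1)));
    _FDT((fdt_setprop_cell(fdt, cpus_offset, "#size-cells", 0x0)));

    /*
     * Walk the CPU list in reverse: fdt_add_subnode() inserts each new
     * subnode ahead of its siblings, so reversing the walk makes the
     * guest see the CPU nodes in ascending order again.
     */
    CPU_FOREACH_REVERSE(cs) {
        PowerPCCPU *cpu = POWERPC_CPU(cs);
        int index = ppc_get_vcpu_dt_id(cpu);
        DeviceClass *dc = DEVICE_GET_CLASS(cs);
        int offset;

        if ((index % smt) != 0) {
            continue;
        }

        nodename = g_strdup_printf("%s@%x", dc->fw_name, index);
        offset = fdt_add_subnode(fdt, cpus_offset, nodename);
        g_free(nodename);
        _FDT(offset);
        spapr_populate_cpu_dt(cs, fdt, offset);
    }
}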