Subject: [Qemu-devel] [RFC] spapr: ensure that all threads within core are on the same NUMA node
From: Igor Mammedov @ 2017-02-22 17:24 UTC
  To: qemu-devel; +Cc: qemu-ppc, lvivier, David Gibson, thuth, mdroth, agraf

Threads within a core probably shouldn't be on different
NUMA nodes, so if the user has misconfigured the command
line, refuse to start and let the user fix it.
For now, use the first thread of the core as the source
of the core's node-id.
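
As a hypothetical illustration (the topology values below
are mine, not part of the patch), a command line like this
splits the threads of one core across two nodes:

  qemu-system-ppc64 -machine pseries \
      -smp 8,sockets=1,cores=2,threads=4 \
      -numa node,nodeid=0,cpus=0-2 \
      -numa node,nodeid=1,cpus=3-7

Here cpu-index 3 is the last thread of the first core, whose
first thread (cpu-index 0) sits on node 0, so with this patch
realize fails with the new "node-id must be the same" error
instead of silently starting with a split core.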

I'm suggesting this to make sure that it will be possible
later to map the legacy cpu-index based CLI onto the
core-id based internal map in the possible_cpus array and
completely eliminate the numa_info[XXX].node_cpu bitmaps,
leaving only possible_cpus as the storage of mapping
information.
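
For reference, the cpu-index <-> core-id mapping on sPAPR
is linear (cs->cpu_index = cc->core_id + i in the hunk
below), so with threads=4 the indexes work out as:

  core-id 0 -> cpu-index 0..3
  core-id 4 -> cpu-index 4..7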

CCing sPAPR maintainers for an opinion on whether this
enforcement makes sense from the platform point of view,
and whether we can go ahead with it or I should look for
another way to deal with the legacy -numa CLI.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
CC: qemu-ppc@nongnu.org
CC: lvivier@redhat.com
CC: David Gibson <david@gibson.dropbear.id.au>
CC: thuth@redhat.com
CC: mdroth@linux.vnet.ibm.com
CC: agraf@suse.de
---
 hw/ppc/spapr_cpu_core.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/hw/ppc/spapr_cpu_core.c b/hw/ppc/spapr_cpu_core.c
index 55cd045..1499a8b 100644
--- a/hw/ppc/spapr_cpu_core.c
+++ b/hw/ppc/spapr_cpu_core.c
@@ -50,8 +50,6 @@ static void spapr_cpu_init(sPAPRMachineState *spapr, PowerPCCPU *cpu,
                            Error **errp)
 {
     CPUPPCState *env = &cpu->env;
-    CPUState *cs = CPU(cpu);
-    int i;
 
     /* Set time-base frequency to 512 MHz */
     cpu_ppc_tb_init(env, SPAPR_TIMEBASE_FREQ);
@@ -70,12 +68,6 @@ static void spapr_cpu_init(sPAPRMachineState *spapr, PowerPCCPU *cpu,
         }
     }
 
-    /* Set NUMA node for the added CPUs  */
-    i = numa_get_node_for_cpu(cs->cpu_index);
-    if (i < nb_numa_nodes) {
-            cs->numa_node = i;
-    }
-
     xics_cpu_setup(spapr->xics, cpu);
 
     qemu_register_reset(spapr_cpu_reset, cpu);
@@ -159,11 +151,13 @@ static void spapr_cpu_core_realize(DeviceState *dev, Error **errp)
     const char *typename = object_class_get_name(scc->cpu_class);
     size_t size = object_type_get_instance_size(typename);
     Error *local_err = NULL;
+    int core_node_id = numa_get_node_for_cpu(cc->core_id);
     void *obj;
     int i, j;
 
     sc->threads = g_malloc0(size * cc->nr_threads);
     for (i = 0; i < cc->nr_threads; i++) {
+        int node_id;
         char id[32];
         CPUState *cs;
 
@@ -172,6 +166,19 @@ static void spapr_cpu_core_realize(DeviceState *dev, Error **errp)
         object_initialize(obj, size, typename);
         cs = CPU(obj);
         cs->cpu_index = cc->core_id + i;
+
+        /* Set NUMA node for the added CPUs  */
+        node_id = numa_get_node_for_cpu(cs->cpu_index);
+        if (node_id != core_node_id) {
+            error_setg(&local_err, "Invalid node-id=%d of thread[cpu-index: %d]"
+                " on CPU[core-id: %d, node-id: %d], node-id must be the same",
+                 node_id, cs->cpu_index, cc->core_id, core_node_id);
+            goto err;
+        }
+        if (node_id < nb_numa_nodes) {
+            cs->numa_node = node_id;
+        }
+
         snprintf(id, sizeof(id), "thread[%d]", i);
         object_property_add_child(OBJECT(sc), id, obj, &local_err);
         if (local_err) {
-- 
2.7.4
