public inbox for linux-kernel@vger.kernel.org
* [PATCH 0/6] timers/migration: Handle heterogeneous CPU capacities
@ 2026-04-23 16:53 Frederic Weisbecker
  2026-04-23 16:53 ` [PATCH 1/6] timers/migration: Fix another hotplug activation race Frederic Weisbecker
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Frederic Weisbecker @ 2026-04-23 16:53 UTC (permalink / raw)
  To: LKML; +Cc: Frederic Weisbecker, Thomas Gleixner, Anna-Maria Behnsen,
	Sehee Jeong

Hi,

This is a late follow-up after:

	https://lore.kernel.org/lkml/20250910074251.8148-1-sehee1.jeong@samsung.com/

To summarize, on systems with heterogeneous CPU capacities, timers
migrate indiscriminately between big and little CPUs. In practice they
often end up on big CPUs, hurting their idle target residency.

Thomas proposed to isolate the hierarchies between big and little CPUs.
So here is an attempt. Note that I haven't tested this on real
heterogeneous hardware, so if you have some, please test it!

git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
	timers/core

HEAD: f0a87af6dab6f3a6dd8a603a2b9d7dcc86fd50e4

Thanks,
	Frederic
---

Frederic Weisbecker (6):
      timers/migration: Fix another hotplug activation race
      timers/migration: Abstract out hierarchy to prepare for CPU capacity awareness
      timers/migration: Track CPUs in a hierarchy
      timers/migration: Split per-capacity hierarchies
      timers/migration: Handle capacity in connect tracepoints
      scripts/timers: Add timer_migration_tree.py

 include/trace/events/timer_migration.h |  24 ++--
 kernel/time/timer_migration.c          | 246 ++++++++++++++++++++++++---------
 kernel/time/timer_migration.h          |  19 +++
 scripts/timer_migration_tree.py        | 122 ++++++++++++++++
 4 files changed, 337 insertions(+), 74 deletions(-)


* [PATCH 1/6] timers/migration: Fix another hotplug activation race
  2026-04-23 16:53 [PATCH 0/6] timers/migration: Handle heterogeneous CPU capacities Frederic Weisbecker
@ 2026-04-23 16:53 ` Frederic Weisbecker
  2026-04-23 16:53 ` [PATCH 2/6] timers/migration: Abstract out hierarchy to prepare for CPU capacity awareness Frederic Weisbecker
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Frederic Weisbecker @ 2026-04-23 16:53 UTC (permalink / raw)
  To: LKML; +Cc: Frederic Weisbecker, Anna-Maria Behnsen, Sehee Jeong,
	Thomas Gleixner

The hotplug control CPU is assumed to be active in the hierarchy, but
that doesn't imply that the root is active. If the current CPU is not
the one that activated the current hierarchy, and the CPU performing
this duty is still halfway through the tree, the root may still be
observed as inactive. This can break the activation of a new root, as
in the following scenario:

1) Initially, the whole system has 64 CPUs and only CPU 63 is awake.

                   [GRP1:0]
                    active
                  /    |    \
                 /     |     \
         [GRP0:0]    [...]    [GRP0:7]
           idle      idle      active
         /   |   \               |
     CPU 0  CPU 1  ...         CPU 63
     idle   idle               active

2) CPU 63 goes idle _but_ due to a #VMEXIT it hasn't yet reached the
   [GRP1:0]->parent dereference (that would be NULL and stop the walk)
   in __walk_groups_from().

                   [GRP1:0]
                     idle
                  /    |    \
                 /     |     \
         [GRP0:0]    [...]    [GRP0:7]
           idle      idle       idle
         /   |   \                |
     CPU 0  CPU 1  ...         CPU 63
     idle   idle                idle

3) CPU 1 wakes up and activates GRP0:0, but hasn't yet managed to
   propagate up to GRP1:0 due to yet another #VMEXIT.

                   [GRP1:0]
                     idle
                  /    |    \
                 /     |     \
         [GRP0:0]    [...]    [GRP0:7]
         active      idle       idle
         /   |   \                |
     CPU 0  CPU 1  ...         CPU 63
     idle  active               idle

4) CPU 0 wakes up and doesn't need to walk above GRP0:0, as that is
   CPU 1's role.

                   [GRP1:0]
                     idle
                  /    |    \
                 /     |     \
         [GRP0:0]    [...]    [GRP0:7]
         active      idle       idle
         /   |   \                |
     CPU 0  CPU 1  ...         CPU 63
    active  active              idle

5) CPU 0 boots CPU 64. It creates a new root for it.

                             [GRP2:0]
                               idle
                           /          \
                          /            \
                   [GRP1:0]           [GRP1:1]
                   idle                 idle
                  /    |    \                \
                 /     |     \                \
         [GRP0:0]    [...]    [GRP0:7]      [GRP0:8]
         active      idle       idle          idle
         /   |   \                |            |
     CPU 0  CPU 1  ...         CPU 63        CPU 64
    active  active              idle         offline

6) CPU 0 activates the new root. But note that GRP1:0 is still idle,
   waiting for CPU 1 to resume from its #VMEXIT and activate it.

                             [GRP2:0]
                              active
                           /          \
                          /            \
                   [GRP1:0]           [GRP1:1]
                   idle                 idle
                  /    |    \                \
                 /     |     \                \
         [GRP0:0]    [...]    [GRP0:7]      [GRP0:8]
         active      idle       idle          idle
         /   |   \                |            |
     CPU 0  CPU 1  ...         CPU 63        CPU 64
    active  active              idle         offline

7) CPU 63 resumes after its #VMEXIT and sees the new GRP1:0 parent.
   Therefore it propagates the stale inactive state of GRP1:0 up to
   GRP2:0.

                             [GRP2:0]
                              idle
                           /          \
                          /            \
                   [GRP1:0]           [GRP1:1]
                   idle                 idle
                  /    |    \                \
                 /     |     \                \
         [GRP0:0]    [...]    [GRP0:7]      [GRP0:8]
         active      idle       idle          idle
         /   |   \                |            |
     CPU 0  CPU 1  ...         CPU 63        CPU 64
    active  active              idle         offline

8) CPU 1 resumes after its #VMEXIT and finally activates GRP1:0. But it
   doesn't observe its parent link, because no ordering enforced its
   visibility. As a result, GRP2:0 is spuriously left idle.

                             [GRP2:0]
                              idle
                           /          \
                          /            \
                   [GRP1:0]           [GRP1:1]
                   active                 idle
                  /    |    \                \
                 /     |     \                \
         [GRP0:0]    [...]    [GRP0:7]      [GRP0:8]
         active      idle       idle          idle
         /   |   \                |            |
     CPU 0  CPU 1  ...         CPU 63        CPU 64
    active  active              idle         offline

Such a race is highly theoretical, and the problem would solve itself
once the old root becomes idle again. But it still leaves a taste of
discomfort.

Fix it by enforcing a fully ordered atomic read of the old root's state
before propagating the active state up to the new root. This has an
ordering effect in two directions:

* Acquire + release of the latest old root state: If the hotplug control
  CPU is not the one that woke up the old root, make sure to acquire its
  active state and propagate it upwards through the ordered chain of
  activation (the acquire pairs with the cmpxchg() in tmigr_active_up()
  and subsequent releases will pair with atomic_read_acquire() and
  smp_mb__after_atomic() in tmigr_inactive_up()).

* Release: If the hotplug control CPU is not the one that must wake up
  the old root, but the CPU in charge of that duty is lagging behind,
  publish the links from the old root to the new parents. This way the
  lagging CPU will propagate the active state itself.

Fixes: 7ee988770326 ("timers: Implement the hierarchical pull model")
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/time/timer_migration.c | 40 +++++++++++++++++++++++++----------
 1 file changed, 29 insertions(+), 11 deletions(-)

diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 155eeaea4113..1d0d3a4058d5 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -1860,19 +1860,37 @@ static int tmigr_setup_groups(unsigned int cpu, unsigned int node,
 		 *   child to the new parents. So tmigr_active_up() activates the
 		 *   new parents while walking up from the old root to the new.
 		 *
-		 * * It is ensured that @start is active, as this setup path is
-		 *   executed in hotplug prepare callback. This is executed by an
-		 *   already connected and !idle CPU. Even if all other CPUs go idle,
-		 *   the CPU executing the setup will be responsible up to current top
-		 *   level group. And the next time it goes inactive, it will release
-		 *   the new childmask and parent to subsequent walkers through this
-		 *   @child. Therefore propagate active state unconditionally.
+		 * * It is ensured that @start is active (or on the way to being activated
+		 *   by another CPU that woke up before the current one), as this setup path
+		 *   is executed in the hotplug prepare callback. This is executed by an already
+		 *   connected and !idle CPU in the hierarchy.
+		 *
+		 * * The below RmW atomic operation ensures that:
+		 *
+		 *   1) If the old root has been completely activated, the latest state is
+		 *      acquired (the below implicit acquire pairs with the implicit release
+		 *      from cmpxchg() in tmigr_active_up()).
+		 *
+		 *   2) If the old root is still on the way to being activated, the lagging
+		 *      CPU performing the activation will acquire the links up to the new root
+		 *      (the below implicit release pairs with the implicit acquire from cmpxchg()
+		 *      in tmigr_active_up()).
+		 *
+		 *   3) Every subsequent CPU below the old root will acquire the new links while
+		 *      walking through the old root (the below implicit release pairs with the
+		 *      implicit acquire from cmpxchg() in either tmigr_active_up() or
+		 *      tmigr_inactive_up()).
 		 */
-		state.state = atomic_read(&start->migr_state);
-		WARN_ON_ONCE(!state.active);
+		state.state = atomic_fetch_or(0, &start->migr_state);
 		WARN_ON_ONCE(!start->parent);
-		data.childmask = start->groupmask;
-		__walk_groups_from(tmigr_active_up, &data, start, start->parent);
+		/*
+		 * If the state of the old root is inactive, another CPU is on its way to activate
+		 * it and propagate to the new root.
+		 */
+		if (state.active) {
+			data.childmask = start->groupmask;
+			__walk_groups_from(tmigr_active_up, &data, start, start->parent);
+		}
 	}
 
 	/* Root update */
-- 
2.53.0



* [PATCH 2/6] timers/migration: Abstract out hierarchy to prepare for CPU capacity awareness
  2026-04-23 16:53 [PATCH 0/6] timers/migration: Handle heterogeneous CPU capacities Frederic Weisbecker
  2026-04-23 16:53 ` [PATCH 1/6] timers/migration: Fix another hotplug activation race Frederic Weisbecker
@ 2026-04-23 16:53 ` Frederic Weisbecker
  2026-04-23 16:53 ` [PATCH 3/6] timers/migration: Track CPUs in a hierarchy Frederic Weisbecker
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Frederic Weisbecker @ 2026-04-23 16:53 UTC (permalink / raw)
  To: LKML; +Cc: Frederic Weisbecker, Anna-Maria Behnsen, Sehee Jeong,
	Thomas Gleixner

In order to prepare for separating CPUs of different capacities into
distinct hierarchies, create a hierarchy structure that the group setup
code must rely upon.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/time/timer_migration.c | 100 +++++++++++++++++++++-------------
 kernel/time/timer_migration.h |  10 ++++
 2 files changed, 72 insertions(+), 38 deletions(-)

diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 1d0d3a4058d5..52e97b880b1c 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -102,7 +102,7 @@
  * active CPU/group information atomic_try_cmpxchg() is used instead and only
  * the per CPU tmigr_cpu->lock is held.
  *
- * During the setup of groups tmigr_level_list is required. It is protected by
+ * During the setup of groups, hier->level_list is required. It is protected by
  * @tmigr_mutex.
  *
  * When @timer_base->lock as well as tmigr related locks are required, the lock
@@ -416,13 +416,12 @@
  */
 
 static DEFINE_MUTEX(tmigr_mutex);
-static struct list_head *tmigr_level_list __read_mostly;
+
+static struct tmigr_hierarchy *hierarchy;
 
 static unsigned int tmigr_hierarchy_levels __read_mostly;
 static unsigned int tmigr_crossnode_level __read_mostly;
 
-static struct tmigr_group *tmigr_root;
-
 static DEFINE_PER_CPU(struct tmigr_cpu, tmigr_cpu);
 
 /*
@@ -1653,14 +1652,15 @@ static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
 	group->groupevt.ignore = true;
 }
 
-static struct tmigr_group *tmigr_get_group(int node, unsigned int lvl)
+static struct tmigr_group *tmigr_get_group(struct tmigr_hierarchy *hier,
+					   int node, unsigned int lvl)
 {
 	struct tmigr_group *tmp, *group = NULL;
 
 	lockdep_assert_held(&tmigr_mutex);
 
 	/* Try to attach to an existing group first */
-	list_for_each_entry(tmp, &tmigr_level_list[lvl], list) {
+	list_for_each_entry(tmp, &hier->level_list[lvl], list) {
 		/*
 		 * If @lvl is below the cross NUMA node level, check whether
 		 * this group belongs to the same NUMA node.
@@ -1694,14 +1694,15 @@ static struct tmigr_group *tmigr_get_group(int node, unsigned int lvl)
 	tmigr_init_group(group, lvl, node);
 
 	/* Setup successful. Add it to the hierarchy */
-	list_add(&group->list, &tmigr_level_list[lvl]);
+	list_add(&group->list, &hier->level_list[lvl]);
 	trace_tmigr_group_set(group);
 	return group;
 }
 
-static bool tmigr_init_root(struct tmigr_group *group, bool activate)
+static bool tmigr_init_root(struct tmigr_hierarchy *hier,
+			    struct tmigr_group *group, bool activate)
 {
-	if (!group->parent && group != tmigr_root) {
+	if (!group->parent && group != hier->root) {
 		/*
 		 * This is the new top-level, prepare its groupmask in advance
 		 * to avoid accidents where yet another new top-level is
@@ -1717,11 +1718,12 @@ static bool tmigr_init_root(struct tmigr_group *group, bool activate)
 
 }
 
-static void tmigr_connect_child_parent(struct tmigr_group *child,
+static void tmigr_connect_child_parent(struct tmigr_hierarchy *hier,
+				       struct tmigr_group *child,
 				       struct tmigr_group *parent,
 				       bool activate)
 {
-	if (tmigr_init_root(parent, activate)) {
+	if (tmigr_init_root(hier, parent, activate)) {
 		/*
 		 * The previous top level had prepared its groupmask already,
 		 * simply account it in advance as the first child. If some groups
@@ -1757,10 +1759,10 @@ static void tmigr_connect_child_parent(struct tmigr_group *child,
 	trace_tmigr_connect_child_parent(child);
 }
 
-static int tmigr_setup_groups(unsigned int cpu, unsigned int node,
-			      struct tmigr_group *start, bool activate)
+static int tmigr_setup_groups(struct tmigr_hierarchy *hier, unsigned int cpu,
+			      unsigned int node, struct tmigr_group *start, bool activate)
 {
-	struct tmigr_group *group, *child, **stack;
+	struct tmigr_group *root = hier->root, *group, *child, **stack;
 	int i, top = 0, err = 0, start_lvl = 0;
 	bool root_mismatch = false;
 
@@ -1773,11 +1775,11 @@ static int tmigr_setup_groups(unsigned int cpu, unsigned int node,
 		start_lvl = start->level + 1;
 	}
 
-	if (tmigr_root)
-		root_mismatch = tmigr_root->numa_node != node;
+	if (root)
+		root_mismatch = root->numa_node != node;
 
 	for (i = start_lvl; i < tmigr_hierarchy_levels; i++) {
-		group = tmigr_get_group(node, i);
+		group = tmigr_get_group(hier, node, i);
 		if (IS_ERR(group)) {
 			err = PTR_ERR(group);
 			i--;
@@ -1799,7 +1801,7 @@ static int tmigr_setup_groups(unsigned int cpu, unsigned int node,
 		if (group->parent)
 			break;
 		if ((!root_mismatch || i >= tmigr_crossnode_level) &&
-		    list_is_singular(&tmigr_level_list[i]))
+		    list_is_singular(&hier->level_list[i]))
 			break;
 	}
 
@@ -1827,7 +1829,7 @@ static int tmigr_setup_groups(unsigned int cpu, unsigned int node,
 			tmc->tmgroup = group;
 			tmc->groupmask = BIT(group->num_children++);
 
-			tmigr_init_root(group, activate);
+			tmigr_init_root(hier, group, activate);
 
 			trace_tmigr_connect_cpu_parent(tmc);
 
@@ -1835,7 +1837,7 @@ static int tmigr_setup_groups(unsigned int cpu, unsigned int node,
 			continue;
 		} else {
 			child = stack[i - 1];
-			tmigr_connect_child_parent(child, group, activate);
+			tmigr_connect_child_parent(hier, child, group, activate);
 		}
 	}
 
@@ -1894,15 +1896,15 @@ static int tmigr_setup_groups(unsigned int cpu, unsigned int node,
 	}
 
 	/* Root update */
-	if (list_is_singular(&tmigr_level_list[top])) {
-		group = list_first_entry(&tmigr_level_list[top],
+	if (list_is_singular(&hier->level_list[top])) {
+		group = list_first_entry(&hier->level_list[top],
 					 typeof(*group), list);
 		WARN_ON_ONCE(group->parent);
-		if (tmigr_root) {
+		if (root) {
 			/* Old root should be the same or below */
-			WARN_ON_ONCE(tmigr_root->level > top);
+			WARN_ON_ONCE(root->level > top);
 		}
-		tmigr_root = group;
+		hier->root = group;
 	}
 out:
 	kfree(stack);
@@ -1910,18 +1912,48 @@ static int tmigr_setup_groups(unsigned int cpu, unsigned int node,
 	return err;
 }
 
+static struct tmigr_hierarchy *tmigr_get_hierarchy(void)
+{
+	if (hierarchy)
+		return hierarchy;
+
+	hierarchy = kzalloc(sizeof(*hierarchy), GFP_KERNEL);
+	if (!hierarchy)
+		return ERR_PTR(-ENOMEM);
+
+	hierarchy->level_list = kzalloc_objs(struct list_head,
+					     tmigr_hierarchy_levels);
+	if (!hierarchy->level_list) {
+		kfree(hierarchy);
+		hierarchy = NULL;
+		return ERR_PTR(-ENOMEM);
+	}
+
+	for (int i = 0; i < tmigr_hierarchy_levels; i++)
+		INIT_LIST_HEAD(&hierarchy->level_list[i]);
+
+	return hierarchy;
+}
+
 static int tmigr_add_cpu(unsigned int cpu)
 {
-	struct tmigr_group *old_root = tmigr_root;
+	struct tmigr_hierarchy *hier;
+	struct tmigr_group *old_root;
 	int node = cpu_to_node(cpu);
 	int ret;
 
 	guard(mutex)(&tmigr_mutex);
 
-	ret = tmigr_setup_groups(cpu, node, NULL, false);
+	hier = tmigr_get_hierarchy();
+	if (IS_ERR(hier))
+		return PTR_ERR(hier);
+
+	old_root = hier->root;
+
+	ret = tmigr_setup_groups(hier, cpu, node, NULL, false);
 
 	/* Root has changed? Connect the old one to the new */
-	if (ret >= 0 && old_root && old_root != tmigr_root) {
+	if (ret >= 0 && old_root && old_root != hier->root) {
 		/*
 		 * The target CPU must never do the prepare work, except
 		 * on early boot when the boot CPU is the target. Otherwise
@@ -1935,7 +1967,7 @@ static int tmigr_add_cpu(unsigned int cpu)
 		 * otherwise the old root may not be active as expected.
 		 */
 		WARN_ON_ONCE(!per_cpu_ptr(&tmigr_cpu, raw_smp_processor_id())->available);
-		ret = tmigr_setup_groups(-1, old_root->numa_node, old_root, true);
+		ret = tmigr_setup_groups(hier, -1, old_root->numa_node, old_root, true);
 	}
 
 	return ret;
@@ -1970,7 +2002,7 @@ static int tmigr_cpu_prepare(unsigned int cpu)
 
 static int __init tmigr_init(void)
 {
-	unsigned int cpulvl, nodelvl, cpus_per_node, i;
+	unsigned int cpulvl, nodelvl, cpus_per_node;
 	unsigned int nnodes = num_possible_nodes();
 	unsigned int ncpus = num_possible_cpus();
 	int ret = -ENOMEM;
@@ -2017,14 +2049,6 @@ static int __init tmigr_init(void)
 	 */
 	tmigr_crossnode_level = cpulvl;
 
-	tmigr_level_list = kzalloc_objs(struct list_head,
-					tmigr_hierarchy_levels);
-	if (!tmigr_level_list)
-		goto err;
-
-	for (i = 0; i < tmigr_hierarchy_levels; i++)
-		INIT_LIST_HEAD(&tmigr_level_list[i]);
-
 	pr_info("Timer migration: %d hierarchy levels; %d children per group;"
 		" %d crossnode level\n",
 		tmigr_hierarchy_levels, TMIGR_CHILDREN_PER_GROUP,
diff --git a/kernel/time/timer_migration.h b/kernel/time/timer_migration.h
index 70879cde6fdd..77df422e5f9a 100644
--- a/kernel/time/timer_migration.h
+++ b/kernel/time/timer_migration.h
@@ -5,6 +5,16 @@
 /* Per group capacity. Must be a power of 2! */
 #define TMIGR_CHILDREN_PER_GROUP 8
 
+/**
+ * struct tmigr_hierarchy - a hierarchy associated to a given CPU capacity.
+ * @level_list:	Per level lists of tmigr groups
+ * @root:	The current root of the hierarchy
+ */
+struct tmigr_hierarchy {
+	struct list_head	*level_list;
+	struct tmigr_group	*root;
+};
+
 /**
  * struct tmigr_event - a timer event associated to a CPU
  * @nextevt:	The node to enqueue an event in the parent group queue
-- 
2.53.0



* [PATCH 3/6] timers/migration: Track CPUs in a hierarchy
  2026-04-23 16:53 [PATCH 0/6] timers/migration: Handle heterogeneous CPU capacities Frederic Weisbecker
  2026-04-23 16:53 ` [PATCH 1/6] timers/migration: Fix another hotplug activation race Frederic Weisbecker
  2026-04-23 16:53 ` [PATCH 2/6] timers/migration: Abstract out hierarchy to prepare for CPU capacity awareness Frederic Weisbecker
@ 2026-04-23 16:53 ` Frederic Weisbecker
  2026-04-23 16:53 ` [PATCH 4/6] timers/migration: Split per-capacity hierarchies Frederic Weisbecker
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Frederic Weisbecker @ 2026-04-23 16:53 UTC (permalink / raw)
  To: LKML; +Cc: Frederic Weisbecker, Anna-Maria Behnsen, Sehee Jeong,
	Thomas Gleixner

When a new root is created, the old root is connected to it and
propagates its own state upwards, assumed to be active since the hotplug
control CPU is itself active and part of the old root.

However with per-capacity hierarchies, this assumption won't be true
anymore because the hotplug control CPU calling the timer migration
prepare callback may not belong to the same hierarchy as the booting
CPU.

To solve this, track the available CPUs per hierarchy so that the root
connection can be offloaded to safe CPUs.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/time/timer_migration.c | 25 +++++++++++++++++++------
 kernel/time/timer_migration.h |  2 ++
 2 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 52e97b880b1c..d0de9f64528e 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -1921,18 +1921,25 @@ static struct tmigr_hierarchy *tmigr_get_hierarchy(void)
 	if (!hierarchy)
 		return ERR_PTR(-ENOMEM);
 
+	hierarchy->cpumask = kzalloc(cpumask_size(), GFP_KERNEL);
+	if (!hierarchy->cpumask)
+		goto err;
+
 	hierarchy->level_list = kzalloc_objs(struct list_head,
 					     tmigr_hierarchy_levels);
-	if (!hierarchy->level_list) {
-		kfree(hierarchy);
-		hierarchy = NULL;
-		return ERR_PTR(-ENOMEM);
-	}
+	if (!hierarchy->level_list)
+		goto err;
 
 	for (int i = 0; i < tmigr_hierarchy_levels; i++)
 		INIT_LIST_HEAD(&hierarchy->level_list[i]);
 
 	return hierarchy;
+err:
+	kfree(hierarchy->cpumask);
+	kfree(hierarchy);
+	hierarchy = NULL;
+
+	return ERR_PTR(-ENOMEM);
 }
 
 static int tmigr_add_cpu(unsigned int cpu)
@@ -1952,8 +1959,11 @@ static int tmigr_add_cpu(unsigned int cpu)
 
 	ret = tmigr_setup_groups(hier, cpu, node, NULL, false);
 
+	if (ret < 0)
+		return ret;
+
 	/* Root has changed? Connect the old one to the new */
-	if (ret >= 0 && old_root && old_root != hier->root) {
+	if (old_root && old_root != hier->root) {
 		/*
 		 * The target CPU must never do the prepare work, except
 		 * on early boot when the boot CPU is the target. Otherwise
@@ -1970,6 +1980,9 @@ static int tmigr_add_cpu(unsigned int cpu)
 		ret = tmigr_setup_groups(hier, -1, old_root->numa_node, old_root, true);
 	}
 
+	if (ret >= 0)
+		cpumask_set_cpu(cpu, hier->cpumask);
+
 	return ret;
 }
 
diff --git a/kernel/time/timer_migration.h b/kernel/time/timer_migration.h
index 77df422e5f9a..0cfbb8d799a6 100644
--- a/kernel/time/timer_migration.h
+++ b/kernel/time/timer_migration.h
@@ -8,10 +8,12 @@
 /**
  * struct tmigr_hierarchy - a hierarchy associated to a given CPU capacity.
  * @level_list:	Per level lists of tmigr groups
+ * @cpumask:	CPUs belonging to this hierarchy
  * @root:	The current root of the hierarchy
  */
 struct tmigr_hierarchy {
 	struct list_head	*level_list;
+	struct cpumask		*cpumask;
 	struct tmigr_group	*root;
 };
 
-- 
2.53.0



* [PATCH 4/6] timers/migration: Split per-capacity hierarchies
  2026-04-23 16:53 [PATCH 0/6] timers/migration: Handle heterogeneous CPU capacities Frederic Weisbecker
                   ` (2 preceding siblings ...)
  2026-04-23 16:53 ` [PATCH 3/6] timers/migration: Track CPUs in a hierarchy Frederic Weisbecker
@ 2026-04-23 16:53 ` Frederic Weisbecker
  2026-04-23 16:53 ` [PATCH 5/6] timers/migration: Handle capacity in connect tracepoints Frederic Weisbecker
  2026-04-23 16:53 ` [PATCH 6/6] scripts/timers: Add timer_migration_tree.py Frederic Weisbecker
  5 siblings, 0 replies; 7+ messages in thread
From: Frederic Weisbecker @ 2026-04-23 16:53 UTC (permalink / raw)
  To: LKML; +Cc: Frederic Weisbecker, Anna-Maria Behnsen, Sehee Jeong,
	Thomas Gleixner

Power issues have been reported on systems with heterogeneous CPU
capacities, such as big.LITTLE, since the introduction of the new timer
migration code.

Timers migrate from small capacity CPUs to big ones, degrading the big
CPUs' idle target residency and thus increasing overall power
consumption.

Solve this by splitting hierarchies per CPU capacity. For example on a
big.LITTLE machine, split the single hierarchy in two: one for big
capacity CPUs and another one for little capacity CPUs. This way global
timers only migrate across CPUs of the same capacity.

For simplicity, split hierarchies keep the same number of possible
levels as if there were a single hierarchy, even though the CPUs are
distributed between multiple hierarchies. This could be a problem on
NUMA systems with heterogeneous CPU capacities (provided such a thing
ever exists), where useless intermediate nodes may be created. Solving
this properly would imply knowing in advance at boot how many capacities
are available and the number of CPUs for each of them.

Reported-by: Sehee Jeong <sehee1.jeong@samsung.com>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 kernel/time/timer_migration.c | 125 +++++++++++++++++++++++++---------
 kernel/time/timer_migration.h |   7 ++
 2 files changed, 101 insertions(+), 31 deletions(-)

diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index d0de9f64528e..0a8c893353a2 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -417,7 +417,7 @@
 
 static DEFINE_MUTEX(tmigr_mutex);
 
-static struct tmigr_hierarchy *hierarchy;
+static LIST_HEAD(tmigr_hierarchy_list);
 
 static unsigned int tmigr_hierarchy_levels __read_mostly;
 static unsigned int tmigr_crossnode_level __read_mostly;
@@ -1893,6 +1893,12 @@ static int tmigr_setup_groups(struct tmigr_hierarchy *hier, unsigned int cpu,
 			data.childmask = start->groupmask;
 			__walk_groups_from(tmigr_active_up, &data, start, start->parent);
 		}
+	} else if (start) {
+		union tmigr_state state;
+
+		/* Remote activation assumes the whole target's hierarchy is inactive */
+		state.state = atomic_read(&start->migr_state);
+		WARN_ON_ONCE(state.active);
 	}
 
 	/* Root update */
@@ -1912,36 +1918,80 @@ static int tmigr_setup_groups(struct tmigr_hierarchy *hier, unsigned int cpu,
 	return err;
 }
 
-static struct tmigr_hierarchy *tmigr_get_hierarchy(void)
+static struct tmigr_hierarchy *tmigr_get_hierarchy(unsigned int capacity)
 {
-	if (hierarchy)
-		return hierarchy;
+	struct tmigr_hierarchy *hier = NULL, *iter;
 
-	hierarchy = kzalloc(sizeof(*hierarchy), GFP_KERNEL);
-	if (!hierarchy)
+	list_for_each_entry(iter, &tmigr_hierarchy_list, node) {
+		if (iter->capacity == capacity)
+			hier = iter;
+	}
+
+	if (hier)
+		return hier;
+
+	hier = kzalloc(sizeof(*hier), GFP_KERNEL);
+	if (!hier)
 		return ERR_PTR(-ENOMEM);
 
-	hierarchy->cpumask = kzalloc(cpumask_size(), GFP_KERNEL);
-	if (!hierarchy->cpumask)
+	hier->cpumask = kzalloc(cpumask_size(), GFP_KERNEL);
+	if (!hier->cpumask)
 		goto err;
 
-	hierarchy->level_list = kzalloc_objs(struct list_head,
-					     tmigr_hierarchy_levels);
-	if (!hierarchy->level_list)
+	hier->level_list = kzalloc_objs(struct list_head,
+					tmigr_hierarchy_levels);
+	if (!hier->level_list)
 		goto err;
 
 	for (int i = 0; i < tmigr_hierarchy_levels; i++)
-		INIT_LIST_HEAD(&hierarchy->level_list[i]);
+		INIT_LIST_HEAD(&hier->level_list[i]);
 
-	return hierarchy;
+	hier->capacity = capacity;
+	list_add_tail(&hier->node, &tmigr_hierarchy_list);
+
+	return hier;
 err:
-	kfree(hierarchy->cpumask);
-	kfree(hierarchy);
-	hierarchy = NULL;
+	kfree(hier->cpumask);
+	kfree(hier);
 
 	return ERR_PTR(-ENOMEM);
 }
 
+static int tmigr_connect_old_root(struct tmigr_hierarchy *hier, int cpu,
+				  struct tmigr_group *old_root,	bool activate)
+{
+	/*
+	 * The target CPU must never do the prepare work, except
+	 * on early boot when the boot CPU is the target. Otherwise
+	 * it may spuriously activate the old top level group inside
+	 * the new one (regardless of whether the old top level group is
+	 * active or not) and/or release an uninitialized childmask.
+	 */
+	WARN_ON_ONCE(cpu == smp_processor_id());
+	if (activate) {
+		/*
+		 * The current CPU is expected to be online in the hierarchy,
+		 * otherwise the old root may not be active as expected.
+		 */
+		WARN_ON_ONCE(!__this_cpu_read(tmigr_cpu.available));
+	}
+
+	return tmigr_setup_groups(hier, -1, old_root->numa_node, old_root, activate);
+}
+
+static long connect_old_root_work(void *arg)
+{
+	struct tmigr_group *old_root = arg;
+	struct tmigr_hierarchy *hier;
+	int cpu = smp_processor_id();
+
+	hier = tmigr_get_hierarchy(arch_scale_cpu_capacity(cpu));
+	if (IS_ERR(hier))
+		return PTR_ERR(hier);
+
+	return tmigr_connect_old_root(hier, cpu, old_root, true);
+}
+
 static int tmigr_add_cpu(unsigned int cpu)
 {
 	struct tmigr_hierarchy *hier;
@@ -1951,7 +2001,7 @@ static int tmigr_add_cpu(unsigned int cpu)
 
 	guard(mutex)(&tmigr_mutex);
 
-	hier = tmigr_get_hierarchy();
+	hier = tmigr_get_hierarchy(arch_scale_cpu_capacity(cpu));
 	if (IS_ERR(hier))
 		return PTR_ERR(hier);
 
@@ -1964,20 +2014,33 @@ static int tmigr_add_cpu(unsigned int cpu)
 
 	/* Root has changed? Connect the old one to the new */
 	if (old_root && old_root != hier->root) {
-		/*
-		 * The target CPU must never do the prepare work, except
-		 * on early boot when the boot CPU is the target. Otherwise
-		 * it may spuriously activate the old top level group inside
-		 * the new one (nevertheless whether old top level group is
-		 * active or not) and/or release an uninitialized childmask.
-		 */
-		WARN_ON_ONCE(cpu == raw_smp_processor_id());
-		/*
-		 * The (likely) current CPU is expected to be online in the hierarchy,
-		 * otherwise the old root may not be active as expected.
-		 */
-		WARN_ON_ONCE(!per_cpu_ptr(&tmigr_cpu, raw_smp_processor_id())->available);
-		ret = tmigr_setup_groups(hier, -1, old_root->numa_node, old_root, true);
+		guard(migrate)();
+
+		if (cpumask_test_cpu(smp_processor_id(), hier->cpumask)) {
+			/*
+			 * If the target belongs to the same hierarchy, the old root is expected
+			 * to be active. Link and propagate to the new root.
+			 */
+			ret = tmigr_connect_old_root(hier, cpu, old_root, true);
+		} else {
+			int target = cpumask_first_and(hier->cpumask, tmigr_available_cpumask);
+
+			if (target < nr_cpu_ids) {
+				/*
+				 * If the target doesn't belong to the same hierarchy as the current
+				 * CPU, activate from a relevant one to make sure the old root is
+				 * active.
+				 */
+				ret = work_on_cpu(target, connect_old_root_work, old_root);
+			} else {
+				/*
+				 * No other available CPUs in the remote hierarchy. Link the
+				 * old root remotely but don't propagate activation since the
+				 * old root is not expected to be active.
+				 */
+				ret = tmigr_connect_old_root(hier, cpu, old_root, false);
+			}
+		}
 	}
 
 	if (ret >= 0)
diff --git a/kernel/time/timer_migration.h b/kernel/time/timer_migration.h
index 0cfbb8d799a6..291bfb6adfc3 100644
--- a/kernel/time/timer_migration.h
+++ b/kernel/time/timer_migration.h
@@ -7,14 +7,21 @@
 
 /**
  * struct tmigr_hierarchy - a hierarchy associated to a given CPU capacity.
+ *                          Homogeneous systems have only one hierarchy.
+ *                          Heterogeneous systems have one hierarchy per CPU capacity.
  * @level_list:	Per level lists of tmigr groups
  * @cpumask:	CPUs belonging to this hierarchy
  * @root:	The current root of the hierarchy
+ * @capacity:	CPU capacity associated with this hierarchy
+ * @node:	Node in the global hierarchy list
  */
 struct tmigr_hierarchy {
 	struct list_head	*level_list;
 	struct cpumask		*cpumask;
 	struct tmigr_group	*root;
+	unsigned long		capacity;
+	struct list_head	node;
 };
 
 /**
-- 
2.53.0



* [PATCH 5/6] timers/migration: Handle capacity in connect tracepoints
  2026-04-23 16:53 [PATCH 0/6] timers/migration: Handle heterogenous CPU capacities Frederic Weisbecker
                   ` (3 preceding siblings ...)
  2026-04-23 16:53 ` [PATCH 4/6] timers/migration: Split per-capacity hierarchies Frederic Weisbecker
@ 2026-04-23 16:53 ` Frederic Weisbecker
  2026-04-23 16:53 ` [PATCH 6/6] scripts/timers: Add timer_migration_tree.py Frederic Weisbecker
  5 siblings, 0 replies; 7+ messages in thread
From: Frederic Weisbecker @ 2026-04-23 16:53 UTC (permalink / raw)
  To: LKML; +Cc: Frederic Weisbecker, Anna-Maria Behnsen, Sehee Jeong,
	Thomas Gleixner

This lets tracers know which hierarchy a CPU belongs to.
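For illustration, here is a minimal Python sketch of how the updated
TP_printk format lays out a connect event line with the new capacity
field. All field values below are made up, and the pointer formatting is
only an approximation of what the kernel emits:

```python
# Sketch of a tmigr_connect_cpu_parent trace line after this change,
# mirroring the TP_printk format string; the values are hypothetical.
fields = {
    "cpu": 3,
    "groupmask": 0x08,
    "parent": 0xdcebac8b,
    "lvl": 0,
    "numa": 0,
    "capacity": 1024,
    "num_children": 8,
}

# Lay the fields out in the same order as the tracepoint's format string.
line = ("cpu={cpu} groupmask={groupmask:x} parent={parent:016x} lvl={lvl} "
        "numa={numa} capacity={capacity} num_children={num_children}").format(**fields)
print(line)
```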

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/trace/events/timer_migration.h | 24 ++++++++++++++----------
 kernel/time/timer_migration.c          |  4 ++--
 2 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/include/trace/events/timer_migration.h b/include/trace/events/timer_migration.h
index 61171b13c687..0b135e9301b1 100644
--- a/include/trace/events/timer_migration.h
+++ b/include/trace/events/timer_migration.h
@@ -33,15 +33,16 @@ TRACE_EVENT(tmigr_group_set,
 
 TRACE_EVENT(tmigr_connect_child_parent,
 
-	TP_PROTO(struct tmigr_group *child),
+	TP_PROTO(struct tmigr_hierarchy *hier, struct tmigr_group *child),
 
-	TP_ARGS(child),
+	TP_ARGS(hier, child),
 
 	TP_STRUCT__entry(
 		__field( void *,	child		)
 		__field( void *,	parent		)
 		__field( unsigned int,	lvl		)
 		__field( unsigned int,	numa_node	)
+		__field( unsigned int,	capacity	)
 		__field( unsigned int,	num_children	)
 		__field( u32,		groupmask	)
 	),
@@ -51,26 +52,28 @@ TRACE_EVENT(tmigr_connect_child_parent,
 		__entry->parent		= child->parent;
 		__entry->lvl		= child->parent->level;
 		__entry->numa_node	= child->parent->numa_node;
+		__entry->capacity	= hier->capacity;
 		__entry->num_children	= child->parent->num_children;
 		__entry->groupmask	= child->groupmask;
 	),
 
-	TP_printk("group=%p groupmask=%0x parent=%p lvl=%d numa=%d num_children=%d",
-		  __entry->child,  __entry->groupmask, __entry->parent,
-		  __entry->lvl, __entry->numa_node, __entry->num_children)
+	TP_printk("group=%p groupmask=%0x parent=%p lvl=%d numa=%d capacity=%d num_children=%d",
+		  __entry->child,  __entry->groupmask, __entry->parent, __entry->lvl,
+		  __entry->numa_node, __entry->capacity, __entry->num_children)
 );
 
 TRACE_EVENT(tmigr_connect_cpu_parent,
 
-	TP_PROTO(struct tmigr_cpu *tmc),
+	TP_PROTO(struct tmigr_hierarchy *hier, struct tmigr_cpu *tmc),
 
-	TP_ARGS(tmc),
+	TP_ARGS(hier, tmc),
 
 	TP_STRUCT__entry(
 		__field( void *,	parent		)
 		__field( unsigned int,	cpu		)
 		__field( unsigned int,	lvl		)
 		__field( unsigned int,	numa_node	)
+		__field( unsigned int,	capacity	)
 		__field( unsigned int,	num_children	)
 		__field( u32,		groupmask	)
 	),
@@ -80,13 +83,14 @@ TRACE_EVENT(tmigr_connect_cpu_parent,
 		__entry->cpu		= tmc->cpuevt.cpu;
 		__entry->lvl		= tmc->tmgroup->level;
 		__entry->numa_node	= tmc->tmgroup->numa_node;
+		__entry->capacity	= hier->capacity;
 		__entry->num_children	= tmc->tmgroup->num_children;
 		__entry->groupmask	= tmc->groupmask;
 	),
 
-	TP_printk("cpu=%d groupmask=%0x parent=%p lvl=%d numa=%d num_children=%d",
-		  __entry->cpu,	 __entry->groupmask, __entry->parent,
-		  __entry->lvl, __entry->numa_node, __entry->num_children)
+	TP_printk("cpu=%d groupmask=%0x parent=%p lvl=%d numa=%d capacity=%d num_children=%d",
+		  __entry->cpu,	 __entry->groupmask, __entry->parent, __entry->lvl,
+		  __entry->numa_node, __entry->capacity, __entry->num_children)
 );
 
 DECLARE_EVENT_CLASS(tmigr_group_and_cpu,
diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
index 0a8c893353a2..ec3ff80f795c 100644
--- a/kernel/time/timer_migration.c
+++ b/kernel/time/timer_migration.c
@@ -1756,7 +1756,7 @@ static void tmigr_connect_child_parent(struct tmigr_hierarchy *hier,
 	 */
 	smp_store_release(&child->parent, parent);
 
-	trace_tmigr_connect_child_parent(child);
+	trace_tmigr_connect_child_parent(hier, child);
 }
 
 static int tmigr_setup_groups(struct tmigr_hierarchy *hier, unsigned int cpu,
@@ -1831,7 +1831,7 @@ static int tmigr_setup_groups(struct tmigr_hierarchy *hier, unsigned int cpu,
 
 			tmigr_init_root(hier, group, activate);
 
-			trace_tmigr_connect_cpu_parent(tmc);
+			trace_tmigr_connect_cpu_parent(hier, tmc);
 
 			/* There are no children that need to be connected */
 			continue;
-- 
2.53.0



* [PATCH 6/6] scripts/timers: Add timer_migration_tree.py
  2026-04-23 16:53 [PATCH 0/6] timers/migration: Handle heterogenous CPU capacities Frederic Weisbecker
                   ` (4 preceding siblings ...)
  2026-04-23 16:53 ` [PATCH 5/6] timers/migration: Handle capacity in connect tracepoints Frederic Weisbecker
@ 2026-04-23 16:53 ` Frederic Weisbecker
  5 siblings, 0 replies; 7+ messages in thread
From: Frederic Weisbecker @ 2026-04-23 16:53 UTC (permalink / raw)
  To: LKML; +Cc: Frederic Weisbecker, Anna-Maria Behnsen, Sehee Jeong,
	Thomas Gleixner

Introduce a script that provides a simple ASCII representation of the
timer migration tree, built from boot-time trace events.

First boot with:

    trace_event=tmigr_connect_cpu_parent,tmigr_connect_child_parent

Then parse the result with:

    scripts/timer_migration_tree.py < /sys/kernel/tracing/trace

On a system with 8 CPUs, this produces the following output:

    Tree for capacity 1024

                                      /-0, node 0, lvl:-1
                                     |
                                     |--1, node 0, lvl:-1
                                     |
                                     |--2, node 0, lvl:-1
                                     |
                                     |--3, node 0, lvl:-1
    -- /00000000dcebac8b, node 0, lvl:0
                                     |--4, node 0, lvl:-1
                                     |
                                     |--5, node 0, lvl:-1
                                     |
                                     |--6, node 0, lvl:-1
                                     |
                                      \-7, node 0, lvl:-1
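
The script recovers each tree from the connect events by applying one
regular expression per tracepoint. A minimal standalone sketch of that
parsing step, using a hypothetical sample line shaped like the
tmigr_connect_cpu_parent output:

```python
import re

# Hypothetical trace line, shaped like tmigr_connect_cpu_parent output.
line = ("tmigr_connect_cpu_parent: cpu=3 groupmask=8 parent=00000000dcebac8b "
        "lvl=0 numa=0 capacity=1024 num_children=8")

# Same pattern the script applies to each event line.
pat = (r"tmigr_connect_cpu_parent: cpu=([0-9]+) groupmask=([0-9a-zA-Z]+) "
       r"parent=([0-9a-zA-Z]+) lvl=([0-9]+) numa=([-]?[0-9]+) "
       r"capacity=([-]?[0-9]+) num_children=([0-9]+)")
m = re.search(pat, line)
cpu, parent, capacity = int(m.group(1)), m.group(3), int(m.group(6))
print(cpu, parent, capacity)
```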

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 scripts/timer_migration_tree.py | 122 ++++++++++++++++++++++++++++++++
 1 file changed, 122 insertions(+)
 create mode 100755 scripts/timer_migration_tree.py

diff --git a/scripts/timer_migration_tree.py b/scripts/timer_migration_tree.py
new file mode 100755
index 000000000000..faac9de854bd
--- /dev/null
+++ b/scripts/timer_migration_tree.py
@@ -0,0 +1,122 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+
+"""
+Draw the timer migration tree.
+
+1) Boot with trace_event=tmigr_connect_cpu_parent,tmigr_connect_child_parent
+2) ./timer_migration_tree.py < /sys/kernel/tracing/trace
+"""
+
+import re, sys
+from ete3 import Tree
+
+class Node:
+	def __init__(self, group):
+		self.group = group
+		self.children = []
+		self.parent = None
+		self.num_children = 0
+		self.groupmask = 0
+		self.lvl = -1
+		self.numa = -1
+
+	def set_groupmask(self, groupmask):
+		self.groupmask = groupmask
+
+	def set_parent(self, parent):
+		self.parent = parent
+
+	def add_child(self, child):
+		self.children.append(child)
+
+	def set_lvl(self, lvl):
+		self.lvl = lvl
+
+	def set_numa(self, numa):
+		self.numa = numa
+
+	def set_num_children(self, num_children):
+		self.num_children = num_children
+
+	def __repr__(self):
+		if self.parent:
+			parent_grp = self.parent.group
+		else:
+			parent_grp = "-"
+		return "Group: %s mask: %s parent: %s lvl: %d numa: %d num_children: %d" % (self.group, self.groupmask, parent_grp, self.lvl, self.numa, self.num_children)
+
+hierarchies = { }
+
+def get_hierarchy(capacity):
+	if capacity not in hierarchies:
+		hierarchies[capacity] = {}
+	return hierarchies[capacity]
+
+def get_node(capacity, group):
+	hier = get_hierarchy(capacity)
+	if group in hier:
+		return hier[group]
+	else:
+		n = Node(group)
+		hier[group] = n
+		return n
+
+def tmigr_connect_cpu_parent(ts, line):
+	s = re.search("tmigr_connect_cpu_parent: cpu=([0-9]+) groupmask=([0-9a-zA-Z]+) parent=([0-9a-zA-Z]+) lvl=([0-9]+) numa=([-]?[0-9]+) capacity=([-]?[0-9]+) num_children=([0-9]+)", line)
+	if s is None:
+		return False
+	(cpu, groupmask, parent, lvl, numa, capacity, num_children) = (int(s.group(1)), s.group(2), s.group(3), int(s.group(4)), int(s.group(5)), int(s.group(6)), int(s.group(7)))
+	n = get_node(capacity, cpu)
+	p = get_node(capacity, parent)
+	n.set_parent(p)
+	n.set_groupmask(groupmask)
+	n.set_lvl(-1)
+	p.set_lvl(lvl)
+	p.set_numa(numa)
+	n.set_numa(numa)
+	p.set_num_children(num_children)
+	p.add_child(n)
+	return True
+
+def tmigr_connect_child_parent(ts, line):
+	s = re.search("tmigr_connect_child_parent: group=([0-9a-zA-Z]+) groupmask=([0-9a-zA-Z]+) parent=([0-9a-zA-Z]+) lvl=([0-9]+) numa=([-]?[0-9]+) capacity=([-]?[0-9]+) num_children=([0-9]+)", line)
+	if s is None:
+		return False
+	(group, groupmask, parent, lvl, numa, capacity, num_children) = (s.group(1), s.group(2), s.group(3), int(s.group(4)), int(s.group(5)), int(s.group(6)), int(s.group(7)))
+	n = get_node(capacity, group)
+	p = get_node(capacity, parent)
+	n.set_parent(p)
+	n.set_groupmask(groupmask)
+	p.set_lvl(lvl)
+	p.set_numa(numa)
+	p.set_num_children(num_children)
+	p.add_child(n)
+	return True
+
+def populate(enode, node):
+	enode = enode.add_child(name = node.group)
+	enode.add_feature("groupmask", "m:%s" % node.groupmask)
+	enode.add_feature("lvl", "lvl:%d" % node.lvl)
+	enode.add_feature("numa", "node %d" % node.numa)
+	enode.add_feature("num_children", "c=%d" % node.num_children)
+	for child in node.children:
+		populate(enode, child)
+
+if __name__ == "__main__":
+	for line in sys.stdin:
+		s = re.search("([0-9]+[.][0-9]{6}): (.+?)$", line, re.S)
+		if s is not None:
+			if tmigr_connect_cpu_parent(float(s.group(1)), s.group(2)):
+				continue
+			if tmigr_connect_child_parent(float(s.group(1)), s.group(2)):
+				continue
+
+	for cap in hierarchies:
+		h = hierarchies[cap]
+		print("Tree for capacity %d" % cap)
+		for k in h:
+			n = h[k]
+			while n.parent is not None:
+				n = n.parent
+			root = Tree()
+			populate(root, n)
+			print(root.get_ascii(show_internal=True, attributes=["name", "numa", "lvl"]))
+			break
-- 
2.53.0


