* [PATCH v2] sched/topology: Fix for_each_node_numadist() lockup with !CONFIG_NUMA
From: Andrea Righi @ 2025-06-09 11:35 UTC (permalink / raw)
To: Yury Norov; +Cc: Tejun Heo, David Vernet, Changwoo Min, linux-kernel
for_each_node_numadist() can lead to hard lockups on kernels built
without CONFIG_NUMA. For instance, the following was triggered by
sched_ext:
watchdog: CPU5: Watchdog detected hard LOCKUP on cpu 5
...
RIP: 0010:_find_first_and_bit+0x8/0x60
...
Call Trace:
<TASK>
cpumask_any_and_distribute+0x49/0x80
pick_idle_cpu_in_node+0xcf/0x140
scx_bpf_pick_idle_cpu_node+0xaa/0x110
bpf_prog_16ee5b1f077af006_pick_idle_cpu+0x57f/0x5de
bpf_prog_df2ce5cfac58ce09_bpfland_select_cpu+0x37/0xf4
bpf__sched_ext_ops_select_cpu+0x4b/0xb3
This happens because nearest_node_nodemask() always returns NUMA_NO_NODE
(-1) when CONFIG_NUMA is disabled, causing the loop to never terminate,
as the condition node >= MAX_NUMNODES is never satisfied.
Prevent this by providing a stub implementation based on
for_each_node_mask() when CONFIG_NUMA is disabled, which can safely
process the single available node while still honoring the unvisited
nodemask.
Fixes: f09177ca5f242 ("sched/topology: Introduce for_each_node_numadist() iterator")
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
include/linux/topology.h | 5 +++++
1 file changed, 5 insertions(+)
Changes in v2:
- Provide a stub implementation for the !CONFIG_NUMA case
- Link to v1: https://lore.kernel.org/all/20250603080402.170601-1-arighi@nvidia.com/
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 33b7fda97d390..97c4f5fc75038 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -304,12 +304,17 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
*
* Requires rcu_lock to be held.
*/
+#ifdef CONFIG_NUMA
#define for_each_node_numadist(node, unvisited) \
for (int __start = (node), \
(node) = nearest_node_nodemask((__start), &(unvisited)); \
(node) < MAX_NUMNODES; \
node_clear((node), (unvisited)), \
(node) = nearest_node_nodemask((__start), &(unvisited)))
+#else
+#define for_each_node_numadist(node, unvisited) \
+ for_each_node_mask((node), (unvisited))
+#endif
/**
* for_each_numa_hop_mask - iterate over cpumasks of increasing NUMA distance
--
2.49.0
* Re: [PATCH v2] sched/topology: Fix for_each_node_numadist() lockup with !CONFIG_NUMA
From: Yury Norov @ 2025-06-09 15:03 UTC (permalink / raw)
To: Andrea Righi; +Cc: Tejun Heo, David Vernet, Changwoo Min, linux-kernel
On Mon, Jun 09, 2025 at 01:35:36PM +0200, Andrea Righi wrote:
> for_each_node_numadist() can lead to hard lockups on kernels built
> without CONFIG_NUMA. For instance, the following was triggered by
> sched_ext:
>
> watchdog: CPU5: Watchdog detected hard LOCKUP on cpu 5
> ...
> RIP: 0010:_find_first_and_bit+0x8/0x60
> ...
> Call Trace:
> <TASK>
> cpumask_any_and_distribute+0x49/0x80
> pick_idle_cpu_in_node+0xcf/0x140
> scx_bpf_pick_idle_cpu_node+0xaa/0x110
> bpf_prog_16ee5b1f077af006_pick_idle_cpu+0x57f/0x5de
> bpf_prog_df2ce5cfac58ce09_bpfland_select_cpu+0x37/0xf4
> bpf__sched_ext_ops_select_cpu+0x4b/0xb3
>
> This happens because nearest_node_nodemask() always returns NUMA_NO_NODE
> (-1) when CONFIG_NUMA is disabled, causing the loop to never terminate,
> as the condition node >= MAX_NUMNODES is never satisfied.
>
> Prevent this by providing a stub implementation based on
> for_each_node_mask() when CONFIG_NUMA is disabled, which can safely
> process the single available node while still honoring the unvisited
> nodemask.
>
> Fixes: f09177ca5f242 ("sched/topology: Introduce for_each_node_numadist() iterator")
> Signed-off-by: Andrea Righi <arighi@nvidia.com>
> ---
> include/linux/topology.h | 5 +++++
> 1 file changed, 5 insertions(+)
>
> Changes in v2:
> - Provide a stub implementation for the !CONFIG_NUMA case
> - Link to v1: https://lore.kernel.org/all/20250603080402.170601-1-arighi@nvidia.com/
>
> diff --git a/include/linux/topology.h b/include/linux/topology.h
> index 33b7fda97d390..97c4f5fc75038 100644
> --- a/include/linux/topology.h
> +++ b/include/linux/topology.h
> @@ -304,12 +304,17 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
> *
> * Requires rcu_lock to be held.
> */
> +#ifdef CONFIG_NUMA
While there, can you expand this optimization to the MAX_NUMNODES == 1
case?
#if defined(CONFIG_NUMA) && (MAX_NUMNODES > 1)
With that:
Acked-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
Thanks,
Yury
> #define for_each_node_numadist(node, unvisited) \
> for (int __start = (node), \
> (node) = nearest_node_nodemask((__start), &(unvisited)); \
> (node) < MAX_NUMNODES; \
> node_clear((node), (unvisited)), \
> (node) = nearest_node_nodemask((__start), &(unvisited)))
> +#else
> +#define for_each_node_numadist(node, unvisited) \
> + for_each_node_mask((node), (unvisited))
> +#endif
>
> /**
> * for_each_numa_hop_mask - iterate over cpumasks of increasing NUMA distance
> --
> 2.49.0
* Re: [PATCH v2] sched/topology: Fix for_each_node_numadist() lockup with !CONFIG_NUMA
From: Andrea Righi @ 2025-06-09 15:20 UTC (permalink / raw)
To: Yury Norov; +Cc: Tejun Heo, David Vernet, Changwoo Min, linux-kernel
Hi Yury,
On Mon, Jun 09, 2025 at 11:03:29AM -0400, Yury Norov wrote:
> On Mon, Jun 09, 2025 at 01:35:36PM +0200, Andrea Righi wrote:
> > for_each_node_numadist() can lead to hard lockups on kernels built
> > without CONFIG_NUMA. For instance, the following was triggered by
> > sched_ext:
> >
> > watchdog: CPU5: Watchdog detected hard LOCKUP on cpu 5
> > ...
> > RIP: 0010:_find_first_and_bit+0x8/0x60
> > ...
> > Call Trace:
> > <TASK>
> > cpumask_any_and_distribute+0x49/0x80
> > pick_idle_cpu_in_node+0xcf/0x140
> > scx_bpf_pick_idle_cpu_node+0xaa/0x110
> > bpf_prog_16ee5b1f077af006_pick_idle_cpu+0x57f/0x5de
> > bpf_prog_df2ce5cfac58ce09_bpfland_select_cpu+0x37/0xf4
> > bpf__sched_ext_ops_select_cpu+0x4b/0xb3
> >
> > This happens because nearest_node_nodemask() always returns NUMA_NO_NODE
> > (-1) when CONFIG_NUMA is disabled, causing the loop to never terminate,
> > as the condition node >= MAX_NUMNODES is never satisfied.
> >
> > Prevent this by providing a stub implementation based on
> > for_each_node_mask() when CONFIG_NUMA is disabled, which can safely
> > process the single available node while still honoring the unvisited
> > nodemask.
> >
> > Fixes: f09177ca5f242 ("sched/topology: Introduce for_each_node_numadist() iterator")
> > Signed-off-by: Andrea Righi <arighi@nvidia.com>
> > ---
> > include/linux/topology.h | 5 +++++
> > 1 file changed, 5 insertions(+)
> >
> > Changes in v2:
> > - Provide a stub implementation for the !CONFIG_NUMA case
> > - Link to v1: https://lore.kernel.org/all/20250603080402.170601-1-arighi@nvidia.com/
> >
> > diff --git a/include/linux/topology.h b/include/linux/topology.h
> > index 33b7fda97d390..97c4f5fc75038 100644
> > --- a/include/linux/topology.h
> > +++ b/include/linux/topology.h
> > @@ -304,12 +304,17 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
> > *
> > * Requires rcu_lock to be held.
> > */
> > +#ifdef CONFIG_NUMA
>
> While there, can you expand this optimization to the MAX_NUMNODES == 1
> case?
> #if defined(CONFIG_NUMA) && (MAX_NUMNODES > 1)
Makes sense, will send a v3, thanks!
-Andrea
>
> With that:
>
> Acked-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
>
> Thanks,
> Yury
>
> > #define for_each_node_numadist(node, unvisited) \
> > for (int __start = (node), \
> > (node) = nearest_node_nodemask((__start), &(unvisited)); \
> > (node) < MAX_NUMNODES; \
> > node_clear((node), (unvisited)), \
> > (node) = nearest_node_nodemask((__start), &(unvisited)))
> > +#else
> > +#define for_each_node_numadist(node, unvisited) \
> > + for_each_node_mask((node), (unvisited))
> > +#endif
> >
> > /**
> > * for_each_numa_hop_mask - iterate over cpumasks of increasing NUMA distance
> > --
> > 2.49.0