public inbox for linux-kernel@vger.kernel.org
* Re: [PATCH 1/2] cpumask: Add for_each_cpu_from()
       [not found] ` <20240319185148.985729-2-kyle.meyer@hpe.com>
@ 2024-03-20 18:31   ` Yury Norov
  2024-03-20 20:22     ` Yury Norov
  0 siblings, 1 reply; 4+ messages in thread
From: Yury Norov @ 2024-03-20 18:31 UTC (permalink / raw)
  To: Kyle Meyer
  Cc: andriy.shevchenko, linux, mingo, peterz, juri.lelli,
	vincent.guittot, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, russ.anderson, dimitri.sivanich,
	steve.wahl

On Tue, Mar 19, 2024 at 01:51:47PM -0500, Kyle Meyer wrote:
> Add for_each_cpu_from() as a generic cpumask macro.
> 
> for_each_cpu_from() is the same as for_each_cpu(), except it starts at
> @cpu instead of zero.
> 
> Signed-off-by: Kyle Meyer <kyle.meyer@hpe.com>

Acked-by: Kyle Meyer <kyle.meyer@hpe.com>

> ---
>  include/linux/cpumask.h | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
> index 1c29947db848..655211db38ff 100644
> --- a/include/linux/cpumask.h
> +++ b/include/linux/cpumask.h
> @@ -368,6 +368,16 @@ unsigned int __pure cpumask_next_wrap(int n, const struct cpumask *mask, int sta
>  #define for_each_cpu_or(cpu, mask1, mask2)				\
>  	for_each_or_bit(cpu, cpumask_bits(mask1), cpumask_bits(mask2), small_cpumask_bits)
>  
> +/**
> + * for_each_cpu_from - iterate over every cpu present in @mask, starting at @cpu
> + * @cpu: the (optionally unsigned) integer iterator
> + * @mask: the cpumask pointer
> + *
> + * After the loop, cpu is >= nr_cpu_ids.
> + */
> +#define for_each_cpu_from(cpu, mask)				\
> +	for_each_set_bit_from(cpu, cpumask_bits(mask), small_cpumask_bits)
> +
>  /**
>   * cpumask_any_but - return a "random" in a cpumask, but not this one.
>   * @mask: the cpumask to search
> -- 
> 2.44.0

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PATCH 2/2] sched/topology: Optimize topology_span_sane()
       [not found] ` <20240319185148.985729-3-kyle.meyer@hpe.com>
@ 2024-03-20 18:32   ` Yury Norov
  0 siblings, 0 replies; 4+ messages in thread
From: Yury Norov @ 2024-03-20 18:32 UTC (permalink / raw)
  To: Kyle Meyer
  Cc: andriy.shevchenko, linux, mingo, peterz, juri.lelli,
	vincent.guittot, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, russ.anderson, dimitri.sivanich,
	steve.wahl

On Tue, Mar 19, 2024 at 01:51:48PM -0500, Kyle Meyer wrote:
> Optimize topology_span_sane() by removing duplicate comparisons.
> 
> The total number of comparisons is reduced from N * (N - 1) to
> N * (N - 1) / 2 (per non-NUMA scheduling domain level).
> 
> Signed-off-by: Kyle Meyer <kyle.meyer@hpe.com>

Reviewed-by: Yury Norov <yury.norov@gmail.com>

> ---
>  kernel/sched/topology.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 99ea5986038c..b6bcafc09969 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -2347,7 +2347,7 @@ static struct sched_domain *build_sched_domain(struct sched_domain_topology_leve
>  static bool topology_span_sane(struct sched_domain_topology_level *tl,
>  			      const struct cpumask *cpu_map, int cpu)
>  {
> -	int i;
> +	int i = cpu + 1;
>  
>  	/* NUMA levels are allowed to overlap */
>  	if (tl->flags & SDTL_OVERLAP)
> @@ -2359,9 +2359,7 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
>  	 * breaking the sched_group lists - i.e. a later get_group() pass
>  	 * breaks the linking done for an earlier span.
>  	 */
> -	for_each_cpu(i, cpu_map) {
> -		if (i == cpu)
> -			continue;
> +	for_each_cpu_from(i, cpu_map) {
>  		/*
>  		 * We should 'and' all those masks with 'cpu_map' to exactly
>  		 * match the topology we're about to build, but that can only
> -- 
> 2.44.0


* Re: [PATCH 1/2] cpumask: Add for_each_cpu_from()
  2024-03-20 18:31   ` [PATCH 1/2] cpumask: Add for_each_cpu_from() Yury Norov
@ 2024-03-20 20:22     ` Yury Norov
  0 siblings, 0 replies; 4+ messages in thread
From: Yury Norov @ 2024-03-20 20:22 UTC (permalink / raw)
  To: Kyle Meyer
  Cc: andriy.shevchenko, linux, mingo, peterz, juri.lelli,
	vincent.guittot, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, russ.anderson, dimitri.sivanich,
	steve.wahl

On Wed, Mar 20, 2024 at 11:31:18AM -0700, Yury Norov wrote:
> On Tue, Mar 19, 2024 at 01:51:47PM -0500, Kyle Meyer wrote:
> > Add for_each_cpu_from() as a generic cpumask macro.
> > 
> > for_each_cpu_from() is the same as for_each_cpu(), except it starts at
> > @cpu instead of zero.
> > 
> > Signed-off-by: Kyle Meyer <kyle.meyer@hpe.com>
> 
> Acked-by: Kyle Meyer <kyle.meyer@hpe.com>

Sorry, please read: 

Acked-by: Yury Norov <yury.norov@gmail.com>


* Re: [PATCH 0/2] sched/topology: Optimize topology_span_sane()
       [not found] <20240319185148.985729-1-kyle.meyer@hpe.com>
       [not found] ` <20240319185148.985729-2-kyle.meyer@hpe.com>
       [not found] ` <20240319185148.985729-3-kyle.meyer@hpe.com>
@ 2024-04-09  8:31 ` Valentin Schneider
  2 siblings, 0 replies; 4+ messages in thread
From: Valentin Schneider @ 2024-04-09  8:31 UTC (permalink / raw)
  To: Kyle Meyer, yury.norov, andriy.shevchenko, linux, mingo, peterz,
	juri.lelli, vincent.guittot, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, linux-kernel
  Cc: russ.anderson, dimitri.sivanich, steve.wahl, Kyle Meyer

On 19/03/24 13:51, Kyle Meyer wrote:
> A soft lockup is being detected in build_sched_domains() on 32 socket
> Sapphire Rapids systems with 3840 processors.
>
> topology_span_sane(), called by build_sched_domains(), checks that each
> processor's non-NUMA scheduling domains are completely equal or
> completely disjoint. If a non-NUMA scheduling domain partially overlaps
> another, scheduling groups can break.
>
> This series adds for_each_cpu_from() as a generic cpumask macro to
> optimize topology_span_sane() by removing duplicate comparisons. The
> total number of comparisons is reduced from N * (N - 1) to
> N * (N - 1) / 2 (per non-NUMA scheduling domain level), decreasing the
> boot time by approximately 20 seconds and preventing the soft lockup on
> the mentioned systems.
>
> Kyle Meyer (2):
>   cpumask: Add for_each_cpu_from()
>   sched/topology: Optimize topology_span_sane()

I somehow never got 2/2, and it doesn't show up on lore.kernel.org
either. I can see it from Yury's reply and it looks OK to me, but you'll
have to resend it for maintainers to be able to pick it up.

>
>  include/linux/cpumask.h | 10 ++++++++++
>  kernel/sched/topology.c |  6 ++----
>  2 files changed, 12 insertions(+), 4 deletions(-)
>
> -- 
> 2.44.0


