public inbox for linux-kernel@vger.kernel.org
From: Tim Chen <tim.c.chen@linux.intel.com>
To: "Chen, Yu C" <yu.c.chen@intel.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>
Cc: Juri Lelli <juri.lelli@redhat.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Valentin Schneider <vschneid@redhat.com>,
	Tim Chen <tim.c.chen@intel.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Libo Chen <libo.chen@oracle.com>,
	Abel Wu <wuyun.abel@bytedance.com>,
	Len Brown <len.brown@intel.com>,
	linux-kernel@vger.kernel.org,
	K Prateek Nayak <kprateek.nayak@amd.com>,
	"Gautham R . Shenoy" <gautham.shenoy@amd.com>,
	Zhao Liu <zhao1.liu@intel.com>,
	Vinicius Costa Gomes <vinicius.gomes@intel.com>,
	Arjan Van De Ven <arjan.van.de.ven@intel.com>
Subject: Re: [PATCH v3 1/2] sched: Create architecture specific sched domain distances
Date: Mon, 15 Sep 2025 09:49:39 -0700	[thread overview]
Message-ID: <131ea54e7eb41d9d63d7aa4304aadf7719990892.camel@linux.intel.com> (raw)
In-Reply-To: <857e86a9-9007-4942-b005-1574c919ad6b@intel.com>

On Fri, 2025-09-12 at 13:24 +0800, Chen, Yu C wrote:
> On 9/12/2025 2:30 AM, Tim Chen wrote:
> > Allow architecture specific sched domain NUMA distances that can be
> > modified from NUMA node distances for the purpose of building NUMA
> > sched domains.
> > 
> > The actual NUMA distances are kept separately.  This allows for NUMA
> > domain levels modification when building sched domains for specific
> > architectures.
> > 
> > Consolidate the recording of unique NUMA distances in an array to
> > sched_record_numa_dist() so the function can be reused to record NUMA
> > distances when the NUMA distance metric is changed.
> > 
> > No functional change if no arch specific NUMA distances
> > are defined.
> > 
> 
> [snip]
> 
> > +
> > +void sched_init_numa(int offline_node)
> > +{
> > +	struct sched_domain_topology_level *tl;
> > +	int nr_levels, nr_node_levels;
> > +	int i, j;
> > +	int *distances, *domain_distances;
> > +	struct cpumask ***masks;
> > +
> > +	if (sched_record_numa_dist(offline_node, numa_node_dist, &distances,
> > +				   &nr_node_levels))
> > +		return;
> > +
> > +	WRITE_ONCE(sched_avg_remote_numa_distance,
> > +		   avg_remote_numa_distance(offline_node));
> > +
> > +	if (sched_record_numa_dist(offline_node,
> > +				   arch_sched_node_distance, &domain_distances,
> > +				   &nr_levels)) {
> > +		kfree(distances);
> > +		return;
> > +	}
> > +	rcu_assign_pointer(sched_numa_node_distance, distances);
> > +	WRITE_ONCE(sched_max_numa_distance, distances[nr_node_levels - 1]);
> 
> [snip]
> 
> > @@ -2022,7 +2097,6 @@ void sched_init_numa(int offline_node)
> >   	sched_domain_topology = tl;
> >   
> >   	sched_domains_numa_levels = nr_levels;
> > -	WRITE_ONCE(sched_max_numa_distance, sched_domains_numa_distance[nr_levels - 1]);
> >   
> 
> Before this patch, sched_max_numa_distance was assigned a valid
> value at the end of sched_init_numa(), only after sched_domains_numa_masks
> and sched_domain_topology_level had been successfully created or
> appended (i.e., after the kzalloc() calls succeeded).
> 
> Now sched_max_numa_distance is assigned earlier, without considering
> the status of the NUMA sched domains. I think this is intended:
> sched domains are only for generic load balancing, while
> sched_max_numa_distance is for NUMA load balancing; in theory they
> use different metrics in their strategies. Thus, this change should
> not cause any issues.

Yes, now sched_max_numa_distance is used in conjunction with
sched_numa_node_distance.  So putting them together is okay.

> 
>  From my understanding,
> 
> Reviewed-by: Chen Yu <yu.c.chen@intel.com>

Thanks.

Tim

> 
> thanks,
> Chenyu

Thread overview: 18+ messages
2025-09-11 18:30 [PATCH v3 0/2] Fix NUMA sched domain build errors for GNR and CWF Tim Chen
2025-09-11 18:30 ` [PATCH v3 1/2] sched: Create architecture specific sched domain distances Tim Chen
2025-09-12  3:23   ` K Prateek Nayak
2025-09-15 16:44     ` Tim Chen
2025-09-17  6:45       ` K Prateek Nayak
2025-09-12  5:24   ` Chen, Yu C
2025-09-15 16:49     ` Tim Chen [this message]
2025-09-15 17:16     ` Tim Chen
2025-09-15 12:37   ` Peter Zijlstra
2025-09-15 17:13     ` Tim Chen
2025-09-15 20:04       ` Tim Chen
2025-09-11 18:30 ` [PATCH v3 2/2] sched: Fix sched domain build error for GNR, CWF in SNC-3 mode Tim Chen
2025-09-12  5:08   ` K Prateek Nayak
2025-09-15 17:15     ` Tim Chen
2025-09-12  5:39   ` Chen, Yu C
2025-09-12  9:23     ` K Prateek Nayak
2025-09-12 11:59       ` Chen, Yu C
2025-09-15 12:46   ` Peter Zijlstra
