From: K Prateek Nayak <kprateek.nayak@amd.com>
To: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
Dave Hansen <dave.hansen@linux.intel.com>,
Peter Zijlstra <peterz@infradead.org>,
Juri Lelli <juri.lelli@redhat.com>, <x86@kernel.org>,
<linux-kernel@vger.kernel.org>, "H. Peter Anvin" <hpa@zytor.com>,
Dietmar Eggemann <dietmar.eggemann@arm.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
Valentin Schneider <vschneid@redhat.com>,
"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
Ricardo Neri <ricardo.neri-calderon@linux.intel.com>,
Tim Chen <tim.c.chen@linux.intel.com>,
Mario Limonciello <mario.limonciello@amd.com>,
Meng Li <li.meng@amd.com>, Huang Rui <ray.huang@amd.com>,
"Gautham R. Shenoy" <gautham.shenoy@amd.com>
Subject: Re: [PATCH 6/8] sched/fair: Do not compute NUMA Balancing stats unnecessarily during lb
Date: Thu, 12 Dec 2024 17:13:55 +0530 [thread overview]
Message-ID: <b9315505-efcf-479f-b8ee-c660265c8e53@amd.com> (raw)
In-Reply-To: <CAKfTPtAd-0e4B6qh3e5VeK0N1Q+zsXkV5WdCunV6x9yzY7Y_Ow@mail.gmail.com>
Hello Vincent,
On 12/12/2024 4:35 PM, Vincent Guittot wrote:
> On Wed, 11 Dec 2024 at 19:58, K Prateek Nayak <kprateek.nayak@amd.com> wrote:
>>
>> Aggregate nr_numa_running and nr_preferred_running when load balancing
>> at NUMA domains only. While at it, also move the aggregation below the
>> idle_cpu() check since an idle CPU cannot have any preferred tasks.
>>
>> Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
>> ---
>> kernel/sched/fair.c | 15 +++++++++------
>> 1 file changed, 9 insertions(+), 6 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 2c4ebfc82917..ec2a79c8d0e7 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -10340,7 +10340,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
>> bool *sg_overloaded,
>> bool *sg_overutilized)
>> {
>> - int i, nr_running, local_group;
>> + int i, nr_running, local_group, sd_flags = env->sd->flags;
>>
>> memset(sgs, 0, sizeof(*sgs));
>>
>> @@ -10364,10 +10364,6 @@ static inline void update_sg_lb_stats(struct lb_env *env,
>> if (cpu_overutilized(i))
>> *sg_overutilized = 1;
>>
>> -#ifdef CONFIG_NUMA_BALANCING
>> - sgs->nr_numa_running += rq->nr_numa_running;
>> - sgs->nr_preferred_running += rq->nr_preferred_running;
>> -#endif
>> /*
>> * No need to call idle_cpu() if nr_running is not 0
>> */
>> @@ -10377,10 +10373,17 @@ static inline void update_sg_lb_stats(struct lb_env *env,
>> continue;
>> }
>>
>> +#ifdef CONFIG_NUMA_BALANCING
>> + /* Only fbq_classify_group() uses this to classify NUMA groups */
>
> and fbq_classify_rq() which is also used by non-NUMA groups.
Yup, but fbq_classify_rq() only looks at the rq's own
"nr_numa_running" and "nr_preferred_running" counters, not the
stats aggregated here.
> AFAICT
> It doesn't change anything at the end because group type is "all" for
> non numa groups but we need some explanations why It's ok to skip numa
> stats and default behavior will remain unchanged
I'll elaborate that comment with complete details:
/*
 * Only fbq_classify_group() uses these NUMA stats to derive the
 * fbq_type of a group. env->fbq_type is initialized to "all" -
 * the highest type - and is only updated from the busiest
 * group's stats when balancing at a NUMA domain.
 * sched_balance_find_src_rq() skips runqueues whose fbq_type is
 * higher than env->fbq_type, but since env->fbq_type keeps the
 * largest value at non-NUMA domains, no runqueue is ever
 * skipped there. Hence, skip this aggregation when balancing at
 * non-NUMA domains.
 */
--
Thanks and Regards,
Prateek
>
>> + if (sd_flags & SD_NUMA) {
>> + sgs->nr_numa_running += rq->nr_numa_running;
>> + sgs->nr_preferred_running += rq->nr_preferred_running;
>> + }
>> +#endif
>> if (local_group)
>> continue;
>>
>> - if (env->sd->flags & SD_ASYM_CPUCAPACITY) {
>> + if (sd_flags & SD_ASYM_CPUCAPACITY) {
>> /* Check for a misfit task on the cpu */
>> if (sgs->group_misfit_task_load < rq->misfit_task_load) {
>> sgs->group_misfit_task_load = rq->misfit_task_load;
>> --
>> 2.34.1
>>