From: K Prateek Nayak <kprateek.nayak@amd.com>
To: Libo Chen <libo.chen@oracle.com>, Ingo Molnar <mingo@redhat.com>,
"Peter Zijlstra" <peterz@infradead.org>,
Juri Lelli <juri.lelli@redhat.com>,
"Vincent Guittot" <vincent.guittot@linaro.org>,
Chen Yu <yu.c.chen@intel.com>, <linux-kernel@vger.kernel.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>,
Steven Rostedt <rostedt@goodmis.org>,
Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
Valentin Schneider <vschneid@redhat.com>,
David Vernet <void@manifault.com>,
"Gautham R. Shenoy" <gautham.shenoy@amd.com>,
"Swapnil Sapkal" <swapnil.sapkal@amd.com>,
Shrikanth Hegde <sshegde@linux.ibm.com>
Subject: Re: [RFC PATCH 0/8] sched/fair: Propagate load balancing stats up the sched domain hierarchy
Date: Mon, 24 Mar 2025 09:28:45 +0530 [thread overview]
Message-ID: <f988e37f-cb28-4ac3-9e93-1e5fa6750e59@amd.com> (raw)
In-Reply-To: <9af4bb66-83c4-4257-bfc3-dbcb2185a7e6@oracle.com>
Hello Libo,
Thank you for taking a look at the series and sorry for the late
response.
On 3/21/2025 3:34 PM, Libo Chen wrote:
>
>
> On 3/13/25 02:37, K Prateek Nayak wrote:
>
>> Benchmark results
>> =================
>>
>
> Hi Prateek,
>
> Definitely like the idea, esp. if we can pull this off on newidle lb
> which tends to be more problematic on systems with a large number
> of cores. But the data below on periodic lb isn't I guess as good as
> I expect. So I am wondering if the cost of update_[sd|sg]_lb_stats()
> actually went down as the result of the caching?
I have some numbers for the versioning idea in [1], which I got working
just before OSPM. The benchmark results don't move much, but the total
time spent in newidle balance reduces by ~5% overall.
There is a ~30% overhead from aggregating and propagating the stats
upwards at the SMT domain, which offsets some of the benefits of
propagation at the higher domains, but I'm working to see if this can
be reduced and only done when required.
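To make the versioning idea concrete, here is a minimal, hypothetical
sketch (the names and layout do not match the actual patches): each
shared stats block carries a version; a CPU reuses the cached group
stats only if the version it last observed is still current, and any
event that makes the stats stale simply bumps the version.

```c
#include <stdbool.h>

/*
 * Hypothetical simplification of cached load-balancing stats with
 * versioning. Real kernel code would need proper locking or lockless
 * access rules; this sketch only illustrates the validity check.
 */
struct sg_stats {
	unsigned long group_load;
	unsigned int nr_running;
};

struct sg_stats_cache {
	unsigned long version;	/* bumped whenever the stats go stale */
	struct sg_stats stats;
};

/* Invalidate cached stats, e.g. once a balancing instance is done. */
static void cache_invalidate(struct sg_stats_cache *c)
{
	c->version++;
}

/*
 * Reuse cached stats only if the version the caller last observed is
 * still current; returns false when a recompute is needed.
 */
static bool cache_get(const struct sg_stats_cache *c, unsigned long seen,
		      struct sg_stats *out)
{
	if (c->version != seen)
		return false;
	*out = c->stats;
	return true;
}

/* Publish freshly computed stats; returns the version token to hold. */
static unsigned long cache_put(struct sg_stats_cache *c, struct sg_stats s)
{
	c->stats = s;
	return c->version;
}
```

The aggregation cost mentioned above comes from the writer side: every
republish at a lower domain has to roll its stats up into the parent's
cache, which is pure overhead whenever nobody ends up reusing them.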
Some ideas were discussed at OSPM to reduce the overheads further and
to share the burden of busy load balancing across all the CPUs in the
domain; I'll tackle that next.
If you have any benchmark where this shows up prominently, please do let
me know and I can try adding it to the bunch.
[1] https://lore.kernel.org/lkml/20250316102916.10614-1-kprateek.nayak@amd.com/
--
Thanks and Regards,
Prateek
>
> Thanks,
> Libo
2025-03-13 9:37 [RFC PATCH 0/8] sched/fair: Propagate load balancing stats up the sched domain hierarchy K Prateek Nayak
2025-03-13 9:37 ` [RFC PATCH 1/8] sched/topology: Assign sd_share for all non NUMA sched domains K Prateek Nayak
2025-03-13 9:37 ` [RFC PATCH 2/8] sched/topology: Introduce sg->shared K Prateek Nayak
2025-03-13 9:37 ` [RFC PATCH 3/8] sched/fair: Move "struct sg_lb_stats" and its dependencies to sched.h K Prateek Nayak
2025-03-13 9:37 ` [RFC PATCH 4/8] sched/fair: Move sg_{overloaded,overutilized} calculation to sg_lb_stats K Prateek Nayak
2025-03-13 9:37 ` [RFC PATCH 5/8] sched/topology: Define sg_lb_stats_prop and embed it inside sched_domain_shared K Prateek Nayak
2025-03-13 9:37 ` [RFC PATCH 6/8] sched/fair: Increase probability of lb stats being reused K Prateek Nayak
2025-03-17 18:07 ` Chen, Yu C
2025-03-19 6:51 ` K Prateek Nayak
2025-03-13 9:37 ` [RFC PATCH 7/8] sched/fair: Retrieve cached group stats from sg_lb_stats_prop K Prateek Nayak
2025-03-17 18:04 ` Chen, Yu C
2025-03-19 6:42 ` K Prateek Nayak
2025-03-13 9:37 ` [RFC PATCH 8/8] sched/fair: Update stats for sched_domain using the sched_group stats K Prateek Nayak
2025-03-16 10:29 ` [RFC PATCH 09/08] [ANNOTATE] sched/fair: Stats versioning and invalidation K Prateek Nayak
2025-03-16 10:29 ` [RFC PATCH 10/08] sched/fair: Compute nr_{numa,preferred}_running for non-NUMA domains K Prateek Nayak
2025-03-16 10:29 ` [RFC PATCH 11/08] sched/fair: Move from "last_update" to stats versioning K Prateek Nayak
2025-03-16 10:29 ` [RFC PATCH 12/08] sched/fair: Record the cpu that updated the stats last K Prateek Nayak
2025-03-16 10:29 ` [RFC PATCH 13/08] sched/fair: Invalidate stats once the load balancing instance is done K Prateek Nayak
2025-03-16 10:29 ` [RFC PATCH 14/08] [DEBUG] sched/fair: Add more lb_stats around lb_time and stats reuse K Prateek Nayak
2025-03-16 10:29 ` [RFC PATCH 15/08] [DEBUG] tools/lib/perf: Extend schedstats v17 headers to include the new debug fields K Prateek Nayak
2025-03-17 17:25 ` [RFC PATCH 0/8] sched/fair: Propagate load balancing stats up the sched domain hierarchy Peter Zijlstra
2025-03-17 18:23 ` Chen, Yu C
2025-03-21 10:04 ` Libo Chen
2025-03-24 3:58 ` K Prateek Nayak [this message]