public inbox for linux-kernel@vger.kernel.org
From: "Chen, Yu C" <yu.c.chen@intel.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	<linux-kernel@vger.kernel.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Valentin Schneider <vschneid@redhat.com>,
	David Vernet <void@manifault.com>,
	"Gautham R. Shenoy" <gautham.shenoy@amd.com>,
	"Swapnil Sapkal" <swapnil.sapkal@amd.com>,
	Shrikanth Hegde <sshegde@linux.ibm.com>,
	"K Prateek Nayak" <kprateek.nayak@amd.com>, <yu.c.chen@intel.com>,
	<yu.chen.surf@foxmail.com>
Subject: Re: [RFC PATCH 0/8] sched/fair: Propagate load balancing stats up the sched domain hierarchy
Date: Tue, 18 Mar 2025 02:23:10 +0800	[thread overview]
Message-ID: <0d5200bb-5070-4225-8166-20b8b63af33d@intel.com> (raw)
In-Reply-To: <20250317172536.GF6888@noisy.programming.kicks-ass.net>

On 3/18/2025 1:25 AM, Peter Zijlstra wrote:
> On Thu, Mar 13, 2025 at 09:37:38AM +0000, K Prateek Nayak wrote:
>> tl;dr
>>
>> This prototype is currently limited in the sense that it can only reuse
>> statistics for busy load balancing. Reusing stats for newidle load
>> balancing specifically ran into issues elaborated below.
> 
> Right, it makes sense for busy load balance, newidle I think:
> 
>> David had proposed SHARED_RUNQ [4] to improve on the shortcomings of
>> newidle balance for Meta's production workloads.
> 
> we need to look at this again. Something around the EEVDF merge made the
> thing unhappy -- if we figure out what and fix it, I think this makes

Could you share some links describing what the issue is? Does the newly-idle
balance fail to pull tasks after switching to EEVDF? (I don't
see the connection between EEVDF and newly-idle balance off the
top of my head.)

> more sense than trying to optimize the current scheme for newidle.
> 
> newidle really is about getting *any* work fast, which is a totally
> different game than the regular busy balancing.
> 

The newly-idle balance iterates over every CPU in the domain to find the
busiest one. Would the following work: find a relatively busy CPU and stop
the search early, say, one whose rq->nr_running >= 2, and also take the
candidate task's average duration into account?

thanks,
Chenyu


Thread overview: 24+ messages
2025-03-13  9:37 [RFC PATCH 0/8] sched/fair: Propagate load balancing stats up the sched domain hierarchy K Prateek Nayak
2025-03-13  9:37 ` [RFC PATCH 1/8] sched/topology: Assign sd_share for all non NUMA sched domains K Prateek Nayak
2025-03-13  9:37 ` [RFC PATCH 2/8] sched/topology: Introduce sg->shared K Prateek Nayak
2025-03-13  9:37 ` [RFC PATCH 3/8] sched/fair: Move "struct sg_lb_stats" and its dependencies to sched.h K Prateek Nayak
2025-03-13  9:37 ` [RFC PATCH 4/8] sched/fair: Move sg_{overloaded,overutilized} calculation to sg_lb_stats K Prateek Nayak
2025-03-13  9:37 ` [RFC PATCH 5/8] sched/topology: Define sg_lb_stats_prop and embed it inside sched_domain_shared K Prateek Nayak
2025-03-13  9:37 ` [RFC PATCH 6/8] sched/fair: Increase probability of lb stats being reused K Prateek Nayak
2025-03-17 18:07   ` Chen, Yu C
2025-03-19  6:51     ` K Prateek Nayak
2025-03-13  9:37 ` [RFC PATCH 7/8] sched/fair: Retrieve cached group stats from sg_lb_stats_prop K Prateek Nayak
2025-03-17 18:04   ` Chen, Yu C
2025-03-19  6:42     ` K Prateek Nayak
2025-03-13  9:37 ` [RFC PATCH 8/8] sched/fair: Update stats for sched_domain using the sched_group stats K Prateek Nayak
2025-03-16 10:29 ` [RFC PATCH 09/08] [ANNOTATE] sched/fair: Stats versioning and invalidation K Prateek Nayak
2025-03-16 10:29 ` [RFC PATCH 10/08] sched/fair: Compute nr_{numa,preferred}_running for non-NUMA domains K Prateek Nayak
2025-03-16 10:29 ` [RFC PATCH 11/08] sched/fair: Move from "last_update" to stats versioning K Prateek Nayak
2025-03-16 10:29 ` [RFC PATCH 12/08] sched/fair: Record the cpu that updated the stats last K Prateek Nayak
2025-03-16 10:29 ` [RFC PATCH 13/08] sched/fair: Invalidate stats once the load balancing instance is done K Prateek Nayak
2025-03-16 10:29 ` [RFC PATCH 14/08] [DEBUG] sched/fair: Add more lb_stats around lb_time and stats reuse K Prateek Nayak
2025-03-16 10:29 ` [RFC PATCH 15/08] [DEBUG] tools/lib/perf: Extend schedstats v17 headers to include the new debug fields K Prateek Nayak
2025-03-17 17:25 ` [RFC PATCH 0/8] sched/fair: Propagate load balancing stats up the sched domain hierarchy Peter Zijlstra
2025-03-17 18:23   ` Chen, Yu C [this message]
2025-03-21 10:04 ` Libo Chen
2025-03-24  3:58   ` K Prateek Nayak
