public inbox for linux-kernel@vger.kernel.org
From: Madadi Vineeth Reddy <vineethr@linux.ibm.com>
To: "Chen, Yu C" <yu.c.chen@intel.com>
Cc: mingo@kernel.org, gautham.shenoy@amd.com, kprateek.nayak@amd.com,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	dietmar.eggemann@arm.com, rostedt@goodmis.org,
	bsegall@google.com, mgorman@suse.de, vschneid@redhat.com,
	linux-kernel@vger.kernel.org, tim.c.chen@linux.intel.com,
	tglx@linutronix.de, Peter Zijlstra <peterz@infradead.org>,
	Madadi Vineeth Reddy <vineethr@linux.ibm.com>
Subject: Re: [RFC][PATCH] sched: Cache aware load-balancing
Date: Tue, 1 Apr 2025 01:47:26 +0530	[thread overview]
Message-ID: <0d7fa00e-587e-4ac8-90d0-115f30fdf0ac@linux.ibm.com> (raw)
In-Reply-To: <457a6070-b34e-4467-8251-f69c4015fccb@intel.com>

Hi Chen Yu,

On 27/03/25 16:44, Chen, Yu C wrote:
> Hi Madadi,
> 
> On 3/27/2025 10:43 AM, Madadi Vineeth Reddy wrote:
>> Hi Peter,
>>
>> On 25/03/25 17:39, Peter Zijlstra wrote:
>>> Hi all,
>>>
>>> One of the many things on the eternal todo list has been finishing the
>>> below hackery.
>>>
>>> It is an attempt at modelling cache affinity -- and while the patch
>>> really only targets LLC, it could very well be extended to also apply to
>>> clusters (L2). Specifically any case of multiple cache domains inside a
>>> node.
>>>
>>> Anyway, I wrote this about a year ago, and I mentioned this at the
>>> recent OSPM conf where Gautham and Prateek expressed interest in playing
>>> with this code.
>>>
>>> So here goes, very rough and largely unproven code ahead :-)
>>>
>>> It applies to current tip/master, but I know it will fail the __percpu
>>> validation that sits in -next, although that shouldn't be terribly hard
>>> to fix up.
>>>
>>> As is, it only computes a CPU inside the LLC that has the highest recent
>>> runtime; this CPU is then used in the wake-up path to steer wake-ups
>>> towards this LLC and in task_hot() to limit migrations away from it.
>>>
>>> More elaborate things could be done, notably there is an XXX in there
>>> somewhere about finding the best LLC inside a NODE (interaction with
>>> NUMA_BALANCING).
>>
>> Tested the patch on a 12-core, 96-thread Power10 system using a real-life
>> workload, DayTrader.
> 
> Do all the cores share the same LLC within 1 node? If this is the case,
> the regression might be due to over-migration/task stacking within 1
> LLC/node. IMO, this patch should be modified so that cache-aware load
> balancing/wakeup is not triggered if there is only 1 LLC within the node.

Are you asking whether LLC is shared at the node level?

In Power10, the LLC is at the small core level, covering 4 threads.

In my test setup, there were 4 nodes, each with 24 CPUs, meaning there
were 6 LLCs per node.

Went through the patch in more detail and will check if task stacking
is an issue using micro-benchmarks.

Thanks for your feedback.

Thanks,
Madadi Vineeth Reddy

> 
> thanks,
> Chenyu
> 
>>
>> Here is a summary of the runs:
>>
>> Users | Instances | Throughput vs Base | Avg Resp. Time vs Base
>> ------|-----------|--------------------|-----------------------
>> 30    | 1         | -25.3%             | +50%
>> 60    | 1         | -25.1%             | +50%
>> 30    | 3         | -22.8%             | +33%
>>
>> As of now, the patch negatively impacts performance both in terms of
>> throughput and latency.
>>
>> I will conduct more extensive testing with both microbenchmarks and
>> real-life workloads.
>>
>> Thanks,
>> Madadi Vineeth Reddy
>>



Thread overview: 23+ messages
2025-03-25 12:09 [RFC][PATCH] sched: Cache aware load-balancing Peter Zijlstra
2025-03-25 15:19 ` Chen, Yu C
2025-03-25 18:44   ` Peter Zijlstra
2025-03-26  6:18     ` K Prateek Nayak
2025-03-26  9:15       ` Chen, Yu C
2025-03-26  9:42         ` Peter Zijlstra
2025-03-27  8:10           ` Chen, Yu C
2025-03-26  9:38   ` Peter Zijlstra
2025-03-26 10:25     ` Peter Zijlstra
2025-03-26 10:42       ` Peter Zijlstra
2025-03-26 10:46       ` Peter Zijlstra
     [not found]       ` <20250327112059.3661-1-hdanton@sina.com>
2025-03-31  6:25         ` Chen, Yu C
2025-03-27  2:48     ` Chen, Yu C
2025-03-27  2:43 ` Madadi Vineeth Reddy
2025-03-27 11:14   ` Chen, Yu C
2025-03-31 20:17     ` Madadi Vineeth Reddy [this message]
2025-03-28 13:57 ` Abel Wu
2025-03-29 15:06   ` Chen, Yu C
2025-03-30  8:46     ` Abel Wu
2025-03-31  5:25       ` Chen, Yu C
2025-03-31  8:04         ` Abel Wu
2025-03-31 21:06 ` Tim Chen
2025-04-02  1:52 ` Libo Chen
