public inbox for linux-kernel@vger.kernel.org
From: Shrikanth Hegde <sshegde@linux.ibm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>,
	mingo@kernel.org, vincent.guittot@linaro.org,
	linux-kernel@vger.kernel.org, juri.lelli@redhat.com,
	vschneid@redhat.com, tglx@linutronix.de,
	dietmar.eggemann@arm.com, frederic@kernel.org,
	longman@redhat.com
Subject: Re: [PATCH 1/2] sched/fair: consider hk_mask early in triggering ilb
Date: Fri, 20 Mar 2026 19:42:32 +0530	[thread overview]
Message-ID: <8b638b28-9fbf-40f9-8c5a-e6485d9aea2b@linux.ibm.com> (raw)
In-Reply-To: <20260320114312.GB3558198@noisy.programming.kicks-ass.net>



On 3/20/26 5:13 PM, Peter Zijlstra wrote:
> On Fri, Mar 20, 2026 at 02:49:30PM +0530, Shrikanth Hegde wrote:
>>
>>
>> On 3/20/26 9:07 AM, K Prateek Nayak wrote:
>>> Hello Shrikanth,
>>>
>>> On 3/19/2026 12:23 PM, Shrikanth Hegde wrote:
>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>> index b19aeaa51ebc..02cca2c7a98d 100644
>>>> --- a/kernel/sched/fair.c
>>>> +++ b/kernel/sched/fair.c
>>>> @@ -7392,6 +7392,7 @@ static inline unsigned int cfs_h_nr_delayed(struct rq *rq)
>>>>    static DEFINE_PER_CPU(cpumask_var_t, load_balance_mask);
>>>>    static DEFINE_PER_CPU(cpumask_var_t, select_rq_mask);
>>>>    static DEFINE_PER_CPU(cpumask_var_t, should_we_balance_tmpmask);
>>>> +static DEFINE_PER_CPU(cpumask_var_t, kick_ilb_tmpmask);
>>>
>>> nit. We can rename and reuse select_rq_mask. Wakeups happen with IRQs
>>> disabled and kick happens from the hrtimer handler so it should be safe
>>> to reuse that and save some space.
>>>
>>> Thoughts?
>>
>> Maybe, but it could be a confusing name. sched_tmpmask?
>>
>> We do similar stuff already with load_balance_mask and select_rq_mask.
>> So, I would prefer to keep them separate.
> 
> But then we keep growing this ad infinitum.
> 
> The more sensible option is to name them after the context and have
> get/put accessors that (for PROVE_LOCKING builds or so) verify the
> context and maybe even 'lock' them to make sure nobody is trying to use
> one for two things at the same time.
> 
> That should make it clearer what's what and improve reuse, no?
> 

We have these:

deadline.c:static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);
ext_idle.c:static DEFINE_PER_CPU(cpumask_var_t, local_idle_cpumask);
ext_idle.c:static DEFINE_PER_CPU(cpumask_var_t, local_llc_idle_cpumask);
ext_idle.c:static DEFINE_PER_CPU(cpumask_var_t, local_numa_idle_cpumask);
fair.c:static DEFINE_PER_CPU(cpumask_var_t, load_balance_mask);
fair.c:static DEFINE_PER_CPU(cpumask_var_t, select_rq_mask);
fair.c:static DEFINE_PER_CPU(cpumask_var_t, should_we_balance_tmpmask);
rt.c:static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask)

Here,
1. load_balance_mask and should_we_balance_tmpmask are used in the same context;
    all they care about is: I need two cpumasks to play with.

2. local_cpu_mask, local_cpu_mask_dl and select_rq_mask are used in independent contexts;
    all each cares about is: I need one cpumask to play with.

3. ext_idle.c: the local_* masks are used in the same context;
    all it cares about is: I need three cpumasks to play with.
    Technically it is doable with one (but that's a separate story).

1 and 2 are in interrupt-disabled sections; for 3, I am not completely sure.


I am wondering whether something like this would make sense:


DEFINE_PER_CPU(cpumask_var_t, sched_tmp_cpumask1);
DEFINE_PER_CPU(cpumask_var_t, sched_tmp_cpumask2);
DEFINE_PER_CPU(cpumask_var_t, sched_tmp_cpumask3);

Request a tmp cpumask by number?
i.e. case 2 would say sched_request_tmpmask(0),
     case 1 would say sched_request_tmpmask(0) and then sched_request_tmpmask(1),
     case 3 would say sched_request_tmpmask(0), sched_request_tmpmask(1) and sched_request_tmpmask(2).

Do this if interrupts are disabled;
if interrupts are enabled, then maybe do allocation/free instead?

That would give us the get/put routines for all cases.

Thread overview: 17+ messages
2026-03-19  6:53 [PATCH 0/2] sched/fair: Minor improvements while triggering idle load balance Shrikanth Hegde
2026-03-19  6:53 ` [PATCH 1/2] sched/fair: consider hk_mask early in triggering ilb Shrikanth Hegde
2026-03-19  8:15   ` Mukesh Kumar Chaurasiya
2026-03-19 13:13     ` Shrikanth Hegde
2026-03-19 22:58   ` Shubhang Kaushik
2026-03-20  2:47     ` Shrikanth Hegde
2026-03-20  3:37   ` K Prateek Nayak
2026-03-20  9:19     ` Shrikanth Hegde
2026-03-20 11:43       ` Peter Zijlstra
2026-03-20 14:12         ` Shrikanth Hegde [this message]
2026-03-20 14:28           ` Shrikanth Hegde
2026-03-19  6:53 ` [PATCH 2/2] sched/fair: get this cpu once in find_new_ilb Shrikanth Hegde
2026-03-19  8:18   ` Mukesh Kumar Chaurasiya
2026-03-19  9:20   ` Peter Zijlstra
2026-03-19 13:03     ` Shrikanth Hegde
2026-03-19 13:39       ` Peter Zijlstra
2026-03-20  3:40   ` K Prateek Nayak
