linux-kernel.vger.kernel.org archive mirror
From: Rohit Jain <rohit.k.jain@oracle.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: mingo@redhat.com, linux-kernel@vger.kernel.org,
	steven.sistare@oracle.com, dhaval.giani@oracle.com,
	joelaf@google.com, dietmar.eggemann@arm.com,
	vincent.guittot@linaro.org, morten.rasmussen@arm.com,
	eas-dev@lists.linaro.org
Subject: Re: [RESEND PATCH] sched/fair: consider RT/IRQ pressure in select_idle_sibling
Date: Fri, 9 Feb 2018 14:17:19 -0800	[thread overview]
Message-ID: <fc1b34ee-2c07-6c98-df63-763522b4d4d0@oracle.com> (raw)
In-Reply-To: <20180209125358.GO25201@hirez.programming.kicks-ass.net>



On 02/09/2018 04:53 AM, Peter Zijlstra wrote:
> On Mon, Jan 29, 2018 at 03:27:09PM -0800, Rohit Jain wrote:
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 26a71eb..ce5ccf8 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -5625,6 +5625,11 @@ static unsigned long capacity_orig_of(int cpu)
>>   	return cpu_rq(cpu)->cpu_capacity_orig;
>>   }
>>   
>> +static inline bool full_capacity(int cpu)
>> +{
>> +	return capacity_of(cpu) >= (capacity_orig_of(cpu)*3)/4;
>> +}
> I don't like that name; >.75 != 1.
>
> Maybe invert things and do something like:
>
> static inline bool reduced_capacity(int cpu)
> {
> 	return capacity_of(cpu) < (3*capacity_orig_of(cpu))/4;
> }

OK, I will change the name and invert the logic.

>> @@ -6110,11 +6116,13 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
>>   	for_each_cpu(cpu, cpu_smt_mask(target)) {
>>   		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
>>   			continue;
>> +		if (idle_cpu(cpu) && (capacity_of(cpu) > max_cap)) {
>> +			max_cap = capacity_of(cpu);
>> +			rcpu = cpu;
>> +		}
> 		if (idle_cpu(cpu)) {
> 			if (!reduced_capacity(cpu))
> 				return cpu;
>
> 			if (capacity_of(cpu) > max_cap) {
> 				max_cap = capacity_of(cpu);
> 				rcpu = cpu;
> 			}
> 		}
>
> Would be more consistent, I think.

OK

>
>>   	}
>>   
>> -	return -1;
>> +	return rcpu;
>>   }
>
>
>> @@ -6143,6 +6151,8 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>>   	u64 time, cost;
>>   	s64 delta;
>>   	int cpu, nr = INT_MAX;
>> +	int best_cpu = -1;
>> +	unsigned int best_cap = 0;
> Randomly different names for the same thing as in select_idle_smt().
> Thinking up two different names for the same thing is more work; be more
> lazy.

OK, will be more consistent in v1

>
>>   	this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
>>   	if (!this_sd)
>> @@ -6173,8 +6183,15 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>>   			return -1;
>>   		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
>>   			continue;
>> +		if (idle_cpu(cpu)) {
>> +			if (full_capacity(cpu)) {
>> +				best_cpu = cpu;
>> +				break;
>> +			} else if (capacity_of(cpu) > best_cap) {
>> +				best_cap = capacity_of(cpu);
>> +				best_cpu = cpu;
>> +			}
>> +		}
> No need for the else. And you'll note you're once again inconsistent
> with your previous self.
>
> But here I worry about big.little a wee bit. I think we're allowed big
> and little cores on the same L3 these days, and you can't directly
> compare capacity between them.
>
> Morten / Dietmar, any comments?
>
>> @@ -6193,13 +6210,14 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>>   	struct sched_domain *sd;
>>   	int i;
>>   
>> -	if (idle_cpu(target))
>> +	if (idle_cpu(target) && full_capacity(target))
>>   		return target;
>>   
>>   	/*
>>   	 * If the previous cpu is cache affine and idle, don't be stupid.
>>   	 */
>> -	if (prev != target && cpus_share_cache(prev, target) && idle_cpu(prev))
>> +	if (prev != target && cpus_share_cache(prev, target) && idle_cpu(prev) &&
>> +	    full_capacity(prev))
>>   		return prev;
> split before idle_cpu() for a better balance.


OK

Thanks,
Rohit


Thread overview: 18+ messages
2018-01-29 23:27 [RESEND PATCH] sched/fair: consider RT/IRQ pressure in select_idle_sibling Rohit Jain
2018-01-30  3:39 ` Joel Fernandes
2018-01-30 19:47   ` Rohit Jain
2018-01-31  1:57     ` Joel Fernandes
2018-01-31 17:50       ` Rohit Jain
2018-02-06  6:50         ` Joel Fernandes
2018-02-06  6:51           ` Joel Fernandes
2018-02-06 17:41           ` Rohit Jain
2018-02-09 12:37           ` Peter Zijlstra
2018-02-06  6:42     ` Joel Fernandes
2018-02-06 17:36       ` Rohit Jain
2018-02-09 12:35   ` Peter Zijlstra
2018-02-09 12:53 ` Peter Zijlstra
2018-02-09 15:46   ` Dietmar Eggemann
2018-02-09 22:05     ` Rohit Jain
2018-02-14  9:11       ` Dietmar Eggemann
2018-02-09 22:17   ` Rohit Jain [this message]
2018-03-10 20:41   ` Rohit Jain
