From: Patrick Bellasi <patrick.bellasi@arm.com>
To: Pavan Kondeti <pkondeti@codeaurora.org>
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	Viresh Kumar <viresh.kumar@linaro.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Paul Turner <pjt@google.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Morten Rasmussen <morten.rasmussen@arm.com>,
	Juri Lelli <juri.lelli@redhat.com>, Todd Kjos <tkjos@android.com>,
	Joel Fernandes <joelaf@google.com>,
	Steve Muckle <smuckle@google.com>
Subject: Re: [PATCH v3 2/3] sched/fair: use util_est in LB and WU paths
Date: Wed, 24 Jan 2018 19:31:38 +0000
Message-ID: <20180124193138.GB5739@e110439-lin>
In-Reply-To: <20180124113342.GD30677@codeaurora.org>

On 24-Jan 17:03, Pavan Kondeti wrote:
> Hi Patrick,

Hi Pavan,


> On Tue, Jan 23, 2018 at 06:08:46PM +0000, Patrick Bellasi wrote:
> >  static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
> >  {
> > -	unsigned long util, capacity;
> > +	long util, util_est;
> >  
> >  	/* Task has no contribution or is new */
> >  	if (cpu != task_cpu(p) || !p->se.avg.last_update_time)
> > -		return cpu_util(cpu);
> > +		return cpu_util_est(cpu);
> >  
> > -	capacity = capacity_orig_of(cpu);
> > -	util = max_t(long, cpu_rq(cpu)->cfs.avg.util_avg - task_util(p), 0);
> > +	/* Discount task's blocked util from CPU's util */
> > +	util = cpu_util(cpu) - task_util(p);
> > +	util = max(util, 0L);
> >  
> > -	return (util >= capacity) ? capacity : util;
> > +	if (!sched_feat(UTIL_EST))
> > +		return util;
> 
> At first, it is not clear to me why you are not clamping the utilization to
> the CPU's original capacity. It looks like that is no longer needed after
> commit f453ae2200b0 ("sched/fair: Consider RT/IRQ pressure in
> capacity_spare_wake()") was merged.

Mainly because the above code now uses only cpu_util(), which is already
clamped to capacity_orig_of().
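
For reference, this is roughly how cpu_util() looks in this tree (not part
of this patch, quoted here only to make the implicit clamping explicit):

	static unsigned long cpu_util(int cpu)
	{
		unsigned long util = cpu_rq(cpu)->cfs.avg.util_avg;
		unsigned long capacity = capacity_orig_of(cpu);

		/* Clamp the CPU's utilization to its original capacity */
		return (util >= capacity) ? capacity : util;
	}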

However, you made me notice that in the few lines which follow, where I do:

> > +       /*
> > +        * These are the main cases covered:
> > +        * - if *p is the only task sleeping on this CPU, then:
> > +        *      cpu_util (== task_util) > util_est (== 0)
> > +        *   and thus we return:
> > +        *      cpu_util_wake = (cpu_util - task_util) = 0
> > +        *
> > +        * - if other tasks are SLEEPING on the same CPU, which is just waking
> > +        *   up, then:
> > +        *      cpu_util >= task_util
> > +        *      cpu_util > util_est (== 0)
> > +        *   and thus we discount *p's blocked utilization to return:
> > +        *      cpu_util_wake = (cpu_util - task_util) >= 0
> > +        *
> > +        * - if other tasks are RUNNABLE on that CPU and
> > +        *      util_est > cpu_util
> > +        *   then we use util_est since it returns a more restrictive
> > +        *   estimation of the spare capacity on that CPU, by just considering
> > +        *   the expected utilization of tasks already runnable on that CPU.
> > +        */
> > +       util_est = cpu_rq(cpu)->cfs.util_est_runnable;
> > +       util = max(util, util_est);
> > +
> > +       return util;

I should instead clamp util before returning it! ;-)

> Maybe a separate patch to remove the clamping part?

No, I think we should keep cpu_util_wake() clamped so that the existing call
sites are not affected. I just need to remove the clamping where it is not
needed (done) and add it back where it is needed (will do in the next
iteration).
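
To give an idea, the tail of cpu_util_wake() in the next iteration could
look roughly like this (just a sketch reusing the names from the hunk
quoted above, not the final code):

	util_est = cpu_rq(cpu)->cfs.util_est_runnable;
	util = max(util, util_est);

	/* Keep cpu_util_wake() clamped to the CPU's original capacity */
	return min_t(unsigned long, util, capacity_orig_of(cpu));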

> Thanks,
> Pavan

Cheers Patrick

-- 
#include <best/regards.h>

Patrick Bellasi

