Linux Power Management development
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>,
	Rafael Wysocki <rjw@rjwysocki.net>,
	linaro-kernel@lists.linaro.org, linux-kernel@vger.kernel.org,
	Vincent Guittot <vincent.guittot@linaro.org>,
	linux-pm@vger.kernel.org, Juri Lelli <Juri.Lelli@arm.com>,
	Dietmar.Eggemann@arm.com, Morten.Rasmussen@arm.com,
	patrick.bellasi@arm.com
Subject: Re: [RFC] sched: fair: Don't update CPU frequency too frequently
Date: Wed, 7 Jun 2017 17:36:55 +0530	[thread overview]
Message-ID: <20170607120655.GB11126@vireshk-i7> (raw)
In-Reply-To: <20170601122224.c324h4t7y3i4wr6e@hirez.programming.kicks-ass.net>

+ Patrick,

On 01-06-17, 14:22, Peter Zijlstra wrote:
> On Thu, Jun 01, 2017 at 05:04:27PM +0530, Viresh Kumar wrote:
> > This patch relocates the call to utilization hook from
> > update_cfs_rq_load_avg() to task_tick_fair().
> 
> That's not right. Consider hardware where 'setting' the DVFS is a
> 'cheap' MSR write, doing that once every 10ms (HZ=100) is absurd.

Yeah, that may be too much for such platforms. Actually we (/me & Vincent)
were worried about the current location of the utilization update hooks and
believed that they are getting called way too often. But yeah, this patch
optimized it way too much.

One of the goals of this patch was to avoid doing small OPP updates from
update_load_avg() which can potentially block significant utilization changes
(and hence big OPP changes) while a task is attached or detached, etc.
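To make the problem concrete, here is a toy model of it (illustrative only,
none of these names exist in the kernel): any accepted update, however small,
restarts the rate-limit window, so a big utilization change that arrives just
after a tiny one gets dropped for a full window.

```c
#include <stdbool.h>

/*
 * Toy model, not schedutil code: a single accepted frequency update
 * restarts the rate-limit window, so a large utilization change that
 * arrives right after a small OPP tweak is rejected until the window
 * expires.
 */

struct rate_limiter {
	unsigned long long last_update_ns;
	unsigned long long rate_limit_ns;
};

static bool freq_update_allowed(struct rate_limiter *rl,
				unsigned long long now_ns)
{
	if (now_ns - rl->last_update_ns < rl->rate_limit_ns)
		return false;		/* window still open, drop request */

	rl->last_update_ns = now_ns;
	return true;
}
```

With a 500 us window, a small tweak accepted at t=500 us causes a big change
arriving 10 us later to be dropped, which is the blocking effect described
above.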

> We spoke about this problem in Pisa, the proposed solution was having
> each driver provide a cost metric and the generic code doing a max
> filter over the window constructed from that cost metric.
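Just to check my understanding of that proposal, here is a rough sketch (all
names and the "10x cost" window policy are made up for illustration, this is
not existing kernel API): the driver reports the cost of one DVFS transition,
the generic code derives a window from that cost, and frequency requests
arriving inside the window are max-filtered instead of being dropped.

```c
#include <stdbool.h>

/*
 * Hypothetical sketch of a cost-metric-based max filter; none of
 * these names exist in the kernel.
 */

struct opp_filter {
	unsigned long long window_ns;		/* derived from driver's cost */
	unsigned long long window_start_ns;
	unsigned int pending_max_khz;		/* max request seen so far */
};

/* assumed policy: window = 10x the cost of one DVFS transition */
static unsigned long long window_from_cost(unsigned long long cost_ns)
{
	return 10 * cost_ns;
}

/*
 * Feed a frequency request in; returns true (with the max-filtered
 * frequency in @freq_khz_out) once the window has elapsed. Requests
 * inside the window are remembered via the running max, not dropped.
 */
static bool opp_filter_update(struct opp_filter *f,
			      unsigned long long now_ns,
			      unsigned int freq_khz,
			      unsigned int *freq_khz_out)
{
	if (freq_khz > f->pending_max_khz)
		f->pending_max_khz = freq_khz;

	if (now_ns - f->window_start_ns < f->window_ns)
		return false;		/* defer, but remember the max */

	*freq_khz_out = f->pending_max_khz;
	f->window_start_ns = now_ns;
	f->pending_max_khz = 0;
	return true;
}
```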

So we want to compensate for the lost opportunities (due to rate_limit_us
window) by changing the OPP based on what has happened in the previous
rate_limit_us window. I am not sure how will that help.

Case 1: A periodic RT task runs for a small time in the rate_limit_us window and
        the timing is such that we (almost) never go to the max OPP because of
        the rate_limit_us window.

        Wouldn't a better solution for such a case be what Patrick [1] proposed
        earlier (i.e. ignore rate_limit_us for RT/DL tasks), as we would then
        run at a high OPP when we really need it the most?
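        The idea from Patrick's series, as I read it, could be sketched roughly
        like this (illustrative helper, not the actual schedutil code):

```c
#include <stdbool.h>

/*
 * Illustrative sketch, not schedutil code: frequency-update requests
 * triggered by RT/DL simply bypass the rate-limit check, so the CPU
 * jumps to the high OPP the moment an RT/DL task needs it.
 */
static bool should_update_freq(bool rt_dl_request,
			       unsigned long long last_update_ns,
			       unsigned long long now_ns,
			       unsigned long long rate_limit_ns)
{
	if (rt_dl_request)
		return true;	/* RT/DL: ignore the rate limit entirely */

	return now_ns - last_update_ns >= rate_limit_ns;
}
```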


Case 2: A high utilization periodic CFS task runs for a short duration and
        keeps migrating to other CPUs. We miss the opportunity to update the
        OPP based on this task's utilization because of the rate_limit_us
        window, and by the time we update the OPP again, this task has already
        migrated and so the utilization is low again.

        If the task has already migrated, why should we increase the OPP on
        the assumption that this task will come back to this CPU? There is a
        good chance that the selected (higher) OPP will not be utilized by the
        current load on the CPU.

        Also, if this CFS task runs once every 2 (or more) ticks on the same
        CPU, then we are back to the same problem again.

        1         2         3         4
        |---------|---------|---------|---------|

           T                   T

        Events 1, 2, 3 and 4 represent the points at which we try to update
        the OPP and are placed rate_limit_us apart. The task T happens to run
        between 1-2 and 3-4. We will not change the frequency until event 2 in
        this case, as the rate_limit_us window isn't over yet. We go to a
        higher OPP at 2 (which is really wasted on the current load) because T
        happened to run in the previous window. At 3 we come back to the OPP
        proportional to the current load. And the next time T runs, we are
        still stuck at the low OPP. So instead of fixing the problem, we made
        it worse by wasting power unnecessarily.
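        To put numbers on this timeline (purely illustrative, nothing here is
        kernel code): with a "max over the previous window" policy, the high
        OPP is applied exactly when T has stopped running, and the low OPP is
        in effect exactly when T runs.

```c
/*
 * Toy model of the timeline above; all numbers are made up for
 * illustration. util_seen[i] is the max utilization (%) observed in
 * the window ending at event i+1; util_after[i] is the load actually
 * present while the OPP chosen at that event is in effect. T runs
 * between events 1-2 and 3-4.
 */

#define NR_EVENTS 4

static const unsigned int util_seen[NR_EVENTS]  = { 10, 80, 10, 80 };
static const unsigned int util_after[NR_EVENTS] = { 80, 10, 80, 10 };

/* pick the OPP from the max utilization of the previous window */
static unsigned int pick_opp_khz(unsigned int prev_window_max_util)
{
	return prev_window_max_util >= 50 ? 2000000 : 500000;
}

/* events where we run the high OPP while the load is actually low */
static int count_wasted_events(void)
{
	int wasted = 0;

	for (int i = 0; i < NR_EVENTS; i++)
		if (pick_opp_khz(util_seen[i]) == 2000000 && util_after[i] < 50)
			wasted++;
	return wasted;
}

/* events where T runs but we are stuck at the low OPP */
static int count_missed_events(void)
{
	int missed = 0;

	for (int i = 0; i < NR_EVENTS; i++)
		if (pick_opp_khz(util_seen[i]) == 500000 && util_after[i] >= 50)
			missed++;
	return missed;
}
```

        In this toy model every OPP decision is wrong in one direction or the
        other: two events waste power at the high OPP with no load, and T
        itself always runs at the low OPP.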

Is there any case I am missing that you are concerned about?

-- 
viresh

[1] https://marc.info/?l=linux-kernel&m=148846976032099&w=2


Thread overview: 5+ messages
2017-06-01 11:34 [RFC] sched: fair: Don't update CPU frequency too frequently Viresh Kumar
2017-06-01 12:22 ` Peter Zijlstra
2017-06-07 12:06   ` Viresh Kumar [this message]
2017-06-07 15:43     ` Morten Rasmussen
2017-06-07 21:55       ` Rafael J. Wysocki
