public inbox for linux-kernel@vger.kernel.org
From: Peter Zijlstra <peterz@infradead.org>
To: paulmck@linux.vnet.ibm.com
Cc: Damien Wyart <damien.wyart@free.fr>, Ingo Molnar <mingo@elte.hu>,
	Mike Galbraith <efault@gmx.de>,
	linux-kernel@vger.kernel.org
Subject: Re: Very high CPU load when idle with 3.0-rc1
Date: Wed, 01 Jun 2011 18:58:33 +0200	[thread overview]
Message-ID: <1306947513.2497.624.camel@laptop> (raw)
In-Reply-To: <20110601143743.GA2274@linux.vnet.ibm.com>

On Wed, 2011-06-01 at 07:37 -0700, Paul E. McKenney wrote:

> > > I considered that, but working out when it is OK to deboost them is
> > > decidedly non-trivial. 
> > 
> > Where exactly is the problem there? The boost lasts for as long as it
> > takes to finish the grace period, right? There's a distinct set of
> > callbacks associated with each grace-period, right? In which case you
> > can de-boost your thread the moment you're done processing that set.
> > 
> > Or am I simply confused about how all this is supposed to work?
> 
> The main complications are: (1) the fact that it is hard to tell exactly
> which grace period to wait for, this one or the next one, and (2) the
> fact that callbacks get shuffled when CPUs go offline.

I can't say I would worry too much about 2; hotplug and RT don't really
go hand-in-hand anyway.

On 1, however: is that due to the boost condition?

I must admit my thinking there is somewhat fuzzy, since I just realized
I don't actually know the exact condition for starting to boost. But
suppose we boost because the callback queue is too long; then waiting
for the current grace period might not reduce the queue length, since
most callbacks might actually belong to the next one.

If however the condition is grace period duration, then completion of
the current grace period is sufficient, since the whole boost condition
is defined as such. [ if the next one also exceeds the time limit,
that's a whole new boost ]

> That said, it might be possible if we are willing to live with some
> approximate behavior.  For example, always waiting for the next grace
> period (rather than the current one) to finish, and boosting through the
> extra callbacks in case where a given CPU "adopts" callbacks that must
> be boosted when that CPU also has some callbacks whose priority must be
> boosted and some that need not be.

That might make sense, but I must admit to not fully understanding the
whole current/next thing yet.

> The reason I am not all that excited about taking this approach is that
> it doesn't help worst-case latency.

Well, not running at the topmost prio does help those tasks running at
a higher priority, so in that regard it does reduce the jitter for a
number of tasks.

Also, I guess there's the whole question of what prio to boost to,
which I somehow totally forgot about. That's non-trivial in its own
right, since there isn't really anyone blocked on grace period
completion (although in the special case of someone calling sync_rcu it
is clear).
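One plausible reading of that special case can be sketched as follows:
when tasks are known to be blocked waiting for the grace period, boost
toward the highest-priority waiter; otherwise fall back to some
configured default. This is a hypothetical toy model, not the kernel's
actual policy; the function name, the default, and the
higher-number-is-higher-priority convention are all assumptions.

```c
/* Toy sketch of choosing a boost priority.  In this model a larger
 * number means a higher priority; DEFAULT_BOOST_PRIO is hypothetical. */
#include <assert.h>

#define DEFAULT_BOOST_PRIO 1	/* hypothetical fallback RT priority */

static int pick_boost_prio(const int *waiter_prios, int nr_waiters)
{
	int prio = DEFAULT_BOOST_PRIO;

	/* If someone is blocked on grace-period completion (e.g. in
	 * sync_rcu), boost to the highest such waiter's priority. */
	for (int i = 0; i < nr_waiters; i++)
		if (waiter_prios[i] > prio)
			prio = waiter_prios[i];

	return prio;	/* no waiters: just the configured default */
}
```

The hard case the mail alludes to is the empty-waiter branch: with no
one demonstrably blocked, any default is a policy guess rather than a
priority-inheritance decision.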

> Plus the current implementation is just a less-precise approximation.
> (Sorry, couldn't resist!)

Appreciated. On a similar note, I still need to actually look at all
this (preempt) tree-rcu stuff to learn how exactly it works.



Thread overview: 22+ messages
2011-05-30  5:59 Very high CPU load when idle with 3.0-rc1 Damien Wyart
2011-05-30 11:34 ` Peter Zijlstra
2011-05-30 12:17   ` Ingo Molnar
2011-05-30 13:10   ` Mike Galbraith
2011-05-30 16:23   ` Paul E. McKenney
2011-05-30 16:41     ` Paul E. McKenney
2011-05-30 16:47       ` Peter Zijlstra
2011-05-30 16:46     ` Peter Zijlstra
2011-05-30 21:29       ` Paul E. McKenney
2011-05-30 21:35         ` Peter Zijlstra
2011-05-31  1:45           ` Paul E. McKenney
2011-05-30 17:19     ` Peter Zijlstra
2011-05-30 21:28       ` Paul E. McKenney
2011-05-30 21:33         ` Peter Zijlstra
2011-05-31  1:45           ` Paul E. McKenney
2011-06-01 11:05             ` Peter Zijlstra
2011-06-01 14:37               ` Paul E. McKenney
2011-06-01 16:58                 ` Peter Zijlstra [this message]
2011-06-01 18:19                   ` Paul E. McKenney
2011-05-31 12:30   ` [tip:core/urgent] rcu: Cure load woes tip-bot for Peter Zijlstra
2011-05-30 11:50 ` Very high CPU load when idle with 3.0-rc1 Damien Wyart
2011-05-30 12:22 ` Morten P.D. Stevens
